# Fairness Explainability using Optimal Transport with Applications in Image Classification

Philipp Ratz, François Hu, Arthur Charpentier

Published: 2023-08-22 | arXiv: http://arxiv.org/abs/2308.11090v2 (2308.11090)
###### Abstract
Ensuring trust and accountability in Artificial Intelligence systems demands explainability of its outcomes. Despite significant progress in Explainable AI, human biases still taint a substantial portion of its training data, raising concerns about unfairness or discriminatory tendencies. Current approaches in the field of Algorithmic Fairness focus on mitigating such biases in the outcomes of a model, but few attempts have been made to try to explain _why_ a model is biased. To bridge this gap between the two fields, we propose a comprehensive approach that uses optimal transport theory to uncover the causes of discrimination in Machine Learning applications, with a particular emphasis on image classification. We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions. This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence _on_ the bias. Taking advantage of this interplay of enforcing and explaining fairness, our method holds significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
**Keywords:** Algorithmic Fairness, Explainable Artificial Intelligence, Image Classification
## 1 Introduction
Machine Learning (ML) algorithms are widely used in critical domains ranging from recruiting and law enforcement to personalized medicine Berk (2012); Garcia-Penalvo et al. (2018); Esteva et al. (2021). Their usage is not beyond debate however, as fairness-related issues remain poorly understood. Further, ML algorithms can perpetuate societal stereotypes and discriminatory practices by associating protected attributes, such as gender and race, with predictions - even if the attribute in question is not directly used in the modelling process Pedreshi et al. (2008); Noiret et al. (2021); Mehrabi et al. (2021). This can lead to discriminatory behavior towards certain subgroups, where examples include sexist recruiting algorithms, facial recognition systems that perform poorly for females with darker skin, and challenges in recognizing specific subgroups in self-driving cars Goodall (2014); Nyholm and Smids (2016); Dastin (2018). None of these algorithms were designed with explicit malice, but nevertheless delivered biased results. As Kearns and Roth (2019) put it, "_machine learning won't give you anything like gender neutrality 'for free' that you didn't explicitly ask for_". Whereas there has been notable progress in the elimination of biases from black box models, challenges persist in identifying the source of biases and explaining _why_ unfair outcomes materialized Ali et al. (2023). Especially in fields where complex black-box models are employed, explaining unfairness is often reliant on testing hypotheses one-by-one, which can quickly become
infeasible in the era of big data. A key reason for this is that existing methods aim to explain how a given _score_ was constructed, not how a _bias_ was introduced. Additionally, criticism has been raised regarding the potential misuse of standard explainable AI tools, which can result in misleading explanations that validate incorrect models Alvarez Melis and Jaakkola (2018); Rudin (2019). This issue, often referred to as "_fairwashing_" Aivodji et al. (2019, 2021), underscores the importance of exercising caution in the application of such tools.
To address both the fairness and explainability concerns, we take a two-fold approach in this article. As a **first step** (**A**), we use existing research on mitigating the impact of sensitive attributes within a _fairness-aware_ framework and extend this explicitly to pre-trained models using optimal transport theory Villani (2021). We then turn our attention to explainability and issues arising from fairwashing in a **second step** (**B**). Here, we aim to model the algorithmic bias directly, providing valuable insights into the root causes behind the biases. Importantly, our work fulfills both local (specific model outputs) and global (model insights from data) explanation requirements, as highlighted in Arrieta et al. (2020). The combination of these two steps, (**A**) and (**B**), has the advantage that algorithmic fairness can be enforced and steps to mitigate the source of the biases can be put into place. By filling this important gap, we contribute to enhancing fairness, transparency and accountability in Artificial Intelligence (AI) systems. Note that throughout this article, we focus on applications involving images, rather than traditional tabular data, as it allows us to showcase the effectiveness of our approach. Specifically, with images, a more direct interpretation is possible, even without domain knowledge. However, the techniques presented here can easily be applied to standard tabular data sets as well.
### Scoping and Definitions
The field of Algorithmic Fairness considers different metrics for distinct goals. Here we opt to consider fairness at the distributional level, specifically, we focus on the Demographic Parity (DP) notion of fairness Calders et al. (2009). This strict definition aims to achieve independence between sensitive attributes and predictions without relying on labels. Formally, let \((\mathbf{X},S,Y)\) be a random tuple with distribution \(\mathbb{P}\), where \(\mathbf{X}\in\mathcal{X}\subset\mathbb{R}^{d}\) represents the features, \(S\in\mathcal{S}\subset\mathbb{N}\) a sensitive feature, considered discrete, across which we would like to impose fairness and \(Y\in\mathcal{Y}:=\{0,1\}\) represents the task to be estimated. As an illustration, consider Figure 1. Our study primarily focuses on binary classification tasks; however, the methodologies and techniques discussed can be readily extended and generalized to regression tasks or multi-task problems. Also note that we include the sensitive variable \(S\) in the model, which is a somewhat paradoxical feature in Algorithmic Fairness. However, both empirical studies Lipton et al. (2018) and theoretical research Gaucher et al. (2023) have consistently shown that de-biasing algorithms that do not consider the sensitive attribute, referred to as _fairness-unaware_, exhibit inferior fairness compared to _fairness-aware_ approaches Dwork et al. (2012) that leverage the sensitive feature.
In binary classification, we aim to determine the probability response to get \(1\) in \(\mathcal{Z}:=[0,1]\) for classifying \((\mathbf{X},S)\), also known as a _probabilistic classifier_ (or _soft_ classifier). To achieve strong DP-fairness for a given predictor \(f\), the objective is to ensure that \(S\) and \(f(\mathbf{X},S)\) are independent. This definition is more flexible than its commonly used _weak_ counterpart. Weak DP-fairness aims to establish the independence of \(S\) and the _hard_ classifier \(c_{f}(\mathbf{X},S):=\mathds{1}\{f(\mathbf{X},S)\geq 0.5\}\) but restricts the user to a given threshold. Throughout our study, our goal is to satisfy strong DP-fairness, formally defined as the following condition for all \(s,s^{\prime}\in\mathcal{S}\) and \(u\in\mathbb{R}\):
\[\mathbb{P}\left(f(\mathbf{X},S)\leq u|S=s\right)=\mathbb{P}\left(f(\mathbf{X},S)\leq u|S=s ^{\prime}\right)\ . \tag{1}\]
Further, we define \(\mathcal{F}\) as the set of soft classifiers of the form \(f:\mathcal{X}\times\mathcal{S}\rightarrow\mathcal{Z}\). Also, given \(s\in\mathcal{S}\) we denote,
* \(\nu_{f}\) (resp. \(\nu_{f|s}\)) the probability measure of \(f(\mathbf{X},S)\) (resp. \(f(\mathbf{X},S)|S=s\));
* \(F_{f|s}(u):=\mathbb{P}\left(f(\mathbf{X},S)\leq u|S=s\right)\) its cumulative distribution function (CDF);
* \(Q_{f|s}(v):=\inf\{u\in\mathbb{R}:F_{f|s}(u)\geq v\}\) the associated quantile function.
Throughout the remainder of the paper, we assume that for any \(f\in\mathcal{F}\), both the measures \(\nu_{f}\) and \(\nu_{f|s}\) have finite second-order moments and their densities exist. With these notations, the DP notion of fairness defined in Equation (1) is rewritten as \(F_{f|s}(u)=F_{f|s^{\prime}}(u)\) for all \(s,s^{\prime}\in\mathcal{S}\) and \(u\in\mathbb{R}\).
### Related Work
#### 1.2.1 Algorithmic Fairness
In recent years, research on algorithmic fairness has grown significantly. The most common approaches can broadly be categorized into pre-processing, in-processing, and post-processing methods. Pre-processing ensures fairness in the input data by removing biases before applying ML models Park et al. (2021); Qiang et al. (2022). In-processing methods incorporate fairness constraints during model training Wang et al. (2020); Joo and Karkkainen (2020), where fairness constraints usually modify the loss landscape and prevent the model from learning an unfair solution. Whereas these two approaches focus on the model parameters themselves, post-processing techniques aim to achieve fairness through modifications of the final scores Karako and Manggala (2018); Kim et al. (2019). This has the advantage that it works with any kind of estimator, including (partially) pre-trained models. Especially in computationally intensive fields, transfer learning or partial fine-tuning are prevalent to reduce training time and improve generalization. Due to this, post-processing methods are easily integrable into a standard workflow at low computational cost; hence we focus our work on this latter category. Of particular importance is the literature using Optimal Transport, a mathematical framework for measuring distributional differences. Intuitively, the goal is to _transport_ unfair scores to fair ones while minimizing the effects of this intervention to maintain predictive accuracy. In regression, methods like Chzhen et al. (2020) and Gouic et al. (2020) minimize
Figure 1: Attributes of an image: here we consider the image to be the features (that is, \(\mathbf{X}\)), and the sensitive attribute (here, gender) as well as the label (young) to be categorical variables.
Wasserstein distance to reduce discrimination. Similarly, in classification, Chiappa et al. (2020) and Gaucher et al. (2023) leverage optimal transport to achieve fair scores. Recent work by Hu et al. (2023) also achieves fairness in multi-task learning through joint optimization. Though still a nascent field, applications thereof are becoming more common Zehlike et al. (2020); Charpentier et al. (2023). However, despite the extensive use of optimal transport theory in algorithmic fairness, there is only limited research that delves into applications that go beyond the standard case of tabular data, a first shortcoming we address in this article.
#### 1.2.2 Explainable AI
We focus on creating a simple fairness explanation method for computer vision, narrowing down our exploration to two Explainable AI (XAI) subfields. Among the most widely known methods are _model-agnostic_ techniques, such as LIME Ribeiro et al. (2016); Garreau and Mardaoui (2021) and SHAP Shapley (1997); Lundberg and Lee (2017), that do not depend on specific assumptions about the model's architecture. Though these approaches can be extended to the analysis of images, a range of XAI methods have been developed more specifically for the use of deep neural networks. These methods commonly focus on local explainability, which often involves highlighting important pixels (referred to as _attention maps_) for individual task predictions. Global explainability can then be achieved through the identification of significant regions across the whole prediction analysis. As these approaches leverage the specific architecture of neural networks they are referred to as _model-specific_ approaches. Notable examples include Grad-CAM Selvaraju et al. (2017) and its various variants, like Grad-CAM++ Chattopadhay et al. (2018) and Score-CAM Wang et al. (2020). Recent work by Franco et al. (2021) proposes a fair and explainable system, generating two attention maps for local insight. Global explainability is achieved through t-SNE representations, but explicit discrimination explanations are often lacking, raising potential _fairwashing_ concerns Alikhademi et al. (2021). Our approach sets itself apart from the aforementioned methods by directly generating attention maps that specifically describe the model's unfair decisions, offering a clearer and more focused explanation for discriminatory outcomes.
### Contributions and Outline
In summary, we extend the literature of fairness-aware algorithms based on optimal transport theory through the following points:
* **Fair decision-making**: We adapt a post-processing model using optimal transport theory for computer vision tasks, bringing the theory closer to the community. We ensure fair and unbiased outcomes and show that the solution is optimal with respect to the relative rankings, and independent of the bias.
* **Explainable artificial intelligence**: Our main contribution is to use the optimal transport plan to develop an XAI approach for identifying changes in the data, describing unfair results. In computer vision applications, our method directly highlights the regions most responsible for the stated bias, facilitating direct identification of discrimination. The method is also easily extendable to tabular data sets.
The remainder of this article is structured as follows. First, we provide a brief background on optimal transport theory and establish its connection to algorithmic fairness. Then, we turn our
focus to our XAI methodologies to uncover the causes of discrimination. Finally, we showcase their performance through numerical experiments on the CelebA dataset.
## 2 Background on Optimal Transport
In this section, we present the fundamental concepts from optimal transportation theory. Specifically, we focus on the Wasserstein distance and give a brief overview of notable results in optimal transport theory with one-dimensional measures, where all the main results can be found in Villani et al. (2009); Santambrogio (2015); Villani (2021).
### Wasserstein Distances
Let \(\nu_{f_{1}}\) and \(\nu_{f_{2}}\) be two probability measures on \(\mathcal{Z}\). The squared Wasserstein distance (cf. Santambrogio (2015), Definition 5.5.1) between \(\nu_{f_{1}}\) and \(\nu_{f_{2}}\) is defined as
\[\mathcal{W}_{2}^{2}(\nu_{f_{1}},\nu_{f_{2}})=\inf_{\pi\in\Pi(\nu_{f_{1}},\nu_{ f_{2}})}\mathbb{E}_{(Z_{1},Z_{2})\sim\pi}\left(Z_{2}-Z_{1}\right)^{2}\ \,\]
where \(\Pi(\nu_{f_{1}},\nu_{f_{2}})\) is the set of distributions on \(\mathcal{Z}\times\mathcal{Z}\) having \(\nu_{f_{1}}\) and \(\nu_{f_{2}}\) as marginals. If the infimum is achieved, the resulting coupling is referred to as the optimal coupling between \(\nu_{f_{1}}\) and \(\nu_{f_{2}}\). If either of the predictors belongs to \(\mathcal{F}\), the optimal coupling can be determined (refer to Villani (2021), Thm 2.12) as follows: if \(Z_{1}\sim\nu_{f_{1}}\) and \(Z_{2}\sim\nu_{f_{2}}\), where \(f_{2}\in\mathcal{F}\), there exists a mapping \(T:\mathbb{R}\to\mathbb{R}\) such that
\[\mathcal{W}_{2}^{2}(\nu_{f_{1}},\nu_{f_{2}})=\mathbb{E}\left(Z_{2}-T(Z_{2}) \right)^{2}\ \,\]
with \(T(Z_{2})\sim\nu_{f_{1}}\). We call \(T\) the optimal transport map from \(\nu_{f_{2}}\) to \(\nu_{f_{1}}\). Moreover, in the univariate setting, a closed-form solution is explicitly provided as: \(T(\cdot)=Q_{f_{1}}\circ F_{f_{2}}(\cdot)\).
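For intuition, this closed form can be implemented directly from two score samples. Below is a minimal numpy sketch (all names are illustrative, not from the paper) that builds the empirical map \(T=Q_{f_{1}}\circ F_{f_{2}}\).

```python
import numpy as np

def ot_map_1d(samples_src, samples_tgt):
    """Empirical univariate optimal transport map T = Q_tgt ∘ F_src.

    Maps values distributed like `samples_src` onto the distribution of
    `samples_tgt` by composing the empirical source CDF with the
    empirical target quantile function (univariate closed form)."""
    sorted_src = np.sort(samples_src)
    sorted_tgt = np.sort(samples_tgt)

    def T(z):
        # F_src(z): empirical CDF of the source sample at z
        u = np.searchsorted(sorted_src, z, side="right") / len(sorted_src)
        # Q_tgt(u): empirical quantile of the target sample
        return np.quantile(sorted_tgt, np.clip(u, 0.0, 1.0))

    return T

# Illustration: transport scores from N(0.3, 0.1) onto N(0.6, 0.05)
rng = np.random.default_rng(0)
z1 = rng.normal(0.6, 0.05, 10_000)   # target measure nu_{f_1}
z2 = rng.normal(0.3, 0.10, 10_000)   # source measure nu_{f_2}
T = ot_map_1d(z2, z1)
print(T(0.3))  # ≈ 0.6, the corresponding point of the target distribution
```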
### Wasserstein Barycenters
Throughout this article, we will frequently make use of _Wasserstein barycenters_. For a given family of \(K\) measures \((\nu_{f_{1}},\ldots,\nu_{f_{K}})\) and weights \(\mathbf{w}=(w_{1},\ldots,w_{K})\in\mathbb{R}_{+}^{K}\) such that \(\sum_{s=1}^{K}w_{s}=1\), \(Bar(w_{s},\nu_{f_{s}})_{s=1}^{K}\) denotes the Wasserstein barycenter of these measures, which is the minimizer of
\[Bar(w_{s},\nu_{f_{s}})_{s=1}^{K}=\operatorname*{argmin}_{\nu}\ \sum_{s=1}^{K}w_{s} \cdot\mathcal{W}_{2}^{2}\left(\nu_{f_{s}},\nu\right)\ \.\]
The work in Agueh and Carlier (2011) shows that in our configuration, with \(f_{s}\in\mathcal{F}\), this barycenter exists and is unique. Put into words, the Wasserstein barycenter can be used to find a representative distribution that lies between multiple given distributions. For our applications, this will ensure that the predictive distributions given any values \(s\) will coincide. The optimal transport problem then aims to minimize the total amount of changes required to achieve this.
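Because univariate quantile functions average linearly under the \(\mathcal{W}_{2}\) barycenter, \(Q_{\text{bar}}(v)=\sum_{s}w_{s}Q_{f_{s}}(v)\); a minimal one-function numpy sketch (names illustrative):

```python
import numpy as np

def barycenter_quantile(samples_by_group, weights, v):
    """Quantile function of the univariate Wasserstein barycenter:
    Q_bar(v) = sum_s w_s * Q_{f_s}(v), the weighted average of the
    group-wise empirical quantile functions."""
    return sum(w * np.quantile(np.asarray(x), v)
               for w, x in zip(weights, samples_by_group))

# e.g. barycenter_quantile([scores_group1, scores_group2], [0.5, 0.5], v=0.9)
```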
## 3 Fairness Projection using Optimal Transport
Optimal transport theory provides a means to ensure specific forms of algorithmic fairness. We provide a short summary of some necessary concepts; all the main results can be found in Chiappa et al. (2020); Chzhen et al. (2020) and Gouic et al. (2020).
### Unfairness and Risk: a Warm-up
Paraphrasing the quote above that machine learning will not give a fair classifier for free, we first need to define both the objective of the classification and the concept of DP in a unified manner. DP will be used to determine the fairness of a classifier.
**Definition 1** (Demographic Parity): _Given a soft classifier \(f\), its unfairness is quantified by_
\[\mathcal{U}(f)=\max_{s,s^{\prime}\in\mathcal{S}}\sup_{u\in\mathcal{Z}}\left|F_ {f|s}(u)-F_{f|s^{\prime}}(u)\right|\enspace, \tag{2}\]
_and \(f\) is called (DP-)fair if and only if \(\mathcal{U}(f)=0\)._
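Equation (2) admits a direct plug-in estimator, the largest Kolmogorov-Smirnov distance between group-wise empirical CDFs (the same quantity used again in Section 5.3). A minimal numpy sketch, with illustrative names:

```python
import numpy as np
from itertools import combinations

def unfairness(scores, groups):
    """Empirical DP-unfairness of Eq. (2): the largest Kolmogorov-Smirnov
    distance between the score CDFs of any two sensitive groups."""
    scores, groups = np.asarray(scores), np.asarray(groups)
    grid = np.sort(scores)  # evaluate all group CDFs on the pooled scores
    cdfs = [np.searchsorted(np.sort(scores[groups == s]), grid, side="right")
            / (groups == s).sum()
            for s in np.unique(groups)]
    return max(np.max(np.abs(F - G)) for F, G in combinations(cdfs, 2))
```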
Consider \(f^{*}(\mathbf{X},S):=\mathbb{E}\left[Y|\mathbf{X},S\right]\), the Bayes rule that minimizes the following squared risk
\[\mathcal{R}(f):=\mathbb{E}\left(Y-f(\mathbf{X},S)\right)^{2}\enspace.\]
The associated _hard_ classifier \(c_{f^{*}}(\mathbf{X},S)\) has the property of minimizing the risk of misclassification, which makes the squared risk applicable for both regression and classification (for more details, see Gaucher et al. (2023)). In line with this, we adopt a popular approach to algorithmic fairness by incorporating DP-fairness principles into risk minimization Chzhen et al. (2020); Gouic et al. (2020), that is:
\[\min_{f\in\mathcal{F}}\left\{\mathcal{R}(f):\mathcal{U}(f)=0\right\}\enspace.\]
By construction, this optimization effectively balances both risk \(\mathcal{R}\) and unfairness \(\mathcal{U}\), leading to improved predictive performance, reduced biases, and mitigation of potentially offensive or discriminatory errors.
### Optimal Fair Projection: Theoretical and Empirical Estimators
The two objectives, fairness and predictive accuracy, are often in conflict with one another. Most of the recent work in algorithmic fairness has therefore focused on finding either a precise joint solution or an optimal trade-off between the two. When starting from the best possible predictor without any constraints, we refer to its optimal _fair_ counterpart as the _optimal fair projection_. This estimator should minimize the unfairness across the sensitive variables, while maintaining the best predictive accuracy under this constraint. Much work has been done within the field to achieve univariate optimal fair projections. Let \(p_{s}=\mathbb{P}(S=s)\) and let \(f_{B}\in\mathcal{F}\), where its measure is the Wasserstein barycenter \(\nu_{f_{B}}:=Bar(p_{s},\nu_{f^{*}|s})_{s\in\mathcal{S}}\). Then, studies conducted by Chzhen et al. (2020) and Gouic et al. (2020) demonstrate that
\[f_{B}=\operatorname*{argmin}_{f\in\mathcal{F}}\left\{\mathcal{R}(f):\mathcal{U }(f)=0\right\}\enspace.\]
Therefore, \(f_{B}\) represents the optimal fair predictor in terms of minimizing unfairness-risk. Previous studies also offer a precise closed-form solution: for all \((\mathbf{x},s)\in\mathcal{X}\times\mathcal{S}\),
\[f_{B}(\mathbf{x},s)=\left(\sum_{s^{\prime}=1}^{K}p_{s^{\prime}}Q_{f^{*}|s^{\prime} }\right)\circ F_{f^{*}|s}\left(f^{*}(\mathbf{x},s)\right)\enspace. \tag{3}\]
To employ these results on real data, the plug-in estimator of the Bayes rule \(f^{*}\) is given by \(\hat{f}\), which corresponds to any DP-unconstrained ML model trained on a training set \(\{(\mathbf{x}_{i},s_{i},y_{i})\}_{i=1}^{n}\) of \(n\) i.i.d. realizations of \((\mathbf{X},S,Y)\). The empirical counterpart is then defined as:
\[\widehat{f_{B}}(\mathbf{x},s)=\left(\sum_{s^{\prime}=1}^{K}\hat{p}_{s^{\prime}} \hat{Q}_{\hat{f}|s^{\prime}}\right)\circ\hat{F}_{\hat{f}|s}\left(\hat{f}(\mathbf{x },s)\right)\ \, \tag{4}\]
where \(\hat{p}_{s}\), \(\hat{F}_{\hat{f}|s}\) and \(\hat{Q}_{\hat{f}|s}\) correspond to the empirical counterparts of \(p_{s}\), \(F_{f^{*}|s}\) and \(Q_{f^{*}|s}\). Note that, with the exception of \(\hat{f}\), the remaining quantities can be constructed using an unlabeled _calibration_ dataset, denoted as \(\mathcal{D}^{\text{pool}}:=\{(\mathbf{X}_{i},S_{i})\}_{i=1}^{N}\), which consists of \(N\) i.i.d. copies of \((\mathbf{X},S)\). The pseudo-code of this approach is provided in Algorithm 1. We also visualize a possible model flow in Figure 2, where the _calibration layer_ corresponds to the inner workings of Algorithm 1 and specifically Equation 4. Chzhen et al. (2020) show that if the estimator \(\hat{f}\) is a good proxy for \(f^{*}\), then under mild assumptions on the distribution \(\mathbb{P}\), the calibrated post-processing approach \(\widehat{f_{B}}\) is a good estimator of \(f_{B}\), enabling accurate and fair estimation of the instances. Gaucher et al. (2023) demonstrate that the hard classifier \(c_{f_{B}}\) maximizes accuracy under the DP-constraint, and the classifier \(c_{\widehat{f_{B}}}\) is proven to be a good estimator.
```
Input: instance \((\mathbf{x},s)\), base estimator \(\hat{f}\), unlabeled data \(\mathcal{D}^{\text{pool}}=\{(\mathbf{x}_{i},s_{i})\}_{i=1}^{N}\).
Step 0. Split \(\mathcal{D}^{\text{pool}}\) to construct the group-wise samples \(\{\mathbf{x}_{i}^{s}\}_{i=1}^{N_{s}}\sim\mathbb{P}_{\mathbf{X}|S=s}\) for any \(s\in\mathcal{S}\), with \(N_{s}\) the number of images corresponding to \(S=s\);
Step 1. Compute the frequencies \((\widehat{p}_{s})_{s}\) from \(\{s_{i}\}_{i=1}^{N}\);
Step 2. Estimate \((\hat{F}_{\hat{f}|s})_{s}\) and \((\hat{Q}_{\hat{f}|s})_{s}\) from \(\{\mathbf{x}_{i}^{s}\}_{i=1}^{N_{s}}\);
Step 3. Compute \(\widehat{f_{B}}\) according to Eq. (4);
Output: fair predictor \(\widehat{f_{B}}(\mathbf{x},s)\) at point \((\mathbf{x},s)\).
```
**Algorithm 1** Fairness projection.
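For concreteness, a minimal numpy sketch of Algorithm 1 and Eq. (4) is given below; it is one possible reading of the pseudo-code (all names are illustrative), not the authors' released implementation.

```python
import numpy as np

def fair_projection(score, s, cal_scores, cal_groups):
    """Fairness projection of Eq. (4): map a raw score f̂(x, s) to the
    Wasserstein-barycenter score via (Σ_s' p̂_s' Q̂_s') ∘ F̂_s.

    `cal_scores` / `cal_groups`: raw scores and sensitive labels of the
    unlabeled calibration set D^pool (no task labels y are needed)."""
    cal_scores, cal_groups = np.asarray(cal_scores), np.asarray(cal_groups)
    labels, counts = np.unique(cal_groups, return_counts=True)
    p_hat = counts / counts.sum()                              # Step 1
    own = np.sort(cal_scores[cal_groups == s])
    u = np.searchsorted(own, score, side="right") / len(own)   # Step 2: F̂_s
    u = np.clip(u, 0.0, 1.0)
    # Step 3: barycenter quantile = p̂-weighted average of group quantiles
    return sum(p * np.quantile(cal_scores[cal_groups == g], u)
               for p, g in zip(p_hat, labels))
```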
## 4 Explainable AI using the Transportation Plan
Whereas the results from above enable the correction of given scores, they can be considered a treatment of the symptoms rather than the cause of unfairness. To this end, we extend the fairness procedures from above to explicitly pinpoint these sources of bias. The key idea that we pursue is that the transport map used in Equation (4) can be used to construct group-wise counterfactual estimates. This approach clarifies differences between pre- and post-processed scores, enabling the use of established XAI methods to uncover the underlying causes of unfairness.
### Local explainability
To isolate the features responsible for discrimination, we extend the fair projection method by introducing an auxiliary learning task to detect the source of biases directly. The binary task, denoted \(\tilde{Y}\in\tilde{\mathcal{Y}}:=\{0,1\}\), is estimated using the distributions of the unfair predictor \(f^{*}(\mathbf{X},S)\) and the
DP-fair predictor \(f^{*}_{B}(\mathbf{X},S)\). This task can be chosen according to specific goals, and we provide a sample of possibilities in Table 1. The formulation
\[d_{B}(\mathbf{X},S):=f^{*}_{B}(\mathbf{X},S)-f^{*}(\mathbf{X},S)\enspace,\]
then offers an intuitive explanation of unfairness. As an example, in the context of a wage model, a positive value of \(d_{B}(\mathbf{X},S)\) indicates group discrimination against an individual \((\mathbf{X},S)\), while a negative value might indicate favoritism. Its magnitude \(|d_{B}(\mathbf{X},S)|\) can be interpreted as the "degree" of discrimination or favoritism, and its squared version as an indicator for extremes. The proposition below provides an interpretation of the quantity \(d_{B}(\mathbf{X},S)\) within the probabilistic framework, specifically in the context of a binary sensitive attribute scenario.
**Proposition 4.1** (Bias detection characterization): _Suppose \(\mathcal{S}=\{1,2\}\), i.e., a binary sensitive feature scenario. Given \((\mathbf{x},s)\in\mathcal{X}\times\mathcal{S}\) and \(\bar{s}\in\mathcal{S}-\{s\}\), there exists an optimal transport map from \(\nu_{f^{*}|s}\) to \(\nu_{f^{*}|\bar{s}}\), denoted \(T_{s\to\bar{s}}:\mathcal{Z}\to\mathcal{Z}\), such that \(d_{B}(\mathbf{x},s)=f^{*}_{B}(\mathbf{x},s)-f^{*}(\mathbf{x},s)\) can be rewritten as,_
\[d_{B}(\mathbf{x},s)=p_{\bar{s}}\cdot\left(\ T_{s\to\bar{s}}\circ f^{*}(\mathbf{x},s)-f ^{*}(\mathbf{x},s)\ \right)\enspace, \tag{5}\]
_where \(T_{s\to\bar{s}}\circ f^{*}(\mathbf{X},s)\sim\nu_{f^{*}|\bar{s}}\)._
**Proof** Given \((\mathbf{x},s)\in\mathcal{X}\times S\), we are interested in the quantity \(d_{B}(\mathbf{x},s)=f^{*}_{B}(\mathbf{x},s)-f^{*}(\mathbf{x},s)\). For simplicity, we denote \(u_{s}(\mathbf{x})=F_{f^{*}|s}(f^{*}(\mathbf{x},s))\). Then, given \(\bar{s}\in\mathcal{S}-\{s\}\), we have,
\[\begin{aligned} f^{*}_{B}(\mathbf{x},s)-f^{*}(\mathbf{x},s)&=\left(\sum_{s^{\prime}=1,2}p_{s^{\prime}}Q_{f^{*}|s^{\prime}}\right)\circ F_{f^{*}|s}\left(f^{*}(\mathbf{x},s)\right)-f^{*}(\mathbf{x},s)\\ &=p_{\bar{s}}\cdot Q_{f^{*}|\bar{s}}(u_{s}(\mathbf{x}))+p_{s}\cdot Q_{f^{*}|s}(u_{s}(\mathbf{x}))-Q_{f^{*}|s}(u_{s}(\mathbf{x}))\\ &=p_{\bar{s}}\cdot Q_{f^{*}|\bar{s}}(u_{s}(\mathbf{x}))-p_{\bar{s}}\cdot Q_{f^{*}|s}(u_{s}(\mathbf{x}))\\ &=p_{\bar{s}}\cdot\left(Q_{f^{*}|\bar{s}}(u_{s}(\mathbf{x}))-f^{*}(\mathbf{x},s)\right)\enspace.\end{aligned}\]
Since \(f^{*}\in\mathcal{F}\), by definition of the Wasserstein distance, there exists a transport map \(T_{s\to\bar{s}}:\mathcal{Z}\to\mathcal{Z}\) such that \(T_{s\to\bar{s}}(\cdot)=Q_{f^{*}|\bar{s}}\circ F_{f^{*}|s}(\cdot)\) with \(T_{s\to\bar{s}}\circ f^{*}(\mathbf{X},s)\sim\nu_{f^{*}|\bar{s}}\), which concludes the proof. \(\blacksquare\)
In a binary sensitive framework, Proposition 4.1 asserts that \(|d_{B}(\mathbf{X},s)|\) depends on how much the DP-unconstrained prediction for \((\mathbf{X},s)\) deviates from its projection onto \(\nu_{f^{*}|\bar{s}}\). In other words, the _r.h.s._ of Equation (5) measures the disparity between the initial prediction and the projected prediction, where features \(\mathbf{X}|S=s\) are aligned with \(\mathbf{X}|S=\bar{s}\). As a concrete example, when changing a male individual's gender to female, the projection also modifies related attributes (such as height or weight) to match a female counterpart, ensuring comparability when these attributes naturally differ on a group level. Note that \(p_{\bar{s}}\) enhances this bias for over-represented \(\bar{s}\) groups, but reduces its significance for under-represented ones.
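Proposition 4.1 can be checked numerically on synthetic scores: the barycenter displacement of a group-\(s\) instance equals \(p_{\bar{s}}\) times its displacement under the group-to-group transport map. A toy sketch, with arbitrarily chosen score distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = np.sort(rng.beta(2, 5, 50_000))   # scores f*(X, S) | S = 1
f2 = np.sort(rng.beta(5, 2, 50_000))   # scores f*(X, S) | S = 2
p1 = p2 = 0.5                          # balanced groups, so p_sbar = 0.5

z = 0.30                                            # raw score of a group-1 instance
u = np.searchsorted(f1, z, side="right") / len(f1)  # u_1(x) = F_{f*|1}(z)
T12_z = np.quantile(f2, u)                          # T_{1->2}(z) = Q_{f*|2}(u)
f_B_z = p1 * np.quantile(f1, u) + p2 * T12_z        # barycenter score, Eq. (3)

# l.h.s. and r.h.s. of Eq. (5) agree up to sampling noise:
print(f_B_z - z, p2 * (T12_z - z))
```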
Alternatively, the new task described in Equation (5) can be viewed as a decomposition of biases into implicit and explicit components. Indeed, applying the triangle inequality we have
\[|d_{B}(\mathbf{x},s)|\leq p_{\bar{s}}\cdot\left(\ |T_{s\to\bar{s}}\circ f^{*}(\mathbf{x},s)-f^{*}(\bm {x},\bar{s})|+|f^{*}(\mathbf{x},\bar{s})-f^{*}(\mathbf{x},s)|\ \right)\enspace.\]
In this context, implicit bias, which refers to the hidden influence of \(s\), is measured as the difference between the two values \(T_{s\to\bar{s}}\circ f^{*}(\mathbf{X},s)\) and \(f^{*}(\mathbf{X},\bar{s})\), which both follow the same distribution \(\nu_{f^{*}|\bar{s}}\) (although
not independent). This implicit bias represents the variation in predicted outcomes when the features \(\mathbf{X}\) under \(S=s\) are aligned with those under \(S=\bar{s}\), in contrast to unconstrained predictions where \(s\) is simply replaced with \(\bar{s}\) without aligning the features. Explicit bias, conversely, is simplistically expressed as \(f^{*}(\mathbf{X},\bar{s})-f^{*}(\mathbf{X},s)\). This measurement aligns with the principle of _ceteris paribus_, meaning "_all other things being equal_". However, when considered in isolation, this condition can lead to unrealistic situations. We provide a visual explanation in Appendix A.
**Data-driven procedure.** In real datasets, we use plug-in estimators from Section 3.2 to estimate \(f^{*}\) and \(f^{*}_{B}\), producing \(\widehat{d_{B}}=\widehat{f_{B}}-\widehat{f}\), the empirical counterpart of \(d_{B}\). Our goal is to train an estimator \(g:\mathcal{X}\to\widetilde{\mathcal{Y}}\), where \(\widetilde{Y}\in\widetilde{\mathcal{Y}}\) represents the new target task outlined in Table 1's last column. XAI methods are then used to pinpoint areas causing observed model unfairness in the initial ML model. For image classification, popular techniques like Grad-CAM (Selvaraju et al. (2017)) create attention maps highlighting these biased areas. The pseudo-code for this approach is provided in Algorithm 2 with \(\widetilde{Y}^{\tau}:=\mathds{1}\{|\widehat{f_{B}}(\mathbf{X},S)-\widehat{f}(\mathbf{X},S)|\geq\tau\}\) as the desired XAI task.
**Remark 4.2** (Impact of the parameter \(\tau\) on the bias detection): _In Table 1, various tasks require establishing a threshold \(\tau>0\) to identify essential bias-contributing regions. We suggest determining \(\tau\) at a specific quantile \(\alpha\in(0,1)\) within the sample \(\{|\widehat{d_{B}}(\mathbf{x}_{i},s_{i})|\}_{1\leq i\leq N}\), denoted as \(\widehat{Q}_{|\widehat{d_{B}}|}(\alpha)\). In particular, the choice of \(\alpha\) significantly influences the behavior of the bias detector. Indeed, a larger \(\alpha\) emphasizes causes with very high biases, while a smaller value identifies all possible causes the bias detector can identify. Opting for \(\alpha=0.75\) might be a good choice since it results in a more balanced dataset, with approximately equal occurrences of \(\widetilde{Y}^{\tau}=0\) and \(\widetilde{Y}^{\tau}=1\), while also distinguishing significant areas for unfairness._
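A short sketch of the thresholding described in Remark 4.2, producing the labels \(\widetilde{Y}^{\tau}\) for the bias-size task (names illustrative):

```python
import numpy as np

def bias_task_labels(f_hat, f_B_hat, alpha=0.75):
    """Build the auxiliary 'bias size' target of Table 1:
    Ỹ^τ = 1{ |f̂_B - f̂| >= τ }, with τ the α-quantile of |d̂_B|
    over the calibration sample, as suggested in Remark 4.2."""
    d_B = np.asarray(f_B_hat) - np.asarray(f_hat)
    tau = np.quantile(np.abs(d_B), alpha)
    return (np.abs(d_B) >= tau).astype(int), tau
```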
## 5 Experiments
We opt to showcase our method on image data, rather than tabular data, as it helps to highlight an important practical aspect. Computer vision tasks are often performed on (partially) pre-trained models and compute time presents a major issue. The post-processing approach outlined in Equation (4) is particularly attractive in these circumstances. As an example, rendering the scores fair
| _Task description_ | _Probabilistic framework_ | _Empirical framework_ |
|---|---|---|
| Discrimination | \(\mathds{1}\{f^{*}_{B}(\mathbf{X},S)-f^{*}(\mathbf{X},S)\geq 0\}\) | \(\mathds{1}\{\widehat{f_{B}}(\mathbf{X},S)-\widehat{f}(\mathbf{X},S)\geq 0\}\) |
| Bias size | \(\mathds{1}\{|f^{*}_{B}(\mathbf{X},S)-f^{*}(\mathbf{X},S)|\geq\tau\}\) | \(\mathds{1}\{|\widehat{f_{B}}(\mathbf{X},S)-\widehat{f}(\mathbf{X},S)|\geq\tau\}\) |
| Outliers | \(\mathds{1}\{(f^{*}_{B}(\mathbf{X},S)-f^{*}(\mathbf{X},S))^{2}\geq\tau\}\) | \(\mathds{1}\{(\widehat{f_{B}}(\mathbf{X},S)-\widehat{f}(\mathbf{X},S))^{2}\geq\tau\}\) |

Table 1: Bias / discrimination detection task.
throughout the applications in the experimental section took less than 0.1 seconds on average. Further, the XAI approach outlined in the previous section also works on pre-trained models as the transportation plan only depends on the produced scores. We first present how the standard approach outlined in Section 3 can be extended to standard computer vision architectures and then show how the bias detection task can help identify the regions associated with the bias.
### Extension to Image Classification
An important detail from the above section is that, in order to eliminate a bias from the predictions, the sensitive feature \(S\) must be included in the modelling process to satisfy the assumptions of the optimal transport theory. This is due to the fact that simply excluding the sensitive information might lead the model to proxy for the sensitive variable, which leads back to the initial problem of the biased predictions. For image classification, we consider a standard split into a feature (or embedding) and classification block, where we use a pre-trained embedding model and keep its parameters fixed throughout the whole procedure. The sensitive feature can then be added through a simple layer concatenation of the output from the embedding before it is fed into a classification block. As this will still result in biased scores, a supplementary calibration layer is added, which implements Equation (4) and needs to be trained separately from the main model on a calibration data set. This indicates the need to split the data into three separate parts: the train set to fit the classification block to the specific data, a calibration set (corresponding to \(\mathcal{D}^{\text{pool}}\) in Algorithm 1, which does _not_ need to be labeled), and a standard test set. The architecture is visualized in Figure 2.
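A hedged PyTorch sketch of this architecture follows. The ResNet-50 backbone is an assumption on our part (the text specifies only torchvision's IMAGENET1K_V2 weights and a 2048-dimensional embedding, which is consistent with ResNet-50); all other names are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights  # assumed backbone

class FairnessAwareClassifier(nn.Module):
    """Frozen embedding + sensitive-feature concatenation + classification
    block (layer sizes as in Sec. 5.2). The calibration layer of Eq. (4)
    is applied to the output scores after training, on a separate set."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()          # keep the 2048-dim embedding
        for p in backbone.parameters():      # embedding parameters stay fixed
            p.requires_grad = False
        self.embed = backbone
        layers, dims = [], [2048 + 1, 512, 256, 32]  # +1 for the sensitive S
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU(), nn.Dropout(0.1)]
        self.head = nn.Sequential(*layers, nn.Linear(32, 1))

    def forward(self, x, s):
        # x: image batch, s: float tensor of shape (batch,) with the sensitive feature
        z = self.embed(x)                          # image embedding
        z = torch.cat([z, s.unsqueeze(1)], dim=1)  # concatenate S
        return torch.sigmoid(self.head(z)).squeeze(1)  # soft score in [0, 1]
```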
### Dataset and model
For our illustrations, we use the CelebA Liu et al. (2015) data set, containing more than 200,000 celebrity images, each annotated with 40 attributes. For the visualisations we selected a subset of 5,000 images with well-aligned facial features to obtain averaged predictions. Following the setup from Figure 2, we choose the pre-trained version of torchvision's IMAGENET1K_V2 weights, trained on the ImageNet database, as our embedding model. With it, we use the built-in preprocessing
Figure 2: Standard fairness approach; the sensitive variable \(S\) must be included, at the latest, in the classification block. The calibration layer contains an operation which makes use of the Wasserstein barycenter. The illustration above _Unfair Scores_ represents the marginal score distributions induced by a binary sensitive variable, which are then transported to a common distribution for the _Fair Scores_.
steps on each image but consider the parameters fixed. On top of the embedding, we then add a classification block, similar in spirit to what Wang et al. (2020) proposed. This block contains, before the output layer, three layers of size \([512,256,32]\) which take as input the vector of size 2048 from the embedding block and the sensitive feature of size 1. Each intermediate layer has a ReLU activation function applied to it and uses a 0.1 dropout. We split the data into 64% training, 16% calibration and 20% test data. The model is then trained task-wise for 10 epochs using a binary cross-entropy loss and results are averaged over 10 runs1.
Footnote 1: All code can be found at: github.com/FairInterpret/fair_images
### Metrics and Prediction Tasks
As the approach is valid for soft classifiers, we use the Area under the ROC curve (AUC) as the performance metric on the test set (denoted \(A(f)\)). We also measure the unfairness \(\widehat{\mathcal{U}}(f)\) on the test-set as the empirical counterpart of Equation (2), based on the Kolmogorov-Smirnov test,
\[\widehat{\mathcal{U}}(f):=\max_{s,s^{\prime}\in\mathcal{S}}\sup_{t\in\mathcal{ Z}}\left|\hat{F}_{f|s}(t)-\hat{F}_{f|s^{\prime}}(t)\right|\enspace,\]
where \(\hat{F}_{f|s}\) is the empirical CDF of \(f(\mathbf{X},S)|S=s\).
We consider three different binary prediction tasks from the data set (the variables _Attractive_, _Beard_ and _Young_) and consider _Gender_ as the sensitive variable. As bias identification usually requires substantial domain expertise, we demonstrate how our method works when we isolate the _Beard_ prediction task from influences of _Gender_. As the positive labels for the task are almost exclusively present for male instances, we would expect the bias to be the largest for this task. Further, the task also has a well defined region of the image that should be used by the model to predict the label, making it suitable for visualisations with Grad-CAM.
### Results
The numerical results are summarized in Table 2. The _Uncalibrated_ columns present the results for a standard model that does not have a Calibration Layer, the _Fairness-aware_ columns represent a model of the form of Figure 2. As expected, all uncalibrated models present a significant level of unfairness, indicating the model learned to use gender in its predictions. The fairness-aware architecture manages to eliminate the bias, as indicated in the highlighted column in the results table, though it also results in a lower predictive accuracy as suggested by the theoretical analysis.
We then apply our XAI methodology to the data. We use the _Bias size_ task from Table 1, refit the model to the new task \(\tilde{Y}^{\tau}\) and evaluate the attention maps using the Grad-CAM algorithm of Selvaraju et al. (2017). To see how this can effectively prevent _fairwashing_ and help establish
| _Task_ | **Uncalibrated** \(A(f)\) | **Uncalibrated** \(\mathcal{U}(f)\) | **Fairness-aware** \(A(f)\) | **Fairness-aware** \(\mathcal{U}(f)\) |
|---|---|---|---|---|
| Attractive | 0.855 \(\pm\) 0.002 | 0.447 \(\pm\) 0.028 | 0.769 \(\pm\) 0.002 | 0.011 \(\pm\) 0.001 |
| Beard | 0.941 \(\pm\) 0.002 | 0.896 \(\pm\) 0.009 | 0.731 \(\pm\) 0.004 | 0.010 \(\pm\) 0.002 |
| Young | 0.858 \(\pm\) 0.003 | 0.323 \(\pm\) 0.036 | 0.810 \(\pm\) 0.003 | 0.013 \(\pm\) 0.003 |

Table 2: AUC & Unfairness over 10 repetitions. The \(\mathcal{U}(f)\) values of the fairness-aware model highlight the achieved fairness.
a causal relation, we compare the results to attention maps obtained from the initial model (that is, from the model that modelled the _Beard_ task). Results are visualized in Figure 3. The left three columns are averages of 5,000 well-aligned raw images, split by the _Gender_ variable, and the Grad-CAM attention map of the initial model. Most of the attention is indeed focused around the region where one would expect the relevant characteristics of a beard to lie, indicating a small bias (second and third column). However, this stands in stark contrast with the numerical results in Table 2. With the help of our methodology we can instead isolate the regions that contribute to the observed bias (fourth column). We can clearly see that the neural network extensively focuses on features associated with gender (such as a receding hairline, twice as likely for male individuals; blond hair, around 10 times more likely for female individuals; or earrings, around 20 times more likely for females), but not on the area where a beard would be expected. Such ratios are easily identifiable and can be investigated further; our method can effectively help identify the relevant features.
## 6 Conclusion
We investigated the use of optimal transport in both fairness and explainability applications for machine learning methods. Though both fields have produced significant advances in recent years, the strong ties between the two remain underexplored. We showed through our applications that a standard optimal transport method for tabular data can readily be extended to computer vision tasks by adapting the model architecture slightly. This resulted in an extremely efficient routine that can easily be integrated into standard workflows. Given that the method is based on optimal transport, we were then able to derive an XAI method based on the optimal transportation plan, which helps to identify the sources of a bias, permitting a more targeted investigation into the causes rather than the symptoms. This method also enables researchers to adopt a more holistic approach to the choice of sensitive variables, alleviating concerns of _fairwashing_ Aivodji et al. (2019), and indeed opens up a more objective discussion about emerging issues related to intersectional Kong (2022) or sequential Hu et al. (2023b) fairness.
Figure 3: Grad-CAM activations with gender as a sensitive feature when estimating the "Beard" task.
## Acknowledgments
This research was enabled in part by support provided by Calcul Quebec (calculquebec.ca) and the Digital Research Alliance of Canada (alliancecan.ca).
---

# Fusing photons into nothing, a new search for invisible ALPs and Dark Matter at Belle II

Francesca Acanfora, Roberto Franceschini, Alessio Mastroddi, Diego Redigolo

Published: 2023-07-12 | arXiv: http://arxiv.org/abs/2307.06369v1 (2307.06369)
###### Abstract
We consider an axion-like particle coupled to the Standard Model photons and decaying invisibly at Belle II. We propose a new search in the \(e^{+}e^{-}+\mathrm{invisible}\) channel that we compare against the standard \(\gamma+\mathrm{invisible}\) channel. We find that the \(e^{+}e^{-}+\mathrm{invisible}\) channel has the potential to ameliorate the reach for the whole ALP mass range. This search leverages dedicated kinematic variables which significantly suppress the Standard Model background. We explore the implications of our expected reach for Dark Matter freeze-out through ALP-mediated annihilations.
## I Introduction
Dark Matter (DM) searches at the intensity frontier are like a fishing expedition in the high sea at a depth never explored before. Going to high intensity opens up the possibility to directly test extremely feeble interactions of DM with the Standard Model which would be impossible to probe otherwise. These interactions can be responsible for producing light DM (in the \(\mathrm{MeV}-\mathrm{GeV}\) range) in the early Universe through thermal freeze-out [1].
Collider searches support direct detection experiments and indirect detection observations in the joint effort of testing the allowed parameter space of thermal DM freeze-out. This complementarity is particularly important for specific kinematic configurations suppressing the DM elastic scattering with the target materials or the DM annihilation in the galactic environment [2; 3].
In this paper we revisit the sensitivity of the Belle II experiment to DM communicating with the SM through an axion-like particle coupled only to photons. This simple dark sector scenario was considered in Ref. [4], where the sensitivity of Belle II was derived focusing on the standard mono-photon final state accompanied by missing energy. The same experimental strategy was implemented before at BaBar [5; 9; 10]; it is under implementation by the Belle II collaboration [11] and expected to give results soon [12].
We develop an alternative strategy based on the
\[e^{+}e^{-}\to e^{+}e^{-}+\mathrm{invisible}\]
channel, leveraging a more detailed knowledge of the signal kinematics (given the two visible particles) at the price of the reduced production cross section. In Fig. 1 we show our main result, demonstrating how this search strategy can provide a new powerful and independent probe of this type of new physics.
A key observation behind our strategy is that a system of multiple invisible particles, as is typical for SM backgrounds, is not likely to have _large_ missing energy, and at the same time a _small_ invariant mass and a _small_ longitudinal missing momentum with respect to the missing energy of the system. On the contrary, this kind of kinematics is peculiar to a single invisible body of small mass such as an ALP decaying invisibly.
The paper is organized as follows.
Figure 1: Expected sensitivity of Belle II at 95% C.L. to the ALP coupling to photons \(g_{a\gamma\gamma}\) as defined in Eq. (1). The branching ratio of the ALP into invisible states is taken to be one as motivated by Eq. (3). The **orange** line is the expected reach of the \(\gamma+\mathrm{invisible}\) channel derived in Ref. [4]. The **red** line shows the reach of the \(e^{+}e^{-}+\mathrm{invisible}\) channel discussed here. The **gray shaded** region shows existing constraints from LEP and BaBar \(\gamma+\mathrm{invisible}\) searches [5; 6; 7] and from \(\Delta N_{\mathrm{eff}}\) constraints from CMB [8]. We also show the expected constraint from SN cooling estimated in Ref. [4]. The **dotted blue** lines show the freeze-out prediction for resonant DM annihilation with fine tuning (F.T.) \(10\%,1\%\) and \(0.1\%\) as discussed below Eq. (4).
In Sec. II we specify the theoretical setup, discuss its parameter space and the ancillary constraints from cosmology, astrophysics, direct and indirect detection. In Sec. III we compare the \(\gamma+\text{invisible}\) and \(e^{+}e^{-}+\text{invisible}\) final states at Belle II. We discuss the signal cross sections and the signal kinematics. Additional information about the signal is given in App. A.1. In Sec. IV we characterize the SM backgrounds with further details given in App. A.2. In Sec. V we specify the event selection that leads to the expected sensitivity in Fig. 1. In Sec. VI we discuss our result, the necessary further steps to complete our study and the possible future directions.
## II Setup
For concreteness and to characterize our experimental reach we specify the DM model with an ALP mediator already considered in Ref. [4]. The Lagrangian reads1
Footnote 1: Notice that \(g_{a\chi\chi}\) in Eq. (1) is related to the one defined in Ref. [4] as \(\mathcal{L}\supset\bar{g}_{a\chi\chi}\bar{\chi}\gamma^{\mu}\gamma_{5}\chi\,\partial_{\mu}a\) by a simple redefinition \(g_{a\chi\chi}=2\bar{g}_{a\chi\chi}\) upon integration by parts and DM equation of motion.
\[\mathcal{L}=\frac{1}{2}(\partial_{\mu}a)^{2}-\frac{m_{a}^{2}}{2}a^{2}-\frac{g_{a\gamma\gamma}}{4}aF_{\mu\nu}\tilde{F}^{\mu\nu}+\frac{i}{2}\bar{\chi}\gamma^{\mu}\partial_{\mu}\chi+\frac{M_{\chi}}{2}\bar{\chi}\chi+\frac{g_{a\chi\chi}}{2}M_{\chi}a\bar{\chi}\gamma_{5}\chi\, \tag{1}\]
where \(\chi\) is a Majorana fermion.
If the ALP is the pseudo-Nambu-Goldstone boson (pNGB) of a global \(U(1)\) symmetry, we can estimate the size of its coupling to photons and DM in terms of the decay constant \(f_{a}\) which controls the UV cutoff of the theory \(\Lambda_{\text{UV}}=g_{*}f_{a}\), where \(g_{*}\) is an \(\mathcal{O}(1)\) coupling. In this setup, the ALP coupling to photons originates from the ABJ anomaly of the global \(U(1)\) symmetry with respect to QED and
\[g_{a\gamma\gamma}\equiv\frac{\alpha_{\text{em}}c_{\gamma\gamma}}{2\pi f_{a}}\, \tag{2}\]
where the anomaly coefficient \(c_{\gamma\gamma}=\sum_{E}Q_{E}^{2}\) is controlled by the charge \(Q_{E}\) of chiral fermions of mass \(\Lambda_{\text{UV}}\).
The ALP coupling to DM can be generated through explicit breaking of the ALP shift symmetry. As a consequence, the ALP mass is naturally of the same order of the DM one, motivating a mass hierarchy like the one considered in this paper, where the ALP mass is slightly heavier than the DM one.
In such a theory, the ALP decays invisibly into DM pairs with a branching ratio close to 1 since the decay into a pair of photons is loop-suppressed. The ratio of the partial decay widths can be estimated to be
\[\frac{\Gamma(a\rightarrow\gamma\gamma)}{\Gamma(a\rightarrow\chi\chi)}\sim \left(\frac{\alpha_{\text{em}}}{4\pi}\right)^{2}\frac{1}{r^{2}\sqrt{1-4r^{2}} }\, \tag{3}\]
with \(r\equiv M_{\chi}/m_{a}\lesssim 1/2\) and for \(c_{\gamma\gamma}\sim\mathcal{O}(1)\) and \(g_{a\chi\chi}\sim 1/f_{a}\), where the overall \(1/r^{2}\) factor comes from the scale dependence of the partial width, while \(\sqrt{1-4r^{2}}\) accounts for the possible phase space suppression of the invisible channel. The suppression of the diphoton width in Eq. (3) is generically of order \(\mathcal{O}(10^{-7}-10^{-6})\) and can only be compensated in scenarios where either the DM mass is very light, i.e. \(r\to 0\), or the DM mass \(M_{\chi}\) is very close to \(m_{a}/2\), i.e. \(r\to 1/2\).
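As a quick numeric illustration of this suppression, one can evaluate the estimate of Eq. (3) directly, with all \(\mathcal{O}(1)\) model factors set to one:

```python
import numpy as np

ALPHA_EM = 1 / 137.036  # fine-structure constant

def width_ratio(r):
    """Estimate of Gamma(a->gg)/Gamma(a->chichi) from Eq. (3),
    for c_gg ~ O(1) and g_achichi ~ 1/f_a."""
    return (ALPHA_EM / (4 * np.pi)) ** 2 / (r**2 * np.sqrt(1 - 4 * r**2))

for r in [0.1, 0.3, 0.45]:
    print(f"r = {r}: ratio ~ {width_ratio(r):.1e}")
```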
The simple theory in Eq. (1) can be used as a model of DM freeze out through ALP-mediated annihilations into photons as first studied in Ref. [4]. Setting the relic abundance of \(\chi\) to match the measured DM abundance today [13] generically requires \(f_{a}\) to be around the GeV with a mild dependence on the DM mass. This region is already excluded by existing collider searches.
A simple way to push the freeze-out region at weaker coupling is to make the annihilation resonant by tuning the DM mass to be close to the ALP resonance (i.e. \(r\to 1/2\)). As first discussed in Ref. [4], defining \(x=M_{\chi}/T\) the thermally averaged annihilation cross section in this limit can be approximated as
\[\langle\sigma v\rangle\approx\frac{\pi}{64}\cdot g_{a\gamma\gamma}^{2}\cdot \frac{x}{r^{5}}\frac{K_{1}(x/r)}{K_{2}(x)^{2}}\, \tag{4}\]
which is independent of \(g_{a\chi\chi}\) as long as the total width of the ALP can be approximated with its invisible decay width. The predictions of resonant freeze-out are shown in Fig. 1 for fine-tuned values of \(r\) close to the resonance, where F.T. \(\equiv 1/2-r\) can be taken as a measure of the fine-tuning. Assuming instantaneous freeze-out we impose \(\langle\sigma v\rangle\big|_{x_{*}}=\sigma_{t.o.}\) using the results of Ref. [13] for the freeze-out temperature \(x_{*}\) and for \(\sigma_{t.o.}\). Interestingly, the boost to the annihilation cross section given by the resonance saturates for fine-tunings smaller than \(10^{-3}\), making it possible to define a _minimal_ \(g_{a\gamma\gamma}\) compatible with thermal freeze-out, which is shown in Fig. 1. Remarkably, the reach of Belle II based on the \(e^{+}e^{-}+\text{invisible}\) channel proposed here could be able to probe this coupling for DM masses below 6 GeV.
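Eq. (4) is also straightforward to evaluate numerically, e.g. to scan the fine-tuned region; a minimal sketch in natural units (parameter values illustrative):

```python
import numpy as np
from scipy.special import kn  # modified Bessel functions K_1, K_2

def sigma_v(x, r, g_agg):
    """Thermally averaged resonant annihilation cross section of Eq. (4),
    in GeV^-2 for g_agg in GeV^-1 (natural units)."""
    return (np.pi / 64) * g_agg**2 * (x / r**5) * kn(1, x / r) / kn(2, x) ** 2

# Illustration: near-resonance point r = 1/2 - 10^-3, at x = M_chi/T = 20
print(sigma_v(20.0, 0.5 - 1e-3, 1e-4))
```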
Given the average velocity of DM in the Milky Way, which corresponds to \(x\sim 10^{6}\), the ALP-mediated annihilation cross section today is suppressed by the asymptotics of the Bessel functions in Eq. (4). As a result the present constraints from indirect detection (see e.g. Ref. [14]) cannot test resonant thermal freeze-out. The direct detection reach can be extracted by integrating out the ALP and writing dimension 7 operators coupling the DM
to the photon bilinear. The rate of DM scattering onto nuclear and electron targets is heavily suppressed by the high dimension of the operators mediating the DM scattering so that the reach of direct detection is not constraining the parameter space of resonant ALP-mediated freeze-out [15; 16; 17]. Colliders are then our best hope to test such a peculiar scenario of DM production.
For low ALP masses, strong constraints come from the measurements of the effective number of relativistic species at BBN and CMB. In our setup the strongest bound can be derived from current Planck measurements [18] by computing the DM entropy transfer to the electron-photon bath after neutrino decoupling [19; 20]. We show this constraint in Fig. 1 which robustly rules out DM masses below roughly 10 MeV.
Stronger constraints could be derived by requiring the ALPs produced in the nascent proto-neutron star (PNS) during the supernova (SN) explosion to not substantially modify the canonical neutrino cooling mechanism [21]. This constraint assumes that the DM mean-free-path is longer than the size of the PNS (\(\sim 10\) km) which is always the case in the parameter space of interest. Setting a rigorous bound for heavy ALP masses (beyond the CMB bound) depends very much on the parameters controlling the SN thermodynamics and goes beyond the scope of this work. In Fig. 1 we show the approximate line derived in [4] as an indication of the possible constraining power of this observable.
## III Signal
Starting from the model in Eq. (1) two possible ALP production mechanisms at a lepton collider are
\[e^{+}e^{-}\to\gamma_{\rm vis}a\,, \tag{5}\] \[e^{+}e^{-}\to e^{+}_{\rm vis}e^{-}_{\rm vis}a\, \tag{6}\]
where the subscript "vis" indicates that we require the photon and the electron-positron pair to be within the geometric acceptance of the detector. The first process is the well studied ALP-strahlung leading to the \(\gamma+\text{invisible}\) signal. The second process leads to a \(e^{+}e^{-}+\text{invisible}\) signal where two different topologies contribute to the total cross section: \(i)\) the "ALP-Dalitz" process, given by the ALP-strahlung of Eq. (5) with a photon conversion into \(e^{+}e^{-}\)2, \(ii)\) the "photon-fusion" into ALP, given by a photon line exchanged in the \(t\)-channel that radiates the ALP.
As shown in Fig. 2 left, the angular acceptance controls the hierarchy between the \(\gamma+\text{invisible}\) and the \(e^{+}e^{-}+\text{invisible}\) cross sections. Moreover, within the \(e^{+}e^{-}+\text{invisible}\) final state, the angular acceptance controls the hierarchy of the ALP-Dalitz vs the photon-fusion channels.
The angular acceptance of the Belle II tracking system requires the polar angle of every electron in the center-of-mass frame, denoted by \(\theta^{*}_{e}\), to be more than \(17^{\circ}\) away from the beam axis. However, a successful electron ID should be supplemented by information from the ECAL. This raises the requirement on the minimal acceptance angle and introduces a further requirement on the minimal energy
\[\theta^{*}_{\rm min}=22^{\circ}\,\qquad E^{*}_{\rm min}=0.25\ \text{GeV}\,, \tag{7}\]
in the center of mass frame. The same acceptance applies to photons, that are reconstructed mainly by the ECAL. The signal cross sections in the acceptance of Belle II with center-of-mass energy \(\sqrt{s}=10.58\) GeV are given in Fig. 2 right. They can be approximated for small enough ALP masses as
\[\begin{split}\sigma(e^{+}e^{-}\to\gamma a)\approx 10^{-3}\ \text{pb}\left[ \frac{g_{a\gamma\gamma}}{10^{-4}\ \text{GeV}^{-1}}\right]^{2}\,,\\ \sigma(e^{+}e^{-}\to e^{+}e^{-}a)\approx 7\times 10^{-5}\ \text{ pb}\left[\frac{g_{a\gamma\gamma}}{10^{-4}\ \text{GeV}^{-1}}\right]^{2}.\end{split} \tag{8}\]
As a result, the ALP-strahlung cross section is larger than the photon-fusion one by roughly a factor of 14 at Belle II. As shown in Fig. 2 left, the \(e^{+}e^{-}\to e^{+}e^{-}a\) cross section at Belle II is still dominated by the photon-fusion channel, although the Dalitz contribution is only a factor of 5 smaller.
Both these facts are the result of the strong suppression of the inclusive \(e^{+}e^{-}\to e^{+}e^{-}a\) cross section due to the Belle II acceptance. The photon-fusion rate is dominated by electrons and positrons close to the beam axis, that unavoidably fall out of the Belle II acceptance. Indeed, as it is shown in Fig. 2 left, extending the angular coverage down to \(\theta^{*}_{\rm min}\simeq 1^{\circ}\) the cross section of the \(e^{+}e^{-}\to e^{+}e^{-}a\) process would become of the same size of the ALP-strahlung \(e^{+}e^{-}\to a\gamma\), with the photon-fusion dominating over the Dalitz process by almost two orders of magnitude.
For comparison we show in Fig. 2 right the inclusive cross section of \(e^{+}e^{-}\to e^{+}e^{-}a\) which is more than two orders of magnitude larger than the one where the \(e^{+}e^{-}\) pair is efficiently reconstructed at Belle II. The inclusive cross section can be approximated using the effective photon approximation (EPA) [22; 23; 24; 25; 26], which is shown as a green band in Fig. 2 using the implementation of MadGraph 3.5 [27], with the associated theoretical uncertainty corresponding to a variation of the factorization scale from \(10\,m_{a}\) to \(0.1\,m_{a}\). A more rigorous theory uncertainty could be assigned following Ref. [26]. On the contrary, requiring the \(e^{+}e^{-}\) pair to be reconstructed at Belle II forces the kinematics of \(e^{+}e^{-}\to e^{+}e^{-}a\) to depart substantially from the kinematics encompassed by the EPA approximation. A full analytical understanding of this \(2\to 3\) process in a general kinematic configuration will be presented in a forthcoming publication [28].
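For a rough sense of scale, the cross sections of Eq. (8) translate into event yields once multiplied by an integrated luminosity. The 50 ab\(^{-1}\) used below corresponds to the Belle II target dataset, an input not stated in this excerpt:

```python
def n_events(sigma_ref_pb, g_agg, lumi_ab=50.0):
    """Expected signal events before selection: the cross sections of
    Eq. (8) scale as g_agg^2; lumi_ab in ab^-1 (1 ab^-1 = 10^6 pb^-1)."""
    return sigma_ref_pb * (g_agg / 1e-4) ** 2 * lumi_ab * 1e6

# Light-ALP reference cross sections of Eq. (8), in pb at g = 10^-4 GeV^-1
for channel, sigma in [("gamma + invisible", 1e-3), ("e+e- + invisible", 7e-5)]:
    print(channel, round(n_events(sigma, g_agg=1e-5)))
```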
### Signal kinematics
We now describe the signal kinematics for the two signal production modes of Eqs. (5) and (6). In the mono-\(\gamma\) topology the photon and the ALP momenta are back-to-back \(\vec{p}_{\gamma}=-\vec{p}_{a}=-\vec{p}_{\rm miss}\). Under these circumstances the signal is characterized by a photon of fixed energy \(E_{\gamma}\) and the amount of missing energy \(E_{\rm miss}\) required by energy conservation. Explicitly one finds
\[E_{\gamma}^{*}=\frac{s-m_{a}^{2}}{2\sqrt{s}}\,,\qquad E_{\rm miss}^{*}=\frac{s +m_{a}^{2}}{2\sqrt{s}}\, \tag{9}\]
where \(\sqrt{s}\) is the center of mass energy and the quantities defined in the center of mass frame (CoM) will be denoted with a \(*\) throughout this work.
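As a quick numerical illustration of Eq. (9), the short sketch below evaluates both energies at the Belle II center-of-mass energy:

```python
SQRT_S = 10.58  # Belle II center-of-mass energy in GeV

def mono_photon_energies(m_a, sqrt_s=SQRT_S):
    """CoM photon and missing energies for e+ e- -> gamma a, Eq. (9)."""
    s = sqrt_s ** 2
    return (s - m_a ** 2) / (2 * sqrt_s), (s + m_a ** 2) / (2 * sqrt_s)

# A 1 GeV ALP gives E*_gamma ~ 5.24 GeV and E*_miss ~ 5.34 GeV:
print(mono_photon_energies(1.0))
```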
After requiring a single photon of fixed energy up to the experimental resolution, a further discrimination between signal and background can only be achieved by selecting the central detector region. A standard way to do so is to define the missing pseudo-rapidity in the lab frame as
\[\eta_{\rm miss}\!=\!\frac{1}{2}\!\log\!\left[\frac{|\vec{p}_{\rm miss}|+p_{\rm miss }^{L}}{|\vec{p}_{\rm miss}|-p_{\rm miss}^{L}}\right]\!=\!-\log\left[\tan\frac{ \theta_{\rm miss}}{2}\right]\!, \tag{10}\]
where we defined \(p_{\rm miss}^{L}\) as the component of the missing momentum along the beam pipe and \(\theta_{\rm miss}\) as the angle made by the ALP trajectory with the beam pipe axis. Selecting the central region is equivalent to requiring an upper bound on \(|\eta_{\rm miss}|\).
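For concreteness, a minimal helper implementing Eq. (10) (our own sketch, with the beam axis along \(z\)) and its polar-angle form reads:

```python
import math

def eta_from_momentum(px, py, pz):
    """Missing pseudo-rapidity of Eq. (10); pz is the longitudinal component."""
    p = math.sqrt(px ** 2 + py ** 2 + pz ** 2)
    return 0.5 * math.log((p + pz) / (p - pz))

def eta_from_theta(theta):
    """Equivalent form -log(tan(theta/2)) in terms of the polar angle."""
    return -math.log(math.tan(theta / 2))

theta = math.radians(22.0)  # Belle II minimal acceptance angle of Eq. (7)
print(eta_from_momentum(math.sin(theta), 0.0, math.cos(theta)))  # ~1.64
print(eta_from_theta(theta))                                     # same value
```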
Due to its three-body nature, the signal kinematics for the \(e_{\rm vis}^{+}e_{\rm vis}^{-}a\) channel can be characterized by the missing mass \(m_{\rm miss}\) together with \(E_{\rm miss}\) and \(\eta_{\rm miss}\). These quantities can be used to distinguish the ALP production from background processes.
The missing mass
\[m_{\rm miss}^{2}=E_{\rm miss}^{2}-|\vec{p}_{\rm miss}|^{2}=m_{a}^{2}\,, \tag{11}\]
is equal to the ALP mass up to the experimental resolution. The missing energy and the missing momentum can be written as a function of the visible electron-positron pair in the final state and the CoM energy:
\[E_{\rm miss}^{*}=\sqrt{s}-E_{e^{+}}^{*}-E_{e^{-}}^{*}\,\qquad\vec{p}_{\rm miss }^{*}=-\vec{p}_{e^{+}}^{*}-\vec{p}_{e^{-}}^{*}. \tag{12}\]
Analogously to Eq. (10) a missing pseudo-rapidity can be defined from the initial and final state electrons and positrons.3
Footnote 3: In principle further information can be extracted from the azimuthal angle between the positron and the electron in the final state.
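A minimal sketch of Eqs. (11)-(12), assuming the \(e^{\pm}\) four-momenta have already been boosted to the CoM frame, could look as follows:

```python
import math

def missing_kinematics(sqrt_s, p_ep, p_em):
    """Missing energy, momentum and mass from the visible e+ e- pair.

    p_ep, p_em: CoM four-momenta (E, px, py, pz) of the positron/electron.
    Implements Eqs. (11)-(12); returns (E*_miss, |p*_miss|, m_miss^2).
    """
    e_miss = sqrt_s - p_ep[0] - p_em[0]
    p_miss = [-(a + b) for a, b in zip(p_ep[1:], p_em[1:])]
    p_abs = math.sqrt(sum(c ** 2 for c in p_miss))
    return e_miss, p_abs, e_miss ** 2 - p_abs ** 2
```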
Besides the fixed missing mass, the ALP signal is expected to be central (i.e. small \(\eta_{\rm miss}\)), as confirmed by the distribution in Fig. 3 left. Moreover, requiring the electron-positron pair within the Belle II acceptance favors a kinematics where the ALP is not produced at rest, resulting in a large \(E_{\rm miss}^{*}\), even for a very light ALP. For example, for an effectively massless ALP,
Figure 2: Characterization of the different production channels for an ALP coupled to SM photons as defined in Eq. (1). **Left:** In **blue** we show the ratio of the cross section for the \(e_{\rm vis}^{+}e_{\rm vis}^{-}a\) final state vs the \(\gamma_{\rm vis}a\) final state as a function of the angular acceptance of a hypothetical lepton collider with center-of-mass energy \(\sqrt{s}=10.58\) GeV. In **yellow** we show the ratio between the two processes contributing to \(e_{\rm vis}^{+}e_{\rm vis}^{-}a\): the "ALP-Dalitz" process where a photon is exchanged in the \(s\)-channel and the "photon-fusion" into ALP where the photon is exchanged in the \(t\)-channel. The **red dashed** line shows the Belle II minimal opening angle for photons and electrons/positrons to be detected by the calorimeter. The **red dotted** line shows the Belle II minimal opening angle for electrons/positrons to be detected by the tracker. **Right:** ALP cross sections at Belle II as a function of the ALP mass \(m_{a}\). The mono-photon ALP cross section (**blue**) is roughly 10 times larger than the photon-fusion cross section with final state \(e^{+}\) and \(e^{-}\) in the Belle II acceptance (**yellow**). We also show the _inclusive_ \(e^{+}e^{-}a\) cross section (**green** line), which is orders of magnitude larger than \(e_{\rm vis}^{+}e_{\rm vis}^{-}a\) due to the acceptance of the Belle II detector. The **green** band indicates the EPA approximation of \(e^{+}e^{-}a\) with its associated uncertainty.
more than 90% of the signal has \(E_{\rm miss}^{*}\gtrsim 2\;{\rm GeV}\) (see Fig. 4). Indeed, in Fig. 3 left we can see that the ALP signal at Belle II is characterized by a large missing energy. This is at odds with the usual expectation for the fusion production mechanism, e.g. Higgs boson production at the LHC in VBF [29; 30], because the phase space of the photon-fusion process producing the ALP at rest is cut out by the Belle II geometric acceptance. In App. A.1 we comment further on how this feature depends on a hypothetical change of the angular acceptance of Belle II.
Leveraging all these distinctive features of the signal, we will argue that it is very difficult for any SM background to have a _large_ \(E_{\rm miss}\) together with a _small_ \(|\eta_{\rm miss}|\) and a _small_ \(m_{\rm miss}\). In the next section we will show how it is possible (especially for light ALP masses) to design a search at Belle II where the background rejection is good enough to compensate for the production cross section of Eq. (6), which is suppressed with respect to the \(\gamma+\text{invisible}\) channel.
## IV Backgrounds
SM processes that give the same final state as our signal Eq. (6) are
\[e^{+}e^{-} \to e^{+}e^{-}\,+\,n\gamma_{\rm inv}\,, \tag{13}\] \[e^{+}e^{-} \to\tau^{-}\tau^{+},\tau^{\pm}\to e^{\pm}\nu\nu\,, \tag{14}\]
where \(\gamma_{\rm inv}\) indicates a "missed photon" that cannot be detected at Belle II because either its energy is below the ECAL energy threshold, or it is emitted too close to the beam direction and ends up in one of the two blind cones around the forward and backward directions. In formulas, \(\gamma_{\rm inv}\) for Belle II is defined as a photon that fails the requirement of Eq. (7). This definition does not take into account further detector inefficiencies, which will be discussed in Sec. IV.3. We will refer to Eq. (13) as the QED\({}^{n}\) background, for which the leading order in QED perturbation theory corresponds to \(n=2\). Eq. (14) is referred to in the following as the \(\tau\tau\) background.
Both backgrounds above are _reducible_ thanks to the fact that their kinematics does not at all resemble that of the signal.4 However, both background processes have a huge cross section compared to the signal Eq. (8). In particular, given the Belle II acceptance, the \(\tau\tau\) background cross section is around 75.4 pb and the QED\({}^{2}\) background is 1.66 nb. These numbers set the challenge of the photon-fusion search, which needs to achieve a background rejection at the level of \(10^{-8}\) to be sensitive to \(g_{a\gamma\gamma}\simeq 10^{-5}\;{\rm GeV}^{-1}\), where the target for resonant ALP-mediated freeze-out lies.
Footnote 4: An _irreducible_ SM background for the signal topology considered here would be the production of \(Z\) bosons followed by their invisible decay into neutrinos, \(Z\to\nu\nu\). Given the large mass of the \(Z\) boson with respect to the Belle II CoM energy this background is not going to be a concern for us.
Luckily, we will show in Sec. IV.1 that it is possible to select a region of phase space that contains most of the signal where the QED\({}^{n}\) background is forbidden for kinematical reasons. In Sec. IV.2 we discuss the \(\tau\tau\)-background and we take advantage of its peculiar kinematics to define different regions of phase space distinguished by their level of signal purity.
### QED\({}^{n}\) backgrounds
We explore here the kinematics of the QED\({}^{n}\) background, where the missing energy is faked by an arbitrary number of \(\gamma_{\rm inv}\) not satisfying either the minimal energy threshold or the minimal angular acceptance defined in Eq. (7). Given the energy and angular cuts at Belle II, these backgrounds can be studied in perturbation theory without the need to resum large logarithms due to soft or collinear divergences.
The goal is to quantify the difficulty for a kinematical configuration where the missing energy is given by \(n\) missing particles which are either _soft_ or _forward/backward_ to fake a signal which has _large_ missing energy, _small_ missing mass and a small \(|\eta_{\rm miss}^{*}|\). In particular we will prove that for every \(\eta_{\rm miss}^{*}\) there is a _minimal_ missing mass \(\bar{m}_{\rm miss}^{2}(\eta_{\rm miss}^{*})\) that the QED\({}^{n}\) background can realize.
In order to simplify our discussion we will set \(\eta_{\rm miss}^{*}=0\) from now on. This choice corresponds to the largest minimal missing mass that the QED\({}^{n}\) background can realize and will be indicated as \(\bar{m}_{\rm miss}^{2}\) for brevity. We also take all the photons' azimuthal angles to be equal and set them to zero, for simplicity and without loss of generality [31].
Given the large amount of missing energy in the signal events, it is very difficult for soft \(\gamma_{\rm inv}\) to fake the signal. The only possibility would be to have \(n\) soft photons such that \(nE_{\rm min}^{*}=E_{\rm miss}^{*}\) and \(\eta_{\rm miss}^{*}=0\). However, in practice the required \(n\) is so large, of order \(\mathcal{O}(E_{\rm miss}^{*}/E_{\rm min}^{*})\), that the rate of this process is extremely suppressed. We can then ignore soft photons from now on.
Similarly, in the QED\({}^{n}\) background topology with only one hard \(\gamma_{\rm inv}\), the remaining \(n-1\) soft photons should nearly cancel the \(z\) component of the momentum of the hard photon, so as to pull the resulting invisible body towards central rapidity. This process is also highly suppressed in perturbation theory and can be neglected.
The most dangerous configuration is the one with two hard photons undetected because they fly close to the beam axis with large longitudinal momenta in opposite directions. Such configurations feature _large_ angular separation between the photons, hence \(m_{\rm miss}\) is typically large. In particular, we can define the minimal invariant mass of the QED\({}^{2}\) background
\[\bar{m}_{\rm miss}|_{\rm QED^{2}}=E_{\rm miss}^{*}\cos\theta_{\rm min}^{*}\, \tag{15}\]
by sending one \(\gamma_{\rm inv}\) at an angle \(\theta_{\rm min}^{*}\) with respect to the
beam and the other at \(\pi-\theta^{*}_{\text{min}}\). This result tells us that the minimal invariant mass of the QED\({}^{2}\) background for \(\eta_{\text{miss}}=0\) and \(E_{\text{miss}}\gtrsim 4\text{ GeV}\) is around 3.7 GeV, making it easy to separate ALPs with \(m_{a}<\bar{m}_{\text{miss}}|_{\text{QED}^{2}}\) from the QED\({}^{2}\) background.
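This bound is straightforward to verify numerically; the short check below (our own illustration) compares Eq. (15) with the explicit invariant mass of the two-photon configuration that saturates it:

```python
import math

E_MISS, THETA_MIN = 4.0, math.radians(22.0)

# Eq. (15): minimal QED^2 missing mass at eta_miss = 0.
print(E_MISS * math.cos(THETA_MIN))  # ~3.71 GeV

# Explicit check: two photons of energy E_miss/2 at theta_min and pi - theta_min.
e1 = e2 = E_MISS / 2
opening = (math.pi - THETA_MIN) - THETA_MIN
m2 = 2 * e1 * e2 * (1 - math.cos(opening))
print(math.sqrt(m2))                 # same ~3.71 GeV
```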
We now consider the QED\({}^{3}\) background. In particular we want to demonstrate that adding extra \(\gamma_{\text{inv}}\) cannot substantially change the minimal invariant mass in Eq. (15), ensuring the robustness of our strategy. Fixing two hard photons along the ECAL edges (\(\theta_{1}=\theta_{\text{min}},\theta_{2}=\pi-\theta_{\text{min}}\)) and adding a third photon of angle \(\theta_{3}\) and energy \(E_{3}\), the missing mass can be written as a deformation of Eq. (15)
\[m_{\text{miss}}^{2}|_{\text{QED}^{3}}=\bar{m}_{\text{miss}}^{2}|_{\text{QED}^{ 2}}+2E_{3}^{*}E_{\text{miss}}^{*}\sin\theta^{*}_{\text{min}}\Delta_{3}-E_{3}^{ *2}\Delta_{3}^{2}\;, \tag{16}\]
where we defined \(\Delta_{3}=\sin\theta^{*}_{\text{min}}-\sin\theta^{*}_{3}\). For soft photons \(\Delta_{3}\) can take any sign and at leading order the invariant mass of the QED\({}^{2}\) process gets shifted by a correction which is however suppressed by \(\mathcal{O}(E_{\text{min}}^{*}/E_{\text{miss}}^{*})\lesssim 0.06\). Hard photons can escape the detection only for \(\theta^{*}_{3}<\theta^{*}_{\text{min}}\) which implies that \(\Delta_{3}>0\). In this regime \(m_{\text{miss}}^{2}|_{\text{QED}^{3}}>\bar{m}_{\text{miss}}^{2}|_{\text{QED}^{ 2}}\). For this reason in our background simulation we focus on the QED\({}^{2}\) background, which should capture the kinematical features of the relevant QED backgrounds up to detector inefficiencies.
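A quick numerical check of Eq. (16) (again our own illustration) confirms that a hard photon lost at \(\theta_{3}^{*}<\theta_{\text{min}}^{*}\) only raises the missing mass:

```python
import math

def m_miss_sq_qed3(e_miss, e3, theta3, theta_min):
    """Missing mass squared of Eq. (16) with a third undetected photon."""
    delta3 = math.sin(theta_min) - math.sin(theta3)
    m2_qed2 = (e_miss * math.cos(theta_min)) ** 2
    return (m2_qed2 + 2 * e3 * e_miss * math.sin(theta_min) * delta3
            - (e3 * delta3) ** 2)

th_min = math.radians(22.0)
print(math.sqrt(m_miss_sq_qed3(4.0, 1.0, math.radians(5.0), th_min)))  # > 3.71
```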
Fig. 3 left clearly shows a separation of the QED background from the ALP signal in the plane \((\eta^{*}_{\text{miss}},E_{\text{miss}}^{*})\). Both background and signal distributions are shown after the missing mass cut has been enforced. As can be seen by eye, the QED background in red hits a kinematical boundary both in \(E_{\text{miss}}^{*}\), at \(\simeq 1\text{ GeV}\), and in \(|\eta^{*}_{\text{miss}}|\).
### \(\tau\tau\) background
For the \(\tau\tau\) background we take advantage of the peculiar kinematics of the electron-positron pair, which originates from the two \(\tau\) leptons being: \(i)\) on-shell resonances; \(ii)\) flying back-to-back. Indeed, one can understand the \(\tau\tau\) background as an antler topology [32; 33] and exploit results for this type of events.
We describe each \(\tau\) decay as the decay
\[\tau^{\pm}\to e^{\pm}N_{\pm}\,,\]
where \(N_{\pm}\) is a composite object made of the two neutrinos that appear in the decay of \(\tau^{\pm}\). Being a composite object, the body \(N_{\pm}\) has a squared mass given by \(2p_{\nu_{e}}\cdot p_{\nu_{\tau}}\), hence it changes event by event depending on the actual values of the four-momenta of the two neutrinos. We write the event-dependent mass of the body \(N_{\pm}\) as \(m_{N_{\pm}}\). With this notation in mind, the energies in the rest frame of
the decaying \(\tau^{-}\) are
\[\begin{split}& E_{e^{-}}^{\tau^{-}}=\frac{m_{\tau}^{2}+m_{e}^{2}-m_{N_ {-}}^{2}}{2m_{\tau}}\,,\\ & E_{N_{-}}^{\tau^{-}}=\frac{m_{\tau}^{2}-m_{e}^{2}+m_{N_{-}}^{2} }{2m_{\tau}}\,.\end{split} \tag{17}\]
A similar set of equations can be written in the \(\tau^{+}\) rest frame for the energies of \(e^{+}\) and the composite body \(N_{+}\). We now aim at finding what phase-space will be _not_ accessible to the invisible particles of the \(\tau\tau\) process. The region of inaccessible phase-space will be used as a selection to remove the background from the \(\tau\tau\) process.
We will approximate the energies in Eq. (17) in the limit of negligible \(m_{e}\) and negligible \(m_{N_{\pm}}\). Neglecting \(m_{N}\) with respect to \(\sqrt{s}\) may not be an accurate approximation. Nevertheless it is useful for our purposes, because it enlarges the phase space of the invisible bodies to its maximal size. Thus, in this approximation the region of inaccessible phase space that we find is smaller than the one resulting from an exact computation. This makes our approximation conservative.
Within this approximation, the kinematics of the \(e^{\pm}\) and the \(N_{\pm}\) can be written in the Belle II CoM frame as a function of just three quantities: \(i)\) the angles \(\theta_{\pm}\) of the \(e^{\pm}\) with respect to the direction of flight of the respective parent \(\tau\) lepton; \(ii)\) the angle \(\phi\) between the planes of the decay products of the \(\tau^{\pm}\). Denoting with \(\Lambda_{\pm}\) the Lorentz transformation that connects the \(\tau^{\pm}\) CoM to the Belle II CoM we can write:
\[\begin{split}& p_{e^{-}}=\frac{m_{\tau}}{2}\Lambda_{-}(1,s_{-},0,c _{-})\,\\ & p_{N_{-}}=\frac{m_{\tau}}{2}\Lambda_{-}(1,-s_{-},0,-c_{-})\,\\ & p_{e^{+}}=\frac{m_{\tau}}{2}\Lambda_{+}(1,s_{+}c_{\phi},s_{+}s_ {\phi},c_{+})\,\\ & p_{N_{+}}=\frac{m_{\tau}}{2}\Lambda_{+}(1,-s_{+}c_{\phi},-s_{+} s_{\phi},-c_{+})\,,\end{split} \tag{18}\]
where \(p_{\rm miss}=p_{N_{-}}+p_{N_{+}}\) and we defined \(s_{x}=\sin\theta_{x},c_{x}=\cos\theta_{x}\) with \(x\in\{+,-,\phi\}\) encompassing the polar angles of the \(e^{\pm}\) and the angle between the decay planes introduced above.5
Footnote 5: We recall that the \(\tau^{+}\) and the \(\tau^{-}\) velocities are anti-aligned in the Belle II CoM frame, therefore, once the \(z\) axis is rotated along the direction of flight of the \(\tau\) leptons the CoM of the \(\tau^{-}\) is transformed in the CoM of Belle II and in that of the \(\tau^{+}\) by boosts along the \(z\) direction of rapidity \(y\) and \(2y\), respectively, where \(\cosh y=\sqrt{s}/2m_{\tau}\).
From the kinematics in Eq. (18) one can write down the invariant mass of the \(e^{+}e^{-}\) pair originating from the \(\tau\tau\) system as a function of \(\theta_{+}\), \(\theta_{-}\) and \(\phi\). Defining the energies of electrons and positrons in the Belle II CoM as \(E_{e^{\pm}}^{*}=\frac{\sqrt{s}\mp c_{\pm}\sqrt{s-4m_{\tau}^{2}}}{4}\) we can write
\[(m_{ee}^{\tau})^{2}\!=\!\frac{2}{s-4m_{\tau}^{2}}[m_{\tau}^{4}-\sqrt{s}m_{ \tau}^{2}(E_{+}^{*}+E_{-}^{*})\!+\!2E_{-}^{*}E_{+}^{*}(s-2m_{\tau}^{2})\!+\!m_{ \tau}^{2}c_{\phi}\mathcal{M}_{-}\mathcal{M}_{+}]\, \tag{19}\]
where we defined \(\mathcal{M}_{\pm}=\sqrt{m_{\tau}^{2}-2E_{\pm}^{*}\sqrt{s}+4E_{\pm}^{*2}}\). Since for the QED background and the signal it can happen that \(m_{\tau}^{2}-2E_{\pm}^{*}\sqrt{s}+4\left(E_{\pm}^{*}\right)^{2}<0\), we use \(|m_{ee}^{\tau}|\) in our selection described in Sec. V.
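A sketch of Eq. (19) is given below; the possibly negative arguments of \(\mathcal{M}_{\pm}\) are handled with complex arithmetic before taking the modulus, and the inputs (the CoM energies of the \(e^{\pm}\) and \(\cos\phi\)) are assumed to be reconstructed upstream:

```python
import cmath

M_TAU = 1.77686  # tau mass in GeV

def m_ee_tau_abs(e_plus, e_minus, cos_phi, sqrt_s=10.58):
    """|m_ee^tau| of Eq. (19) from the CoM energies of e+ and e-.

    cmath.sqrt handles M_pm^2 < 0, which can occur for QED and signal
    events; the modulus is taken at the end, as done in the selection.
    """
    s, m2 = sqrt_s ** 2, M_TAU ** 2
    m_p = cmath.sqrt(m2 - 2 * e_plus * sqrt_s + 4 * e_plus ** 2)
    m_m = cmath.sqrt(m2 - 2 * e_minus * sqrt_s + 4 * e_minus ** 2)
    m_ee_tau_sq = (2 / (s - 4 * m2)) * (
        m2 ** 2
        - sqrt_s * m2 * (e_plus + e_minus)
        + 2 * e_minus * e_plus * (s - 2 * m2)
        + m2 * cos_phi * m_p * m_m
    )
    return abs(cmath.sqrt(m_ee_tau_sq))
```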
As apparent from Fig. 3 right, the \(\tau\tau\) background lives along a line in the \((m_{ee},m_{ee}^{\tau})\) plane. Up to possible mismeasurements of the electron and positron momenta, the \(\tau\tau\) background can be removed by filtering out events for which \(m_{ee}=m_{ee}^{\tau}\). In Fig. 3 we used a binning corresponding to \(\delta m_{ee}/m_{ee}=2\%\). As the signal populates the upper part of the plane, above the line \(m_{ee}=m_{ee}^{\tau}\), it is possible to obtain a high rejection of \(\tau\tau\) while keeping a substantial amount of signal.
As discussed in Sec. III our signal has both a \(t\)-channel dominated (photon-fusion) and an \(s\)-channel dominated (ALP-Dalitz) contribution. In Fig. 3 right we can identify a large-\(m_{ee}\) region dominated by the \(t\)-channel photon-fusion and a small-\(m_{ee}\) region dominated by the Dalitz process.
The Dalitz dominated region at small \(m_{ee}\) tends to be very well distinguishable from the \(\tau\tau\) background, thus it may give rise to a background-free search, very safe from potential systematic uncertainties in the background estimation. Conversely, the fusion dominated region at large \(m_{ee}\) lives close to the \(\tau\tau\) background and can only be efficiently separated thanks to the precision in the invariant mass measurements of Belle II.
### Further backgrounds
We conclude this section by discussing possible further backgrounds which are not included in the projected sensitivity in Fig. 1, since their rate is difficult to estimate without a full comprehension of the detector performance.
One background can arise from partial reconstruction of microscopic fully visible 2-body processes, e.g.
\[e^{+}e^{-}\to q\bar{q}\rightarrow{\rm hadrons}\to e^{+}e^{-}+{\rm inv.}\,,\]
resulting in two isolated electrons and missing momentum. These kinds of backgrounds arise because of the chance that a quark appears in the detector as an isolated charged lepton, e.g. because the leading hadron that appears from the color charge of the quark carries the vast majority of the quark momentum and decays semi-leptonically, that is \(q\to\pi+\text{soft-particles}\to e\nu\). These backgrounds can be assimilated to the \(\tau\tau\) background in that the unobserved particles of each of \(q\) and \(\bar{q}\) can be treated as a kind of invisible body with a suitable event-dependent mass analogous to \(m_{N_{\pm}}\). Accounting for these backgrounds would roughly amount to a variation of the total rate of the \(\tau\tau\) process after our selection.
In relation to the QED background, the kinematical argument in Sec. IV.1 is quite robust with respect to modifications of the minimal angle \(\theta^{*}_{\text{min}}\) or the minimal detectable energy \(E^{*}_{\text{min}}\) at Belle II. Vice-versa, losses of central photons around small non-instrumented regions of the detector are a potential source of very large backgrounds for our signal. For instance a lost \(\gamma\) can give rise to a final state \(e^{+}_{\text{vis}}e^{-}_{\text{vis}}\gamma_{\text{lost}}\), which is QED\({}^{1}\) in our power counting of the background processes.
In such a case the measured \(E_{\text{miss}}\) could be due to this lost central photon recoiling against the detected \(e^{+}e^{-}\) pair. Such a configuration naturally leads to a small \(m_{\text{miss}}\), due to the vanishing photon mass, and a small \(\eta^{*}_{\text{miss}}\). In principle such a background can be removed, if \(\eta^{*}_{\text{miss}}\) is well measured, by vetoing events for which the missing momentum falls in dead or uncovered areas of the detector. A dedicated study of these backgrounds should be performed by the experimental collaboration. We further discuss this important caveat in Sec. VI.
## V Event selection and sensitivity
We are now ready to summarize the kinematic selection we used to distinguish the signal kinematic described in Sec. III.1 from the SM backgrounds described in Sec. IV.1 and Sec. IV.2.
A first discrimination between signal and backgrounds can be obtained from a selection on the _missing mass_ defined in Eq. (11) which we want to fix around the ALP mass \(m_{a}\) under examination up to the experimental resolution. The missing mass is obtained from a cancellation of two positive terms, \(E_{\text{miss}}\) and \(|p_{\text{miss}}|\), thus it is expected that when \(m_{\text{miss}}\) tends to zero its experimental uncertainty gets large. After detector effects are included as detailed in Appendix A, a good fit for the resolution on \(m_{\text{miss}}\) is
\[\delta m_{\text{miss}}^{2}\simeq\left[1-\left(\frac{m_{\text{miss}}}{10\text{ GeV}}\right)^{4}\right]\text{ GeV}^{2}\,. \tag{20}\]
In our selection we require
\[|m_{\text{miss}}^{2}-m_{a}^{2}|\leq\kappa\cdot\delta m_{\text{miss}}^{2}\, \tag{21}\]
where the parameter \(\kappa\) controls the width of the missing mass window which has been optimized to maximize the sensitivity.
We further characterize the signal kinematics by demanding a large _missing energy_ \(E^{*}_{\text{miss}}\) and a small _missing rapidity_ \(\eta^{*}_{\text{miss}}\). In practice we require
\[E^{*}_{\text{miss}}\in\left[E^{\text{low}}_{\text{miss}},\frac{s+m_{a}^{2}}{2 \sqrt{s}}\right]\,\ \ \ \ \ |\eta^{*}_{\text{miss}}|\leq\eta^{\text{high}}_{\text{miss}}\,, \tag{22}\]
where both \(E^{\text{low}}_{\text{miss}}\) and \(\eta^{\text{high}}_{\text{miss}}\) are chosen for each ALP mass in order to maximize the sensitivity.
The cuts in Eq. (21) and Eq. (22) are chosen to optimize \(S/\sqrt{B}\) in the cut-and-count scheme, where \(S\) (\(B\)) indicates the number of signal (background) events. As an extra requirement we demand these cuts to keep at least 90% of the signal. An example of the selection that we identify as optimal is given in Table 1 for two choices of ALP mass.
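Putting Eqs. (20)-(22) together, the per-event selection can be sketched as follows; this is a minimal illustration using the \(m_{a}=0.025\) GeV working point of Table 1, not the actual analysis code:

```python
def passes_selection(m_miss_sq, e_miss, eta_miss, m_a,
                     kappa, e_miss_low, eta_high, sqrt_s=10.58):
    """Cut-and-count selection of Eqs. (21)-(22)."""
    delta_m2 = 1.0 - (m_miss_sq / 100.0) ** 2     # Eq. (20), in GeV^2
    e_miss_high = (sqrt_s ** 2 + m_a ** 2) / (2 * sqrt_s)
    return (abs(m_miss_sq - m_a ** 2) <= kappa * delta_m2  # Eq. (21)
            and e_miss_low <= e_miss <= e_miss_high        # Eq. (22)
            and abs(eta_miss) <= eta_high)                 # Eq. (22)

# m_a = 0.025 GeV working point of Table 1:
print(passes_selection(m_miss_sq=0.2, e_miss=5.0, eta_miss=0.3,
                       m_a=0.025, kappa=2.8, e_miss_low=1.8, eta_high=1.4))
```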
As shown in Fig. 3 left the combination of Eq. (21) and Eq. (22) is enough to suppress most of the QED background. In addition, given the three-body nature of the signal, we can find a fourth selection variable to further improve the sensitivity. We find that the invariant mass of the visible \(e^{+}e^{-}\) final state
\[m_{ee}^{2}=2\,p_{e^{-}}\cdot p_{e^{+}}+2m_{e}^{2}=s+m_{a}^{2}-2\sqrt{s}E^{*}_{a} \tag{23}\]
is very effective at removing the background from \(\tau\tau\) or any of its "look-alike" backgrounds described in Sec. IV.3. The last equality in Eq. (23) holds for the ALP signal only, whereas for the \(\tau\tau\) background the "antler" topology implies that \(m_{ee}\) should correspond to the value given in Eq. (19).
This variable also has the merit of being a good discriminator between the Dalitz contribution to our signal, concentrated at low \(m_{ee}\), and the fusion contribution, concentrated at large \(m_{ee}\). We then want to construct a test statistic able to incorporate both the _low_ \(m_{ee}\) region, which is in large part background-free and well separated from the background, and the _high_ \(m_{ee}\) region, where most of the signal cross section lies. In the latter case the separation between signal and background relies crucially on the resolution on \(m_{ee}\), which distinguishes the \(\tau\tau\)-like background, aligned in the region where \(m_{ee}\sim m_{ee}^{\tau}\), from the photon-fusion signal.
In practice, we construct a log-likelihood using the expected signal and background counts for 50 ab\({}^{-1}\) at Belle II
\[\Lambda=-2\sum_{i,j}\ln\frac{L(S_{i,j},B_{i,j})}{L(0,B_{i,j})}\,, \tag{24}\]
\begin{table}
\begin{tabular}{c|c|c|c} \(m_{a}\) [GeV] & \(\eta^{\text{high}}_{\text{miss}}\) & \(E^{\text{low}}_{\text{miss}}\) [GeV] & \(\kappa\) \\ \hline
0.025 & 1.4 & 1.8 & 2.8 \\
7 & 2.5 & 7.1 & 2 \\ \end{tabular}
\end{table}
Table 1: Event selection parameters for two example ALP masses. \(\eta^{\text{high}}_{\text{miss}}\) is the missing rapidity cut as in Eq. (22); \(E^{\text{low}}_{\text{miss}}\) is the missing energy lower bound as in Eq. (22); \(\kappa\) controls the width of the missing mass cut as in Eq. (21).
where \(i\) and \(j\) run over the bins of the plane \((m_{ee},|m_{ee}^{\tau}|)\) as drawn in Fig. 3. For the computation of the likelihood we define bins of width \(\delta m/m=2\%\), motivated by the expected Belle II resolution (see App. A.2 for details). In each bin we compute the Poisson factor
\[L(S,B)=\frac{\left(S+B\right)^{B}}{B!}e^{-\left(S+B\right)}\,. \tag{25}\]
The sensitivity shown in Fig. 1 corresponds to 95% C.L. and it is obtained by requiring \(\Lambda<4\).
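Since Eq. (25) uses Poisson factors, the per-bin contribution to Eq. (24) reduces analytically to \(2\left[S-B\ln(1+S/B)\right]\); a minimal sketch of the test statistic is:

```python
import math

def test_statistic(signal, background):
    """Lambda of Eq. (24) with Poisson factors as in Eq. (25).

    signal, background: per-bin expected counts over the flattened
    (m_ee, |m_ee^tau|) plane.
    """
    lam = 0.0
    for s, b in zip(signal, background):
        lam += 2.0 * (s - b * math.log1p(s / b)) if b > 0 else 2.0 * s
    return lam

# The 95% C.L. sensitivity in the text corresponds to Lambda = 4:
print(test_statistic([3.0, 1.0], [0.5, 0.0]) > 4.0)  # True: excluded point
```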
Some remarks on the robustness of our result are in order. The photon-fusion contribution to \(e^{+}e^{-}+\text{inv.}\) is potentially in danger of suffering from \(\tau\tau\) background spill-over into the signal-rich bins. This spill-over in real data may be due to resolution effects. Our choice of a 2% resolution in the definition of the bins for the computation of the likelihood Eq. (24) should protect us from this type of problem. Furthermore, in Fig. 5 in the Appendix we show that our separation of the \(\tau\tau\) background from the signal does not rely on an overly fine measurement of \(m_{ee}\) and \(m_{ee}^{\tau}\). To ensure the robustness of our sensitivity, we estimate the uncertainty due to the finite MC sample by performing several independent generations of our background MC. The variation of the sensitivity over these replicas is negligible on the log scale of Fig. 1, and therefore not shown in the figure.
## VI Discussion
In this work we derived the expected sensitivity to an ALP decaying invisibly at Belle II with 50 \(\text{ab}^{-1}\) of data in the channel \(e^{+}e^{-}+\text{invisible}\). As shown in Fig. 1, our study demonstrates that there is potential to improve significantly, over the whole mass range, on the results based on the \(\gamma+\text{invisible}\) final state [4; 5; 9; 10; 11]. The expected improvement in reach can cover most of the allowed parameter space for DM freeze-out through ALP-mediated annihilations in the resonant regime (see Ref. [4] and the discussion in Sec. II for details).
For light ALP masses we argued that the signal kinematics is easily distinguishable from the SM background due to the interplay between low _missing mass_ and large central _missing energy_. Remarkably, the excellent resolution of the Belle II invariant mass measurements makes the \(e^{+}e^{-}+\text{invisible}\) search competitive also for heavy ALP masses. For very heavy ALP masses (i.e. \(m_{a}\gtrsim 8\) GeV) the \(e^{+}e^{-}+\text{invisible}\) search can fill the gap where the \(\gamma+\text{invisible}\) search encounters trigger issues related to the very large rate of single photon events at low photon energies.
In order to get a fair comparison of our proposal with the projected reach of the \(\gamma+\text{invisible}\) channel, an important issue is the possibility of undetected hard central photons. As discussed in Sec. IV.3, these could be due to small non-instrumented regions of the detector or to other unspecified detector inefficiencies. This background is likely to be an important issue for our kinematic selection, but we cannot reliably include it in our simulation. On the contrary, Ref. [4] claims to account for this type of background using non-public official Belle II background simulations prepared for the Belle II physics book [11]. Ref. [12] confirms that events of this type constitute at present the main challenge of the \(\gamma+\text{invisible}\) channel. A similar background will affect our sensitivity in a way that should be estimated by the experimental collaboration. We hope that this work will trigger such a study.
Finally we mention two interesting future directions. First, the study of the fusion production mechanism for an invisible ALP should be extended to off-shell production (see for example Ref. [34] for a similar study for the \(\gamma+\text{inv.}\) final state). In this case many of the kinematical cuts discussed here should be revised. Second, the importance of the production channel considered here should also carry over to many other experimental setups, including other high-intensity \(e^{+}e^{-}\) colliders as well as future colliders. Significant differences can arise in the latter case, as new backgrounds due to electroweak bosons appear (see Ref. [35] for a first related study in this direction). We defer this investigation to future work.
###### Acknowledgements.
We thank Enrico Graziani and Torben Ferber for discussions about Belle II physics. We thank Ennio Salvioni and Alberto Mariotti for feedback on the draft.
## Appendix A Extra information on the event selection
In this appendix we collect further information supporting the logic of our signal selection. We start by detailing how the invariant mass resolutions used in the analysis were derived. In Sec. A.1 we characterize the signal kinematics; in Sec. A.2 we discuss the separation between the QED and \(\tau\tau\) backgrounds and the signal.
We model the resolution on the measurements of \(e^{\pm}\) and \(\gamma\) with Gaussians centered at the particle-level value and with standard deviations given in Ref. [11]:
\[\frac{\delta E}{E} = \sqrt{\left[\frac{0.066\%}{E/\text{ GeV}}\right]^{2}+\left[\frac{0.81\%}{(E/\text{ GeV})^{1/4}}\right]^{2}+[1.34\%]^{2}}\,\] \[\delta\theta = 10^{-3}. \tag{35}\]
Assuming the initial state electrons and positrons to be perfectly well measured, we have computed the expected resolution on \(m_{\text{miss}}\) in Eq. (20) by repeated computations of \(m_{\text{miss}}\) in Eq. (11) upon random variations of the energies and angles for the relevant kinematic configurations.
These results yielded the fit given in Eq. (20). Similar results have been obtained by producing unweighted events with MadGraph and applying the selection described in Sec. V on events that have undergone the Gaussian smearing described above. A similar procedure has been applied to \(m_{ee}\) and \(m_{ee}^{\tau}\), as shown in Fig. 5 right.
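A sketch of the smearing procedure (our own rendering of the resolution model above) reads as follows; repeating the computation of \(m_{\text{miss}}\) in Eq. (11) on kinematics smeared in this way reproduces the fit of Eq. (20):

```python
import math
import random

def smear_energy(e):
    """Gaussian energy smearing with the Belle II resolution quoted above (E in GeV)."""
    rel = math.sqrt((0.066e-2 / e) ** 2
                    + (0.81e-2 / e ** 0.25) ** 2
                    + (1.34e-2) ** 2)
    return random.gauss(e, rel * e)

def smear_angle(theta):
    """Gaussian angular smearing with delta_theta = 10^-3 rad."""
    return random.gauss(theta, 1e-3)
```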
### Signal
Further features characterizing our signal can be understood from Fig. 4, which shows how the average missing energy in the signal events depends on the minimal angular acceptance of Belle II, \(\theta_{\rm min}^{*}\). We computed the average missing energy with no restrictions on the missing rapidity (thick solid) or requiring \(|\eta_{\rm miss}^{*}|<1.4\) (thick dashed). Comparing the two lines we find that this requirement does not significantly alter the amount of \(E_{\rm miss}\) in signal events. We also show as thin green lines the 10% lowest quantile of the sample. This gives the value of \(E_{\rm miss}^{*}\) such that only 10% of the signal sample has a smaller missing energy, and can be used to understand the event selection of Table 1.
On a more speculative note, we notice that the angular acceptance of Belle II was fixed in our work so as to enforce the standard electron and photon reconstruction of Belle II, which makes use of both ECAL and tracking information [11]. An interesting possibility would be to leverage the larger angular acceptance of the tracker, going down to \(\theta_{\rm min}=17^{\circ}\), with a tracker-only identification of the positron and electron pair. Such a possibility does not exist for \(\gamma+X\) signals; it is therefore an exploration specific to our signal and may lead to specific advantages for our final state. As shown in Fig. 2 left, using only tracking for electron ID would increase the signal cross section, thus potentially improving the sensitivity. As discussed in Sec. IV.3 there are a number of possible backgrounds that can enter our analysis if one considers less effective detection hardware, so the advantage from this loosening of the electron ID must be carefully evaluated.
In general, changing the detector angular coverage may have large effects on \(E_{\rm miss}\). However, at a quantitative level Fig. 4 shows that the average \(E_{\rm miss}\) remains sizable even for rather small \(\theta_{\rm min}^{*}\), down to a few degrees. Thus the preference for large \(E_{\rm miss}\) of our signal, which we leveraged in our selection, remains an important and appreciable feature even for a detector with significantly larger acceptance than the current Belle II. From this consideration we conclude that instrumenting the forward region of Belle II down to \(\theta_{\rm min}=1-2^{\circ}\) would increase the photon-fusion cross section while maintaining the distinctive large missing energy in the signal events.
### SM backgrounds
In Fig. 5 we corroborate the analysis of the background kinematics in Sec. IV that led to our selection in Sec. V.
In the left panel we show the importance of an upper cut on \(\eta_{\rm miss}\) to select a signal kinematics which is not populated by the QED\({}^{2}\) background as defined in Sec. IV.1. Although we focused our discussion on light ALPs, Fig. 5 left shows that the separation between the signal and the QED\({}^{2}\) background is excellent for _all_ ALP masses. This fact explains the flatness of our expected sensitivity in Fig. 1, where the degradation at higher ALP masses can be ascribed to the reduced separation of the signal from the \(\tau\tau\) background, which we now explain.
In the right panel we show how much our separation between the \(\tau\tau\) background and the signal depends on the ALP mass. The signal normalized distribution is well spread in the \((m_{ee},m_{ee}^{\tau})\) plane, with the Dalitz contribution concentrated at small \(m_{ee}\) and the fusion contribution at large \(m_{ee}\). Increasing the ALP mass reduces the maximal \(m_{ee}\), making the signal distribution closer to the background. This effect explains the degradation of our expected sensitivity at high ALP masses.
Contrary to the QED\({}^{2}\) background, the separation between the \(\tau\tau\) background and the signal benefits from a good resolution in invariant mass. This is again clear from Fig. 5 left where the separation between the QED\({}^{2}\) and the signal can be seen "by eye" while the one between the \(\tau\tau\) and the signal (right plot) very much depends on how precisely we can resolve the diagonal \(m_{ee}\simeq m_{ee}^{\tau}\).
Figure 4: Missing energy for signal events of \(e^{+}e^{-}\to e^{+}e^{-}a\) with \(m_{a}=0.1\,\mathrm{GeV}\) as a function of the minimal angular acceptance of Belle II. **Thick blue** lines are sample averages. **Thin green** lines are the 10% lowest quantile of the sample. In the **solid** lines no restriction on \(\eta_{\rm miss}\) are imposed. In the **dashed** lines \(|\eta_{\rm miss}^{*}|<1.4\). The vertical **red dashed** (**red dotted**) line corresponds to the angular acceptance of Belle II ECAL (tracking system) taken from Ref. [11]. |
2304.07527 | Align-DETR: Improving DETR with Simple IoU-aware BCE loss | DETR has set up a simple end-to-end pipeline for object detection by formulating this task as a set prediction problem, showing promising potential. However, despite the significant progress in improving DETR, this paper identifies a problem of misalignment in the output distribution, which prevents the best-regressed samples from being assigned with high confidence, hindering the model's accuracy. We propose a metric, recall of best-regressed samples, to quantitatively evaluate the misalignment problem. Observing its importance, we propose a novel Align-DETR that incorporates a localization precision-aware classification loss in optimization. The proposed loss, IA-BCE, guides the training of DETR to build a strong correlation between classification score and localization precision. We also adopt the mixed-matching strategy, to facilitate DETR-based detectors with faster training convergence while keeping an end-to-end scheme. Moreover, to overcome the dramatic decrease in sample quality induced by the sparsity of queries, we introduce a prime sample weighting mechanism to suppress the interference of unimportant samples. Extensive experiments are conducted with very competitive results reported. In particular, it delivers a 46 (+3.8)% AP on the DAB-DETR baseline with the ResNet-50 backbone and reaches a new SOTA performance of 50.2% AP in the 1x setting on the COCO validation set when employing the strong baseline DINO. Our code is available at https://github.com/FelixCaae/AlignDETR. | Zhi Cai, Songtao Liu, Guodong Wang, Zheng Ge, Xiangyu Zhang, Di Huang | 2023-04-15T10:24:51Z | http://arxiv.org/abs/2304.07527v1 | # Align-DETR: Improving DETR with Simple IoU-aware BCE loss
###### Abstract
DETR has set up a simple end-to-end pipeline for object detection by formulating this task as a set prediction problem, showing promising potential. However, despite the significant progress in improving DETR, this paper identifies a problem of misalignment in the output distribution, which prevents the best-regressed samples from being assigned with high confidence, hindering the model's accuracy. We propose a metric, recall of best-regressed samples, to quantitatively evaluate the misalignment problem. Observing its importance, we propose a novel Align-DETR that incorporates a localization precision-aware classification loss in optimization. The proposed loss, IA-BCE, guides the training of DETR to build a strong correlation between classification score and localization precision. We also adopt the mixed-matching strategy, to facilitate DETR-based detectors with faster training convergence while keeping an end-to-end scheme. Moreover, to overcome the dramatic decrease in sample quality induced by the sparsity of queries, we introduce a prime sample weighting mechanism to suppress the interference of unimportant samples. Extensive experiments are conducted with very competitive results reported. In particular, it delivers a \(46\)\((+3.8)\%\) AP on the DAB-DETR baseline with the ResNet-50 backbone and reaches a new SOTA performance of \(50.2\%\) AP in the 1x setting on the COCO validation set when employing the strong baseline DINO. Our code is available at [https://github.com/FelixCaae/AlignDETR](https://github.com/FelixCaae/AlignDETR).
## 1 Introduction
Recently, transformer-based methods have gained significant attention in the community of object detection, due to the development of the DETR paradigm proposed by [2]. Different from the previous CNN-based detectors, DETR formulates this task as a set prediction problem and adopts learnable queries to represent each object in one-to-one correspondence. Such unique correspondence derives from bipartite graph matching by means of label assignment during training. It bypasses the hand-crafted components such as non-maximum suppression (NMS) and anchor generation. With this simple and extensible pipeline, DETR shows great potential in a wide variety of areas, including 2D segmentation [4, 6, 17], 3D detection [34, 26, 24], _etc._, in addition to 2D detection [31, 42].
During the past few years, the successors have advanced DETR in many ways. For instance, some methods attempt to incorporate local operators, such as ROI pooling [31] or deformable attention [42, 9], to increase the convergence
Figure 1: (a) Recalls of the BR samples calculated on DINO with two losses, focal loss [21] and IA-BCE loss (ours). \(N\) denotes the number of ground truths and we calculate the recalls under different numbers of HC samples. (b) Frequency histogram of the IoU distributions of two types of samples. We use COCO _val._ as the test set for the two experiments.
speed and reduce the computational cost; some methods indicate that those learnable queries can be improved through extra physical embeddings [25, 35, 23]; and some methods [16, 13, 3, 37] notice the defect of one-to-one matching and introduce more positive samples by adding training-only queries. Box refinement [42, 31, 37] is another helpful technique, which explicitly takes previous predictions as priors at the next stages.
Despite the recent progress in DETR-based detectors [2, 20, 42, 9, 37, 23], an underlying misalignment problem has been overlooked. This problem refers to the inconsistency of output predictions between the classification confidence and localization precision, _e.g._ a highly confident prediction with a relatively low intersection-over-union (IoU) score or vice versa [14, 19]. Ideally, the predictions with the highest classification scores (HC samples) also have the best-regressed bounding boxes (BR samples); however, it does not hold when the misalignment problem occurs, creating the risk of missing BR predictions and thus deteriorating the detection performance [14, 19].
Fig. 1a shows an empirical study on a strong baseline, DINO [37], where we calculate the recall of BR samples among the top-\(k\) confident outputs of the well-trained model in an image; a higher recall indicates that more BR samples are selected in the final prediction. As we can see, only 45% and 48% of BR samples are covered by HC samples with top-\(N\) and top-\(2N\) scores, respectively, suggesting that more than half of the well-localized predictions have low confidence scores. In Fig. 1b, the frequency histogram of the HC and BR samples is plotted from 5,000 samples on COCO _val._ and a clear discrepancy between the two distributions is observed, revealing that even the highly-optimized model DINO [37] suffers from the misalignment problem.
In fact, this problem also appears in CNN-based detectors [14, 19, 38]. To address it, IoU-Net [14] proposes an individual IoU prediction branch and an IoU-guided NMS to align the classification confidence and regression precision. A number of alternatives [19, 38] introduce an IoU-aware loss or weight to integrate the IoU branch into the original classification branch and adopt a joint training strategy. These methods are specially designed for NMS-based detectors, but DETR implicitly selects samples by modeling query relations under the guidance of one-to-one matching without an explicit query selection step like NMS, making them less applicable to DETR [2, 42, 23, 37]. To the best of our knowledge, the misalignment problem still remains unexplored in the DETR series.
This paper investigates the solution to the misalignment problem in DETR. To this end, we propose a novel method, namely Align-DETR. It makes use of the standard binary cross-entropy (BCE) loss with an IoU-aware target on foreground samples, which we term the IA-BCE loss. For background samples, we still keep the focal loss [21] to conduct hard-negative mining. The IA-BCE loss depends on a quality metric that combines the classification confidence and regression accuracy as a target and dynamically adjusts the target for foreground samples according to the quality metric, which smooths the training target and strengthens the correlation between classification and regression, hence increasing the chance for BR samples to be selected, as supported by the improved recall of the BR samples depicted in Fig. 1 (note that there is no change to the matching process).
Moreover, Align-DETR takes advantage of the many-to-one matching technique adopted in CNN-based object detectors [40, 11, 28] as well as recent DETR ones [16, 13, 3], which allows it to accelerate the training process. Despite this, another issue arises: assigning more positive samples tends to force some samples to be closely associated with the background. To overcome this difficulty, we then design a prime sample weighting scheme that downgrades the losses of the secondary positive samples.
Overall, Align-DETR offers a simple yet effective solution to the misalignment problem aforementioned, facilitating DETR in terms of better query selection and faster training speed. Equipped with a ResNet-50 [12] backbone and a DAB-DETR [23] baseline, our method achieves +3.8% AP gain under the schedule with 50 epochs. We also combine it with the strong baseline DINO [37] and establish a new SOTA performance with \(50.2\%\) AP in the \(1\times\) setting on the COCO [22] validation set. To sum up, our contributions are three-fold. **First**, we are the first to spot the misalignment problem in the query selection mechanism of DETR and propose a metric, recall of BR samples, to evaluate it qualitatively. **Second**, we propose Align-DETR, which includes the IoU-aware BCE loss and the mixed-matching strategy, strengthened with prime sample weighting, as a strong baseline for the object detection task. **Last**, we conduct extensive experiments and ablations on COCO to validate the effectiveness of the proposed method with new SOTA results reported.
## 2 Related Work
### End-to-end Object Detection
The pursuit of end-to-end object detection or segmentation dates back to several early efforts [27, 29, 30]. They rely on recurrent neural networks (RNNs) to remove duplicates or adopt a complex subnet to replace NMS. Recently, DETR [2] has established a set-prediction framework based on the transformer. Compared to previous work, DETR is rather simpler but still suffers from the downside of slow convergence, with a number of subsequent DETR variants [42, 9, 8, 23, 16, 37] working on this issue. Some methods make improvements on the cross-attention in decoders [8, 25]. Deformable DETR [42] presents a deformable-attention module that only scans a small set of
points near the reference point, while Sparse RCNN [31] makes use of ROI pooling to replace the attention module. Alternatively, others consider introducing prior knowledge into queries. Conditional-DETR [25] incorporates reference coordinates into the position embedding; SMCA [8] applies a modulated Gaussian bias; AnchorDETR [35] and DAB-DETR [23] turn to improving the learnable queries to eliminate the ambiguity of physical meanings; and Efficient DETR [36] conducts dense prediction in place of learnable queries. The recent DN-DETR [16] focuses on the unstable optimization process in one-to-one matching and relies on a denoising mechanism to stabilize the training, which is extended by DINO [37]. Despite the progress achieved, we observe that the problem of misalignment between classification confidence and regression accuracy widely exists in DETR-based detectors, which is still underexplored, leaving much room for improvement. In this work, we focus on this problem and aim at strengthening the correlation between the two sub-tasks.
A few recent studies have noticed the limitations of one-to-one matching and have proposed many-to-one assignment strategies to improve the training efficiency of DETR. Group-DETR [3] accelerates the training process with multiple groups of samples and ground truths. H-DETR [13] introduces a hybrid branch mechanism to increase the training samples without ruining the end-to-end property. Thanks to their efforts, the DETR pipeline benefits from efficient training, but at the cost of complexity and computational burden. For example, Group-DETR [3] and Hybrid-DETR [13] use 3300 (11 groups) and 1800 queries (5x in the extra branch), respectively. In contrast, our proposed strategy does not introduce more queries and keeps the pipeline training-efficient.
### Label Assignment in Object Detection
As CNN-based object detectors developed from the anchor-based framework to the anchor-free one, many works realized the importance of label assignment (previously hidden by anchor and IoU matching) during training. Some works [15, 11, 39] identify positive samples by measuring their dynamic prediction quality for each object.
Others [41, 19, 18, 10] learn the assignment in a soft way and achieve better alignment on prediction quality by incorporating IoU [19, 38] or a combination of IoU and confidence [41, 18, 10]. To make a clear comparison, we summarize some loss designs in Tab. 1.
The misalignment problem in object detection has been addressed by various traditional solutions, such as incorporating an additional IoU branch to fine-tune the confidence scores [14] or integrating the IoU prediction branch into the original classification loss [19]. Although these approaches also design losses [19, 38], they mainly improve the model's performance through the NMS step, as shown in Fig. 2.
In contrast, this work solves the problem solely during training and makes no modifications to the post-processing; we illustrate the comparison in Fig. 2. We propose an IoU-aware BCE loss for DETR to better optimize the model. Notably, our method can also be seen as a soft-label assignment method.
## 3 Method
### Motivation and Framework
**First**, DETR suffers from the problem of misalignment in its output, with inconsistent classification confidence and regression accuracy. This problem reduces the probability of BR samples being the resulting predictions and inevitably produces sub-optimal outputs (_e.g._ inaccurate bounding boxes with confident classification scores). While the problem has been investigated in the previous literature [19, 38, 14], unfortunately, all the methods are either over-complicated in the loss design or not well suited to DETR, which motivates us to design a proper solution dedicated to DETR. **Second**, DETR also suffers from slow
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Method & \(w_{pos}\) & \(w_{neg}\) & t \\ \hline \hline GFL [19] & \((s-t)^{2}\cdot t\) & \((s-t)^{2}\cdot(1-t)\) & \(IoU\) \\ VFL [38] & \(t\cdot t\) & \(t\cdot(1-t)\) & \(IoU\) \\ TOOD [7] & \((s-t)^{2}\cdot t\) & \((s-t)^{2}\cdot(1-t)\) & \(f(IoU,s)\) \\ MuSu [10] & \((s-t)^{2}\cdot t\) & \(s^{2}\cdot(1-t)^{4}\) & \(f(IoU,s)\) \\ DW [18] & \(f_{pos}(IoU,s)\) & \(P_{neg}\cdot In_{neg}\) & - \\ IA-BCE (Ours) & \(t\) & \(1-t\) & \(f(IoU,s)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with some related loss designs.
Figure 2: Comparison of the solutions to the misalignment problem. In CNN-based object detectors, the main efforts are made to guide the NMS [14] toward a better selection, while our IA-BCE focuses on improving the model's relation modeling ability.
convergence. A common practice [33, 40, 3, 13] is to apply a many-to-one matching strategy together with one-to-one matching in different branches to maintain the property of end-to-end prediction. Differently, another line of works [13, 36] applies the strategy at shallow decoder layers while keeping one-to-one matching at the top layers. We choose to follow the second line, as described in Sec. 3.4, since it introduces minimal modification to the original pipeline with negligible extra computational burden. Moreover, we find that the sparsity of queries leads to a problem in many-to-one label assignment: the increased positive samples in matching occasionally involve samples that are similar to the background (with poor quality). To address this issue, we propose a prime sample weighting strategy to configure the training target properly.
We illustrate our framework in Fig. 3 and introduce the detailed implementations in the following sections.
### Preliminaries
The original DETR [2] framework consists of three main components: a CNN backbone, an encoder-decoder transformer [32], and a prediction head. The backbone processes the input image first, and the resulting feature is flattened into a series of tokens \(X=\{x_{1},x_{2},...,x_{m}\}\). Then the transformer extracts information from \(X\) with a group of learnable queries \(Q=\{q_{1},q_{2},...,q_{n}\}\) as containers. At last, the updated queries are transformed into predictions \(P=\{p_{1},p_{2},...,p_{n}\}\) through the prediction head. In most cases, \(n\) is much less than \(m\), making DETR a sparse object detection pipeline.
DETR adopts one-to-one label assignment on all layers to help eliminate redundant predictions. Given the ground truth (GT) set \(G=\{g_{1},g_{2},...,g_{n}\}\), which is padded with background tokens to the same size as the predictions, the one-to-one label assignment finds a minimum-weight bipartite graph matching \(\sigma\in\mathcal{S}\) between \(G\) and \(P\) as:
\[\sigma=\operatorname*{argmin}_{\sigma\in\mathcal{S}}\sum_{i}^{n}\mathcal{L}_ {match}(p_{\sigma(i)},g_{i}), \tag{1}\]
\[\mathcal{L}_{match}=\mathcal{C}_{cls}+\mathcal{C}_{reg}, \tag{2}\]
where \(n\) is the number of predictions, \(\mathcal{S}\) is the set of all permutations and \(\mathcal{L}_{match}\) is a pair-wise matching cost that combines classification cost \(\mathcal{C}_{cls}\) and regression cost \(\mathcal{C}_{reg}\).
DETR uses Hungarian loss \(\mathcal{L}_{o2o}\) as the training target defined as:
\[\mathcal{L}_{o2o}=\frac{1}{N_{gt}}\sum_{i}^{n}\lambda_{cls}\mathcal{L}_{cls}+ \lambda_{reg}\mathcal{L}_{reg}, \tag{3}\]
where \(N_{gt}\) is the number of ground-truth objects, \(\mathcal{L}_{cls}\) is classification loss and \(\mathcal{L}_{reg}\) is regression loss. \(\lambda_{cls}\) and \(\lambda_{reg}\) are loss balance terms.
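For illustration, the one-to-one matching of Eqs. (1)-(2) can be sketched with SciPy's Hungarian solver (a minimal sketch with random cost matrices; the actual cost terms follow DETR [2]):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_match(cost_cls, cost_reg):
    """Bipartite matching of Eq. (1) with L_match = C_cls + C_reg, Eq. (2).

    cost_cls, cost_reg: [n_queries, n_gt] pairwise cost matrices.
    Returns matched (query_indices, gt_indices).
    """
    return linear_sum_assignment(cost_cls + cost_reg)

rng = np.random.default_rng(0)
q_idx, g_idx = one_to_one_match(rng.random((6, 2)), rng.random((6, 2)))
print(q_idx, g_idx)  # each of the 2 GTs is matched to exactly one query
```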
### IoU-aware Classification Loss
Recall that DETR adopts a different mechanism to remove redundant predictions compared to CNN-based object detectors. In the decoder's self-attention modules, the final output is determined through the communication among all the queries, guided by the one-to-one matching as shown in Eq. 1. Additionally, it simply sums the classification cost and regression cost as individual metrics before matching in Eq. 2. This design expects the best samples to be matched in terms of regression and classification.
However, the original matching process ignores the misalignment between the classification score and regression accuracy and thus is incapable of ensuring that the BR samples have a high overlap with HC samples.
Inspired by previous works [19, 38, 14], we use a simple BCE loss with IoU as targets to align these two scores.
Figure 3: The architecture overview of the proposed approach Align-DETR. It mainly consists of two parts, matching and loss computation. One-to-one matching is applied on the output layer to remove duplicated predictions. And many-to-one matching is applied on the intermediate layers to accelerate training. After the matching, predictions are sent to an evaluation process to get their quality and relative ranks (the colored area). These results are then transformed into the targets for computing BCE loss (the gray area).
First, we define \(t\) as the weighted geometric average of the confidence score \(s\) and the IoU score (with ground-truth) \(u\), similar to [10, 7]:
\[t=s^{\alpha}\cdot u^{(1-\alpha)}, \tag{4}\]
where \(\alpha\) is a hyper-parameter that controls the proportion of each term, and we empirically find that \(\alpha=0.25\) leads to good results. The IoU score term in Eq. 4 takes the main role of guiding the model to build a strong relation between IoU accuracy and classification scores. For \(\alpha=0\), \(t=u\) and the loss target is fully IoU-dependent; for \(\alpha=1\), \(t=s\), and no training signal would be applied to \(s\). In practice, we set \(\alpha\) between 0 and 1 to strike a balance between the amount of guidance and training stability. We design an asymmetric classification loss, which assigns different weights to the foreground and background samples:
\[\mathcal{L}_{cls}=\sum_{i}^{N_{pos}}BCE(s_{i},t_{i})+\sum_{j}^{N_{neg}}s_{j}^{ 2}BCE(s_{j},0), \tag{5}\]
where \(s\) is the predicted confidence score and \(t\) is the proposed metric that absorbs the IoU score. For foreground samples, we do not use the focal loss term to suppress "easy" positive samples, since positive samples in DETR are relatively rare, and we want to keep their influence [38]. For background samples, the focal loss weight \(s_{j}^{2}\) is still kept to do the hard negative mining.
A detailed comparison between our method (IA-BCE) and other losses designed for the misalignment problem can be seen in Tab. 1. Different from previous solutions such as MuSu [10], which have complicated forms, our loss design is much simpler, continuing the simplicity of DETR.
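A PyTorch sketch of the IA-BCE loss of Eqs. (4)-(5) is given below. This is our own minimal rendering, not the official implementation; in particular, detaching the score inside the target \(t\) is an implementation assumption made for stability:

```python
import torch
import torch.nn.functional as F

def ia_bce_loss(scores, ious, pos_mask, alpha=0.25):
    """IoU-aware BCE loss, Eqs. (4)-(5).

    scores:   [N] sigmoid confidences s for the matched class.
    ious:     [N] IoU u with the assigned GT (used on positives only).
    pos_mask: [N] bool mask of foreground samples.
    """
    s = scores.clamp(1e-6, 1.0 - 1e-6)
    t = s.detach() ** alpha * ious ** (1.0 - alpha)      # Eq. (4)
    pos = F.binary_cross_entropy(s[pos_mask], t[pos_mask], reduction="sum")
    s_neg = s[~pos_mask]
    neg = (s_neg ** 2 * F.binary_cross_entropy(          # focal weight s_j^2
        s_neg, torch.zeros_like(s_neg), reduction="none")).sum()
    return pos + neg
```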
### Mixed Matching Strategy
Based on the consensus among the previous studies [37, 16, 13, 3, 33] that pure one-to-one label assignment is inefficient in training, we propose a mixed-matching strategy that utilizes many-to-one matching at shallow decoder layers and one-to-one matching at the top layer. As a result, more positive samples can participate in training, leading to faster convergence of the transformer structure.
First, we replicate the GT set \(G\) \(k\) times to obtain \(G^{\prime}\) for each intermediate layer and feed these augmented GTs to the criterion. Then, in the matching process, each GT is assigned to \(k\) predictions, resulting in a total of \((L-1)\times(k-1)\) auxiliary positive samples plus \(L\) original positive ones for each GT, where \(L\) is the number of decoder layers.
Though our mixed training strategy is motivated by the hybrid layer matching in H-DETR [13], it differs in two ways: (a) H-DETR uses an increased number of augmented queries, while we keep the query number unchanged for computational efficiency; (b) H-DETR applies many-to-one matching at only part of the intermediate layers and introduces hand-crafted designs, while we apply it at all decoder layers but the last one.
In addition, our study in Section 4.3.3 reveals that assigning more positive samples does not necessarily yield greater benefits. We attribute this to the sparsity of queries, which causes a dramatic decrease in sample quality as the rank increases. This makes low-ranking positive samples similar to background [41, 37], which interferes with learning the decision boundary. In this work, we compute the weight \(w_{i}\) of each positive sample from its rank \(r_{i}\), sorted by \(t_{i}\) within a group (all samples assigned to one GT), as \(w_{i}=\exp(-r_{i}/\tau)\), where \(\tau\) is a temperature controlling the sharpness. The classification loss is then rewritten as:
\[\mathcal{L}_{cls}=\sum_{i}^{N_{pos}}BCE(s_{i},w_{i}t_{i})+\sum_{j}^{N_{neg}}s_ {j}^{2}BCE(s_{j},0), \tag{6}\]
where \(t_{i}\) is down-weighted by the \(w_{i}\) factor, and it produces a weaker target for secondary samples. We term this trick prime sample weighting. To be consistent, regression losses are also down-weighted by \(w_{i}\) as:
\[\mathcal{L}_{reg}=\sum_{i}^{N_{pos}}w_{i}l_{reg}(\hat{b}_{i},b_{i}), \tag{7}\]
where \(l_{reg}(\hat{b}_{i},b_{i})\) is the regression loss function applied on predicted bounding box \(\hat{b}_{i}\) and GT bounding box \(b_{i}\).
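A sketch of how these rank-based weights could be computed is shown below; the per-GT grouping loop and zero-based ranks (so the prime sample gets \(w=1\)) are our assumptions.

```python
# A sketch of prime sample weighting, w_i = exp(-r_i / tau), with ranks
# computed within each GT group by sorting the quality targets t.
import torch

def prime_sample_weights(t, gt_ids, tau=1.5):
    """t: (N_pos,) quality targets; gt_ids: (N_pos,) matched GT indices."""
    w = torch.ones_like(t)
    for g in gt_ids.unique():
        idx = (gt_ids == g).nonzero(as_tuple=True)[0]
        order = torch.argsort(t[idx], descending=True)
        ranks = torch.empty_like(order)
        ranks[order] = torch.arange(len(idx), device=t.device)  # rank 0 = prime
        w[idx] = torch.exp(-ranks.float() / tau)
    return w  # multiplied into the targets (Eq. 6) and regression losses (Eq. 7)
```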
Our final loss form is defined as:
\[\mathcal{L}=\sum_{l=1}^{L-1}\mathcal{L}_{m2o}(P_{l},G^{\prime})+\mathcal{L}_{ o2o}(P_{L},G), \tag{8}\]
where \(L\) is the number of decoder layers, \(P_{l}\) denotes the predictions of the \(l\)-th layer, and \(\mathcal{L}_{m2o}\) has the same form as \(\mathcal{L}_{o2o}\) except that it receives an augmented version of \(G\).
In summary, Align-DETR introduces an IoU-aware BCE loss (IA-BCE) to solve the misalignment issue and improve DETR's localization precision. It also extends the matching from pure one-to-one to many-to-one to accelerate training. To cope with the sparse sample distribution, we introduce a prime sample weighting mechanism that suppresses the weights of relatively unimportant samples.
## 4 Experiments
### Setup
**Datasets.** We conduct all our experiments on the MS-COCO 2017 [22] detection track and report our results with the mean average precision (mAP) metric on the validation dataset.
**Implementation details.** We use DAB-DETR [23] and DINO [37] as our baseline methods. The DAB-DETR baseline employs a standard transformer architecture that takes single-scale features as inputs. The DINO baseline adopts the deformable transformer [42] and multi-scale features as inputs. Our hyperparameters are set as \(k=3\), \(\alpha=0.25\), and \(\tau=1.5\). To optimize the models, we set the initial learning rate to \(1\times 10^{-4}\), with the backbone learning rate scaled by \(0.1\). We use AdamW with \(1\times 10^{-4}\) weight decay as the optimizer and a batch size of 16 for all our experiments. Our code is built on the open-source library detrex [5], keeping its other default hyper-parameter settings.
To ensure a fair comparison, we group single-scale methods and multi-scale methods separately. For comparison with single-scale DETR variants, we train the
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & \#epochs & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\) & AP\({}_{L}\) & Params & GFLOPS \\ \hline DETR-R50 [2] & \(500\) & \(42.0\) & \(62.4\) & \(44.2\) & \(20.5\) & \(45.8\) & \(61.1\) & \(41\)M & \(86\) \\ Anchor DETR-R50 [35] & \(50\) & \(42.1\) & \(63.1\) & \(44.9\) & \(22.3\) & \(46.2\) & \(60.0\) & \(39\)M & \(--\) \\ Conditional DETR-R50 [25] & \(50\) & \(40.9\) & \(61.8\) & \(43.3\) & \(20.8\) & \(44.6\) & \(59.2\) & \(44\)M & \(90\) \\ DAB-DETR-R50 [23] & \(50\) & \(42.2\) & \(63.1\) & \(44.7\) & \(21.5\) & \(45.7\) & \(60.3\) & \(44\)M & \(94\) \\ DN-DETR-R50 [16] & \(50\) & \(44.1\) & \(64.4\) & \(46.7\) & \(22.9\) & \(48.0\) & \(63.4\) & \(44\)M & \(94\) \\ Align-DETR-R50 & \(50\) & \(46.0\) & \(64.9\) & \(49.5\) & \(25.2\) & \(50.5\) & \(64.7\) & \(42\)M & \(94\) \\ \hline DETR-R101 [2] & \(500\) & \(43.5\) & \(63.8\) & \(46.4\) & \(21.9\) & \(48.0\) & \(61.8\) & \(60\)M & \(152\) \\ Anchor DETR-R101 [35] & \(50\) & \(43.5\) & \(64.3\) & \(46.6\) & \(23.2\) & \(47.7\) & \(61.4\) & \(58\)M & \(--\) \\ Conditional DETR-R101 [25] & \(50\) & \(42.8\) & \(63.7\) & \(46.0\) & \(21.7\) & \(46.6\) & \(60.9\) & \(63\)M & \(156\) \\ DAB-DETR-R101 [23] & \(50\) & \(43.5\) & \(63.9\) & \(46.6\) & \(23.6\) & \(47.3\) & \(61.5\) & \(63\)M & \(174\) \\ DN-DETR-R101 [16] & \(50\) & \(45.2\) & \(65.5\) & \(48.3\) & \(24.1\) & \(49.1\) & \(65.1\) & \(63\)M & \(174\) \\ Align-DETR-R101 & \(50\) & \(46.9\) & \(65.5\) & \(50.9\) & \(25.6\) & \(51.9\) & \(66.0\) & \(61\)M & \(174\) \\ \hline DETR-DC5-R50 [2] & \(500\) & \(43.3\) & \(63.1\) & \(45.9\) & \(22.5\) & \(47.3\) & \(61.1\) & \(41\)M & \(187\) \\ Anchor DETR-DC5-R50 [35] & \(50\) & \(44.2\) & \(64.7\) & \(47.5\) & \(24.7\) & \(48.2\) & \(60.6\) & \(39\)M & \(151\) \\ Conditional DETR-DC5-R50 [25] & \(50\) & \(43.8\) & \(64.4\) & \(46.7\) & \(24.0\) & \(47.6\) & \(60.7\) & \(44\)M & \(195\) \\ DAB-DETR-DC5-R50 [23] & \(50\) & \(44.5\) & \(65.1\) & \(47.7\) & \(25.3\) & \(48.2\) & \(62.3\) & \(44\)M & \(202\) \\ DN-DETR-DC5-R50 [16] & \(50\) & \(46.3\) & \(66.4\) & \(49.7\) & \(26.7\) & \(50.0\) & \(64.3\) & \(44\)M & \(202\) \\ Align-DETR-DC5-R50 & \(50\) & \(48.3\) & \(66.7\) & \(52.5\) & \(29.7\) & \(52.8\) & \(65.9\) & \(42\)M & \(200\) \\ \hline DETR-DC5-R101 [2] & \(500\) & \(44.9\) & \(64.7\) & \(47.7\) & \(23.7\) & \(49.5\) & \(62.3\) & \(60\)M & \(253\) \\ Anchor DETR-DC5-R101 [35] & \(50\) & \(45.1\) & \(65.7\) & \(48.8\) & \(25.8\) & \(49.4\) & \(61.6\) & \(58\)M & \(--\) \\ Conditional DETR-DC5-R101 [25] & \(50\) & \(45.0\) & \(65.5\) & \(48.4\) & \(26.1\) & \(48.9\) & \(62.8\) & \(63\)M & \(262\) \\ DAB-DETR-DC5-R101 [23] & \(50\) & \(45.8\) & \(65.9\) & \(49.3\) & \(27.0\) & \(49.8\) & \(63.8\) & \(63\)M & \(282\) \\ DN-DETR-DC5-R101 [16] & \(50\) & \(47.3\) & \(67.5\) & \(50.8\) & \(28.6\) & \(51.5\) & \(65.0\) & \(63\)M & \(282\) \\ Align-DETR-DC5-R101 & \(50\) & \(49.3\) & \(67.4\) & \(53.7\) & \(30.6\) & \(54.3\) & \(66.4\) & \(61\)M & \(280\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of our Align-DETR using DAB-DETR [23] as the baseline and other models. All models here use 300 queries except for DETR which uses 100 queries.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Model & \#epochs & \# queries & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\) & AP\({}_{L}\) & Params & GFLOPS \\ \hline Faster RCNN-FPN [28] & \(108\) & \(--\) & \(42.0\) & \(62.1\) & \(45.5\) & \(26.6\) & \(45.5\) & \(53.4\) & \(42\)M & \(180\) \\ SMCA [8] & \(50\) & \(300\) & \(43.7\) & \(63.6\) & \(47.2\) & \(24.2\) & \(47.0\) & \(60.4\) & \(40\)M & \(--\) \\ Deformable DETR [42] & \(50\) & \(300\) & \(45.4\) & \(64.7\) & \(49.0\) & \(26.8\) & \(48.3\) & \(61.7\) & \(40\)M & \(235\) \\ DN-DETR [16] &
Align-DETR based on DAB-DETR [23] for 50 epochs and decay the learning rate by a factor of 0.1 at the 40th epoch. For comparison with multi-scale object detectors, we train Align-DETR based on DINO [37] for 1x and 2x schedules.
### Main Results
#### 4.2.1 Comparison with Single-scale Methods
Using the DAB-DETR [23] as the baseline, we conduct a series of experiments on different CNN backbones, including ResNet-50 (R50), ResNet-101 (R101), ResNet-50-DC5 (DC5-R50) and ResNet-101-DC5 (DC5-R101) to validate the effectiveness of our proposed Align-DETR. Our method is mainly compared with other competitive DETR variants, including our baseline DAB-DETR [23].
The results are summarized in Tab. 2. Our method Align-DETR surpasses the baseline by a large margin, _e.g._, a 3.8% AP improvement on R50. Among the reported results, DN-DETR shows strong performance; even so, Align-DETR leads DN-DETR by 1\(\sim\)2% AP across several backbone settings. These improvements are concentrated in high IoU-threshold metrics such as AP\({}_{75}\), which supports our assumption that misalignment is a critical issue degrading the localization precision of DETR.
#### 4.2.2 Comparison with Multi-scale Methods
We conduct experiments using DINO [37] as the baseline, which adopts the deformable transformer architecture. DINO uses tricks such as CDN, look-forward-twice, and bounding box refinement for better performance; we follow DINO's approach and adopt these tricks. Regarding the backbone, we use R50 with 4-scale features (P3, P4, P5, and P6) as input.
The results are presented in Tab. 3. Despite the highly optimized structure of DINO [37], our method still outperforms it by 1.2% and 0.9% AP in the 1x and 2x schedules, respectively. This indicates that even advanced DETR variants can be affected by the misalignment problem. We then compare Align-DETR to two recent state-of-the-art methods, Group-DETR [3] and H-DETR [13], and find that Align-DETR achieves higher AP while using fewer queries, demonstrating superior computational efficiency. It is worth noting that Align-DETR also outperforms other competitors such as SMCA [8], Faster RCNN-FPN [28], and Deformable-DETR [42] with a much shorter training schedule. These results suggest that Align-DETR is a highly effective and efficient method for object detection tasks.
#### 4.2.3 Comparison with Related Methods
In addition to comparisons with state-of-the-art DETR variants, we also implement methods like Quality Focal Loss (QFL) [19] and Varifocal Loss (VFL) [38] on DINO [37]; the results are presented in Tab. 4. Interestingly, we find that the IoU branch brings limited improvement to the performance. We speculate that this is because the duplicate-removal process in DETR is performed in the self-attention module through relation modeling; as a result, multiplying the IoU score with the confidence score only slightly changes the order of predictions and the ranking result in AP computation [1]. QFL [19] also performs poorly in our experiments, and we suspect its loss shape is over-smoothed by the focal loss term. Notably, we observe that decreasing the value of \(\gamma\) improves performance. These findings support our hypothesis that DETR's positive samples are rare and should not be suppressed too much. Compared to the most related method, VFL [38], our approach achieves higher average precision. VFL also drops the focal loss term, but our simpler design provides positive samples with stronger gradients, particularly when \(t\) is small during the early training stage, which is likely a contributing factor to the superior performance of our method.
### Ablation Study
We conduct a series of ablation studies to validate the effectiveness of the components. All experiments here use an R50 backbone and the standard 1x training schedule.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & AP & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline baseline (Focal loss) & 49.0 & 66.0 & 53.5 \\ \hline w/ IoU branch* [14] & 49.0 & 66.0 & 54.0 \\ w/ IoU branch [14] & 49.2 & 66.3 & 53.5 \\ w/ QFL (\(\gamma=2\)) [19] & 47.6 & 64.3 & 51.8 \\ w/ QFL (\(\gamma=1\)) [19] & 48.6 & 65.7 & 53.5 \\ w/ VFL [38] & 48.7 & 67.0 & 52.3 \\ w/ IA-BCE (Ours) & 50.0 & 67.8 & 54.4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison with other methods on the misalignment problem on COCO val. * indicates that no score fusion is performed. For the score fusion step, we set the weight factor of the IoU score to 0.3 for best results.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline IA-BCE & Mixed Matching & AP & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline ✗ & ✗ & 49.0 & 66.0 & 53.5 \\ ✓ & ✗ & 50.0 & 67.8 & 54.2 \\ ✗ & ✓ & 49.1 & 67.5 & 53.4 \\ ✓ & ✓ & 50.2 & 67.8 & 54.4 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of Align-DETR on each component in terms of AP on COCO val. The results demonstrate the effectiveness of our proposed components.
#### 4.3.1 Ablation Study on Components
We conduct a series of ablations on the main components, including IA-BCE and Mixed Matching, to assess their effectiveness; the results are summarized in Tab. 5. Both components contribute to the final performance, with the main contribution coming from the IA-BCE loss. As for the mixed matching strategy, we speculate that, since DINO [37] already uses CDN groups to support training, its gain may partially overlap with ours.
To further investigate the impact of the hyper-parameters we introduced, _i.e._, \(\alpha\), \(k\), and \(\tau\), we conduct a sensitivity analysis by changing one variable while keeping the others fixed. Our default values are \(k=3\), \(\alpha=0.25\), and \(\tau=1.5\). As shown in Tab. 6, \(\alpha\) has the greatest influence on performance, while \(\tau\) and \(k\) have moderate effects. This sensitivity analysis supports our hypothesis that \(\alpha\) should be kept small to prevent effective training signals from being suppressed.
#### 4.3.2 Ablation Study on IoU-aware BCE Loss
To validate the generalizability of our approach, we apply IA-BCE to a strong competitor, H-Deformable-DETR, and achieve a 0.6% AP improvement, as shown in Tab. 7.
#### 4.3.3 Ablation Study on Prime Sample Weighting
We conduct ablation studies to validate the effectiveness of our proposed prime sample weighting. The results are presented in Tab. 8. After disabling prime sample weighting (PS Weighting), performance drops by around 0.5% AP. The comparison between models trained with and without prime sample weighting supports the effectiveness of this trick.
### Visualization
To delve deeper into the improvement of our approach, we visualize and compare the confidence-IoU distributions of the outputs from the two models, DINO and Align-DETR. The density map is rendered as a color map in which a bright area represents a peak of the distribution. As shown in Fig. 4, our approach aligns the two scores better. To analyze the data quantitatively, we calculate the Pearson coefficient between IoU and confidence, confirming the superiority of our method.
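This alignment diagnostic is straightforward to reproduce; in the sketch below, the arrays are synthetic stand-ins for a model's matched confidences and IoUs.

```python
# Sketch of the confidence-IoU alignment diagnostic (Pearson correlation);
# the arrays are synthetic placeholders for matched predictions.
import numpy as np

rng = np.random.default_rng(0)
conf = rng.random(1000)                                      # matched confidences
iou = np.clip(conf + 0.1 * rng.standard_normal(1000), 0, 1)  # stand-in IoUs
r = np.corrcoef(conf, iou)[0, 1]                             # higher r = better aligned
print(f"Pearson r = {r:.3f}")
```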
## 5 Conclusion
This paper examines a common problem in DETR, _i.e._, the misalignment problem, and demonstrates its existence and importance. To address it, we propose a simple solution denoted IA-BCE, which provides strong supervision on the correlation between the classification score and the localization precision. We also address the training inefficiency of DETR, which is partially due to one-to-one matching. To better optimize this paradigm, we propose a mixed matching strategy that applies many-to-one matching to intermediate predictions to provide richer training signals. To overcome the negative effect brought by low-quality label assignments, we pro
\begin{table}
\end{table}
Table 6: Influence of hyper-parameters \(\alpha\), \(k\), and \(\tau\) on our approach on COCO val.
\begin{table}
\end{table}
Table 7: Our IA-BCE applied on H-DETR on COCO val. All experiments are run with a 1x schedule.
Figure 4: The heat-map visualizes the joint confidence-IoU distribution of matched samples in DINO and Align-DETR. The confidence of our approach is scaled so that the maximum is 1.
\begin{table}
\end{table}
Table 8: Ablation study of prime sample weighting on COCO val. The first row represents the baseline where \(k=1\) and no prime sample weighting is applied.
pose prime sample weighting to suppress the interference of unimportant samples. Competitive experimental results are achieved on the common COCO benchmark, demonstrating the effectiveness of our approach.
|
2304.08614 | Signal Processing Grand Challenge 2023 -- e-Prevention: Sleep Behavior
as an Indicator of Relapses in Psychotic Patients | This paper presents the approach and results of USC SAIL's submission to the
Signal Processing Grand Challenge 2023 - e-Prevention (Task 2), on detecting
relapses in psychotic patients. Relapse prediction has proven to be
challenging, primarily due to the heterogeneity of symptoms and responses to
treatment between individuals. We address these challenges by investigating the
use of sleep behavior features to estimate relapse days as outliers in an
unsupervised machine learning setting. We extract informative features from
human activity and heart rate data collected in the wild, and evaluate various
combinations of feature types and time resolutions. We found that short-time
sleep behavior features outperformed their awake counterparts and larger time
intervals. Our submission was ranked 3rd in the Task's official leaderboard,
demonstrating the potential of such features as an objective and non-invasive
predictor of psychotic relapses. | Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan | 2023-04-17T21:02:46Z | http://arxiv.org/abs/2304.08614v1 | # Signal Processing Grand Challenge 2023 - E-Prevention:
###### Abstract
This paper presents the approach and results of USC SAIL's submission to the Signal Processing Grand Challenge 2023 - e-Prevention (Task 2), on detecting relapses in psychotic patients. Relapse prediction has proven to be challenging, primarily due to the heterogeneity of symptoms and responses to treatment between individuals. We address these challenges by investigating the use of sleep behavior features to estimate relapse days as outliers in an unsupervised machine learning setting. We extract informative features from human activity and heart rate data collected in the wild, and evaluate various combinations of feature types and time resolutions. We found that short-time sleep behavior features outperformed their awake counterparts and larger time intervals. Our submission was ranked 3rd in the Task's official leaderboard, demonstrating the potential of such features as an objective and non-invasive predictor of psychotic relapses.
Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan
Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA 90089
Keywords: Psychotic Relapses, Anomaly Detection, Unsupervised Learning, Biosignals, Sleep Behavior
## 1 Introduction
Psychotic disorders are a category of mental illnesses that can significantly impact an individual's thoughts, emotions, and behavior. These disorders are characterized by a distorted perception of reality, which can manifest in the form of delusions, hallucinations, disordered thinking, and other cognitive impairments [1]. Despite numerous related studies in neurophysiology [2], effective biomarkers of psychotic episodes and relapses have not yet been established, owing to the wide range of symptoms and variable treatment responses observed in patients [3]. The potential of such biomarkers to enable timely diagnosis or even prevention of psychotic episodes thus remains a prominent challenge in psychiatry.
Artificial Intelligence, and Machine Learning in particular, have emerged as promising tools in the search for and identification of such markers, especially those derived from biosensors [4]. With the growing adoption of wearables in everyday life, the potential of these studies keeps increasing. The e-Prevention project [5, 6] contributes in this direction by collecting features from physiological measurements of subjects on the psychotic spectrum over the course of 6 months.
The ICASSP Signal Processing Grand Challenge (SPGC) 2023 aimed to provide resources and algorithms to advance relapse-day detection in the bio-sensing context. Our challenge submission, focused on sleep-related sensing, was ranked 3rd in the official leaderboard. Sleep disorders have been shown to correlate with psychotic episodes [7]. In this paper we present our approach, including feature extraction schemes over multiple temporal resolutions and over sleep and awake periods. Our results, using an Isolation Forest algorithm for outlier detection, discriminate relapse from normal days with an AUC of 60.5% on the test set.
## 2 Data Processing
### Dataset
As part of the e-Prevention project, 37 patients on the psychotic spectrum were recruited and provided with a Samsung Gear S3 smartwatch. Through the wearable, the researchers collected measures of linear and angular acceleration (20 Hz), RR peak intervals derived via photoplethysmography (5 Hz), sleeping schedule, and step count, for a total monitoring period of up to 2.5 years. Clinicians then annotated patients' relapse days in cooperation with their physicians. The challenge provided us with a subset of 6 months of daily data for 10 patients. The data come in three splits: a train split containing only non-relapse days, a validation split with days labeled as relapse or non-relapse, and an unlabeled test split.
### Feature Extraction
In the pre-processing stage, we removed outliers and imputed missing values over 1-hour windows using the Hampel method [8]. For feature extraction, we derived several types of features from 5-minute intervals of the processed time series, which were then aggregated at various resolutions. Specifically, we computed the normalized energy of the accelerometer and gyroscope measurements to account for changes in movement and activity levels. We also extracted the mean heart rate (BPM) and heart rate variability (HRV) from the RR intervals. Initially, the power spectral density of these features was estimated using Welch's method, after which the low-frequency (LF) and high-frequency (HF) bands, along with their respective fractions, were isolated [5]. Daily sinusoidal encoding was employed for the timestamp feature.
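For illustration, a minimal Hampel-style filter over a pandas Series might look as follows; the window length and threshold are illustrative choices rather than the exact settings used in the submission.

```python
# A minimal sketch of Hampel filtering: flag points deviating from the
# rolling median by more than n_sigmas robust standard deviations and
# replace them with the median. Window/threshold values are illustrative.
import pandas as pd

def hampel(series: pd.Series, window: int = 12, n_sigmas: float = 3.0) -> pd.Series:
    med = series.rolling(window, center=True, min_periods=1).median()
    mad = (series - med).abs().rolling(window, center=True, min_periods=1).median()
    sigma = 1.4826 * mad                 # MAD -> std under a Gaussian assumption
    outliers = (series - med).abs() > n_sigmas * sigma
    return series.mask(outliers, med)    # replace flagged points by the median
```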
Regarding the step count data, we incorporated the provided features and additionally computed step size and speed by converting the start and end times of the steps to seconds. The data are then aggregated over 5-minute intervals by summing the step counts and averaging the distance, calories, step size, and speed.
## 3 Experimental Setup
The feature vectors outlined above are formed by taking the mean and standard deviation within non-overlapping 5-minute intervals, also aggregated to 1-hour intervals. After extracting the features for each participant, we scale them to unit norm and concatenate them across participants to run subject-agnostic trials. We compare using data from sleep (or awake) periods only, with and without step-count information.
**Models:** The problem at hand is essentially a novelty detection task, wherein the objective is to identify whether a given sample is an outlier in the absence of any outlier data in the training set. To address this task, we evaluated a range of tree-based and clustering-based algorithms and selected Isolation Forest [9] for further experiments. Isolation Forest is a tree-based ensemble method that randomly selects a feature and then randomly splits between its extreme values. The number of splits required to isolate a sample serves as a measure of normality, since random partitioning produces shorter paths for anomalies; this measure is averaged over a forest of such random trees.
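A minimal scikit-learn sketch of this setup is shown below; the synthetic arrays and hyperparameters stand in for the actual feature matrices and tuned settings.

```python
# Sketch of the unsupervised outlier-detection setup: fit an Isolation
# Forest on non-relapse training days and score validation days. The data
# and hyperparameters are placeholders, not the tuned configuration.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
X_train = normalize(rng.normal(size=(180, 32)))   # non-relapse day features
X_val = normalize(rng.normal(size=(60, 32)))      # days to be ranked

clf = IsolationForest(n_estimators=200, random_state=0).fit(X_train)
anomaly_scores = -clf.score_samples(X_val)        # higher = more relapse-like
```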
**Evaluation:** Given that the task involved a ranking evaluation of relapse days, we assessed the model's performance and statistical significance by computing ROC-AUC and PR-AUC ranking scores. The ultimate reported metric is their harmonic mean, computed across subjects. In the absence of sleep-oriented features, we calculate the AUC score on the awake data and standardize based on the optimal relapse threshold of the sleep-related predictions.
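The reported metric can be computed as below, assuming binary day labels and the anomaly scores from the detector above; `average_precision_score` serves as the PR-AUC estimate.

```python
# Sketch of the evaluation metric: harmonic mean of ROC-AUC and PR-AUC
# over ranked anomaly scores (y = 1 marks a relapse day).
from sklearn.metrics import roc_auc_score, average_precision_score

def harmonic_auc(y, scores):
    roc = roc_auc_score(y, scores)
    pr = average_precision_score(y, scores)   # PR-AUC estimate
    return 2 * roc * pr / (roc + pr)
```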
## 4 Results
Table 1 presents the results of all versions of our experiments, with each score indicating the aggregate AUC metric, calculated over the validation-set samples that include the respective features. Our analysis revealed that 5-minute-level features were more discriminative than 1-hour-resolution features, with a 5-7% increase in performance during sleep and awake periods, respectively. Additionally, features extracted during sleep were more informative than those obtained throughout the day, resulting in 5-10% increases in AUC score; this holds despite the smaller number of samples, and the overall computation was three times faster with the sleep features than with features from the entire day. Concatenating step-count information also improved performance, though only slightly. Our experimentation suggests that the optimal setting is 5-minute data with step features during sleep, scaled to unit norm.
## 5 Conclusion
This paper presents USC SAIL's approach and results in the Signal Processing Grand Challenge 2023 for detecting relapses in psychotic patients. We investigated the use of sleep activity and heart rate features and evaluated various combinations of feature types and time resolutions. Our results show that short-time sleep behavior features outperformed their awake counterparts and larger time intervals, scoring 64.5% on the validation set and 60.5% on the test set.
## 6 Acknowledgements
We thank the USC SAIL members who competed in SPGC Task 1, Anfeng Xu and Tiantian Feng, for their contributions, as well as our collaborators from the Department of Artificial Intelligence, Wroclaw University of Science and Technology, Stanislav Saganowski, and Bartosz Per.
|
2304.11345 | Studies of two-dimensional material resistive random-access memory by
kinetic Monte Carlo simulations | Resistive memory based on 2D WS2, MoS2, and h-BN materials has been studied,
including experiments and simulations. The influence of different active
layer thicknesses has been discussed through experiments and simulations.
The thickness with the best On/Off ratio is also found for the 2D RRAM. This
work reveals fundamental differences between a 2D RRAM and a conventional oxide
RRAM. Furthermore, from the physical parameters extracted with the KMC model,
the 2D materials have a lower diffusion activation energy from the vertical
direction, where a smaller bias voltage and a shorter switching time can be
achieved. It was also found the diffusion activation energy from the CVD-grown
sample is much lower than the mechanical exfoliated sample. The result shows
MoS2 has the fastest switching speed among three 2D materials. | Ying-Chuan Chen, Yu-Ting Chao, Edward Chen, Chao-Hsin Wu, Yuh-Renn Wu | 2023-04-22T07:54:05Z | http://arxiv.org/abs/2304.11345v2 | # Studies of 2D Material Resistive Random-Access Memory by Kinetic Monte Carlo Simulation
###### Abstract
Resistive memory based on 2D WS\({}_{2}\), MoS\({}_{2}\), and h-BN materials has been studied through experiments and simulations, and the influence of different active layer thicknesses has been discussed. The thickness with the best On/Off ratio is also found for the 2D RRAM. This work reveals fundamental differences between a 2D RRAM and a conventional oxide RRAM. Furthermore, from the physical parameters extracted with the KMC model, the 2D materials have a lower diffusion activation energy along the vertical direction, so a smaller bias voltage and a shorter switching time can be achieved. It was also found that the diffusion activation energy of the CVD-grown sample is much lower than that of the mechanically exfoliated sample. The results show that MoS\({}_{2}\) has the fastest switching speed among the three 2D materials.
## I Introduction
With the increasing popularity of artificial intelligence (AI) and the Internet of Things (IoT), the demand for storage devices is also increasing. However, traditional storage devices cannot meet such demands: flash memory, for example, suffers from insufficient endurance, while cache memory capacity is too small. This has created demand for new types of memory. Currently, the most prominent class is storage class memory (SCM), characterized by good access speed and larger capacity than conventional caches; resistive random-access memory (RRAM) is one of its candidates. Compared with traditional memory, RRAM has the advantages of high memory density (\(\sim\)2.5 times that of NOR FLASH), high switching speed (\(<10\) ns), and good endurance (\(>10^{6}\) cycles) [1; 2].
The commonly used materials for RRAM are mainly transition metal oxides (TMOs), fabricated in a metal/insulator/metal (MIM) sandwich structure. These devices established the characteristic metrics of RRAM, and researchers began to explore new materials. With the rapid development of materials science in recent years, two-dimensional (2D) materials have attracted wide attention, and several research teams have begun to fabricate RRAM from 2D materials such as graphene, hexagonal boron nitride (h-BN), molybdenum disulfide (MoS\({}_{2}\)), tungsten disulfide (WS\({}_{2}\)), and molybdenum ditelluride (MoTe\({}_{2}\)) [3; 4; 5; 6; 7]. 2D materials have potential for back-end-of-line (BEOL) devices and monolithic 3-dimensional (3D) integrated circuits due to their low thermal budget [8]. WS\({}_{2}\) and MoS\({}_{2}\) are currently promising 2D candidates for logic applications owing to their high mobility and large bandgaps. To effectively suppress the delay time and power consumption between logic and memory layers, embedded WS\({}_{2}\)/MoS\({}_{2}\) resistive random-access memory (RRAM) can meet this requirement and can further be used for in-memory and neuromorphic computing [9]. In this work, WS\({}_{2}\) RRAMs with different thicknesses have been fabricated with gold/titanium (Au/Ti) contacts; MoS\({}_{2}\) and h-BN 2D devices are also made for comparison. To extract the material properties, we applied the kinetic Monte Carlo (KMC) method implemented in Ginestra [10] to fit the experiments, which helps us find optimized structures in the RRAM design. Unlike analytical models, which often oversimplify the conduction mechanism, the physics-based KMC method coupled with Poisson and drift-diffusion solvers can extract the diffusion activation energy and obtain the defect formation and distribution, which can be used to estimate the thickness-dependent switching behavior. The retention failure time and the breakdown voltage predicted by the physics-based model are in good agreement with experiments. Furthermore, we find the thickness with the best On/Off ratio from the extracted WS\({}_{2}\) parameters and obtain temperature-dependent characteristics. Then, ion transport properties are compared for RRAM fabricated by chemical vapor deposition (CVD) and mechanical exfoliation. Finally, we compare the differences in the material properties of WS\({}_{2}\), MoS\({}_{2}\), and h-BN RRAM and discuss the feasibility of 2D RRAM.
## II Method
### Device Fabrication and Measurement
This study conducted experiments and simulations on WS\({}_{2}\), MoS\({}_{2}\), and h-BN RRAM. The device structures are shown in FIG. 1 (a), (b), and (c), which use Au/Ti as the top and bottom electrodes. WS\({}_{2}\) is fabricated with asymmetric electrodes, and MoS\({}_{2}\) and h-BN are fabricated with symmetric electrodes for the experiments.
The active layers of the devices utilized various thicknesses of WS\({}_{2}\). 60 nm Au and 10 nm Ti were deposited on a heavily doped p-type silicon substrate as the bottom electrode (BE) of the RRAM device by e-gun evaporation. WS\({}_{2}\) flakes were obtained from single-crystal bulk material by mechanical exfoliation and then transferred onto the BE using polydimethylsiloxane. Afterward, 60/10 nm Au/Ti was deposited on the WS\({}_{2}\) flake as the top electrode (TE) to form an asymmetric-electrode WS\({}_{2}\) RRAM. The cross-sectional area of the RRAM device is controlled through the overlap of the TE and BE, as shown in FIG. 1 (f). The vertical structure of the device was observed and measured by transmission electron microscopy (TEM) and atomic force microscopy (AFM), as shown in FIG. 1 (g) and (h).
### Simulation Methodology
In this simulation, the switching states of the RRAM device were simulated to obtain the I-V curves using the Ginestra software. This software uses kinetic Monte Carlo (KMC) modeling [11; 12] to simulate the generation, diffusion, and recombination of defects (vacancies and ions) in the RRAM device. The physics solver calculates the current, including charge transport, temperature dependence, and the 3D spatial defect distribution. The KMC model is used to simulate the dynamic defect distribution of the active layer. The physical models of WS\({}_{2}\), MoS\({}_{2}\), and h-BN RRAM are built from the experimental data, and the physical parameters of the three 2D materials are extracted.
FIG. 1 (d) and (e) show the device structures of the low resistance state (LRS) and high resistance state (HRS), respectively. It can be seen that the former has a complete conductive filament (CF), while in the latter the CF becomes sparser or even ruptured because ions and vacancies recombine under the applied bias. Some defects assisting carrier transport are removed by recombination, causing the current to drop after the reset operation, and the trend of this current drop is highly correlated with the drift and diffusion of ions in the lattice. In order to shorten the RRAM simulation time, we form only one CF in the small cross-sectional model, which causes the current drop to be discontinuous during the reset operation; therefore, the reset I-V curve of each device is the result of averaging many simulated curves. A device with a large cross-section forms more than one CF, each in a different conduction state, so using statistical averaging is closer to
Figure 1: (a) The schematic figures show the WS\({}_{2}\), (b) MoS\({}_{2}\), and (c) h-BN RRAM structures. (d) 33 nm WS\({}_{2}\) RRAM LRS device structure. (e) 33 nm WS\({}_{2}\) RRAM HRS device structure. (f) The RRAM was fabricated by exfoliated WS\({}_{2}\) and the conducting area was determined by the TE/BE overlap, shown in scanning electron microscopy (SEM) analysis. (g) The cross-section of the WS\({}_{2}\) RRAM was analyzed by TEM. (h) AFM scanning showed the WS\({}_{2}\) thickness.
the conduction state of the actual device.
From the distribution of ions in FIG. 1 (d) or (e), it can be seen that ions gradually diffuse around the CF, so the HRS resistance decreases progressively over repeated resistance-state switching. Therefore, the in-plane diffusion of ions is highly related to the endurance of the device. The 2D RRAM is polarity-dependent, exhibiting the typical bipolar switching mode: usually, the forming operation is performed under a positive bias, the set operation under forward bias, and the reset operation under reverse bias. If a negative bias is applied during forming, the set/reset switching voltages are reversed. In the experiments, both positive and negative biases were used for the forming operation across devices, so the set/reset bias directions may be opposite on different devices. In the simulations, a positive bias is uniformly applied for the forming operation to facilitate subsequent analysis.
Vacancies and ions are generated in the device during the forming or set operation by continuously increasing the applied bias. The generation rate depends on the 3D electric field in the device [13; 14] and follows the Arrhenius form
\[R_{A,G}(x,y,z)=\nu\;\exp\left[-\frac{E_{A,G}-p_{0}(2+\varepsilon_{r})/3\cdot F(x,y,z)}{k_{B}T}\right], \tag{1}\]
where \(\nu\) is a frequency prefactor, \(E_{A,G}\) is the zero-field generation activation energy, \(p_{0}\) is the polarizability, \(\varepsilon_{r}\) is the relative permittivity, \(k_{B}\) is the Boltzmann constant, and \(T\) is the temperature. The electric field drifts the sulfur ions in the active layer; the diffusion rate depends on the local effective electric field along the diffusion direction and also follows an Arrhenius form:
\[R_{A,D}(x,y,z)=\nu\;\exp\left[-\frac{E_{A,D}(x,y,z)-\gamma F_{eff}(x,y,z)}{k_{B}T}\right], \tag{2}\]
where \(E_{A,D}\) is the diffusion activation energy, \(\gamma\) is the field acceleration factor, and \(F_{eff}(x,y,z)\) is the local effective field along the ion diffusion direction by the unit vector \(\hat{\mathbf{r}}\).
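Numerically, the two rates are simple to evaluate. The sketch below uses the exfoliated-WS\({}_{2}\) parameters from TABLE III (with \(\varepsilon_{r}=6\) from TABLE I) and fields in V/m; the unit conversion of \(p_{0}\) and \(\gamma\) from e\(\cdot\)Å to e\(\cdot\)m is our bookkeeping.

```python
# Sketch of the KMC event rates of Eqs. 1-2, using the exfoliated-WS2
# parameters from TABLE III (E_AG = 1.11 eV, E_AD(Z) = 0.39 eV,
# nu = 4.5e13 Hz, p0 = 9 e*Angstrom, gamma = 0.2 e*Angstrom).
import numpy as np

KB = 8.617e-5                    # Boltzmann constant, eV/K
E_ANG = 1e-10                    # 1 e*Angstrom expressed in e*m

def generation_rate(F, T=300.0, nu=4.5e13, E_ag=1.11, p0=9 * E_ANG, eps_r=6.0):
    """Eq. 1: defect generation rate; F is the local field in V/m."""
    barrier = E_ag - p0 * (2 + eps_r) / 3 * F   # p0*F carries units of eV
    return nu * np.exp(-barrier / (KB * T))

def diffusion_rate(F_eff, T=300.0, nu=4.5e13, E_ad=0.39, gamma=0.2 * E_ANG):
    """Eq. 2: ion diffusion rate along the local effective field."""
    return nu * np.exp(-(E_ad - gamma * F_eff) / (KB * T))
```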
Retention time is how long a memory can retain a bit state at a specific temperature. For non-volatile memory, the most stringent requirement is the ability to retain data for more than ten years (about \(3.1536\times 10^{8}\) seconds) at operating temperatures up to 85\({}^{\circ}\)C. In order to shorten the measurement time, the device is placed in a high-temperature environment, and the resistance is recorded over time during baking. By recording the retention failure time at several temperatures, an Arrhenius plot can be drawn to extract the activation energy, which is then extrapolated to the working temperature to obtain the retention time. According to the experimental report by Bin Gao et al. [15], the retention time can also be calculated from the generation activation energy through theoretical formulas. The generation probability at zero bias is defined as
\[p=\exp(-E_{a}/k_{B}T), \tag{3}\]
where \(E_{a}\) is the generation activation energy, \(k_{B}\) is the Boltzmann constant, and \(T\) is the temperature. The retention failure time of the device is then defined as
\[t_{E}=t_{0}/(n|\ln(1-p)|)\approx t_{0}/np, \tag{4}\]
where \(t_{0}\) is the oscillation period of the lattice atoms, and \(n\) is the number of escape directions for ions inside the lattice (\(n=6\) for a cubic lattice). Since the generation probability is usually far smaller than 1, the exact expression in Eq. 4 can be Taylor-expanded to first order, which also avoids the numerical problem of the denominator evaluating to zero.
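As a numerical check, Eqs. 3–4 can be evaluated with the oscillation periods and activation energies quoted later in Sec. III; the sketch below reproduces the reported retention failure times to within roughly 10%, the residual difference presumably reflecting the exact temperature and constants used.

```python
# Sketch of the retention estimate of Eqs. 3-4 (n = 6 escape directions),
# using the activation energies and oscillation periods quoted in Sec. III.
import numpy as np

KB = 8.617e-5                                   # Boltzmann constant, eV/K

def retention_time(E_a, t0, T=300.0, n=6):
    p = np.exp(-E_a / (KB * T))                 # Eq. 3
    return t0 / (n * p)                         # Eq. 4, first order in p

for name, E_a, t0 in [("WS2", 1.11, 18e-15),
                      ("MoS2", 1.13, 21.51e-15),
                      ("h-BN", 1.28, 24.4e-15)]:
    print(f"{name}: {retention_time(E_a, t0):.2e} s")   # ~1e4, 3e4, 1e7 s
```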
## III Result and Discussion
### Analysis of 2D RRAM Characteristics
In this work, we study three 2D materials, WS\({}_{2}\), MoS\({}_{2}\), and h-BN, for comparison. More detailed studies were performed on WS\({}_{2}\) to build an accurate model for the KMC simulations, which can then be used for device optimization.
FIG. 2 (a) shows the measured and simulated set/reset current characteristics of the 33-nm-thick WS\({}_{2}\) RRAM (44 layers); the set/reset switching voltages are 0.5 and -0.6 V, respectively. The experimental data result from measuring 100 set/reset operation cycles, and the simulated set operation also uses a compliance current of \(10^{-3}\) A. During set switching, if the current exceeds \(10^{-3}\) A, the system terminates the simulation and records the last data point, so the simulated current after the set operation is slightly higher than the experimental data. The retention failure time can be calculated by substituting the atomic oscillation period and the generation activation energy extracted from our simulation into Eq. 4. Calculated from first principles, the oscillation period of WS\({}_{2}\) is 18 fs [16], and the retention failure time of WS\({}_{2}\) is \(1.23\times 10^{4}\) seconds at room temperature, about 3 hours. The experimental data provided by Yu-Ting Chao show that the retention time of WS\({}_{2}\) can be maintained up to \(10^{4}\) seconds, and the experimental results of other researchers also show that the retention time of WS\({}_{2}\) can reach \(10^{4}\) seconds [17; 18].
FIG. 2 (b) shows the measured and simulated set/reset I-V characteristics of the 20-nm-thick MoS\({}_{2}\) RRAM. The switching voltages are 0.9 and -0.9 V, and the current drop during reset switching is steeper than that of WS\({}_{2}\), meaning that the field acceleration factor of MoS\({}_{2}\) is higher. The extracted generation activation energy of MoS\({}_{2}\) is 1.13 eV, the oscillation period is 21.51 fs [32], and the retention failure time at room temperature is estimated to be \(3.18\times 10^{4}\) seconds, about 8 hours. The experimental
results of other researchers also show that the retention time of MoS\({}_{2}\) can reach \(10^{4}\) seconds [33].
FIG. 2 (c) shows the measured and simulated set/reset I-V characteristics of the 39.4-nm-thick h-BN RRAM. The switching voltages are 0.65 and -0.6 V, and the current drop during reset switching is gentler than that of WS\({}_{2}\), meaning that the field acceleration factor of h-BN is lower. The extracted generation activation energy of h-BN is 1.28 eV, the oscillation period is 24.4 fs [34], and the retention failure time at room temperature is estimated to be \(1.18\times 10^{7}\) seconds, about 136 days. The experimental results of other scholars also show that the retention time of h-BN can reach \(10^{7}\) seconds [35]. Although h-BN has a larger bandgap, the contact between h-BN and Ti pins the electrode Fermi level toward the p-type side of the bandgap [24]. Defects are generated deep in the bandgap due to this Fermi-level position, and these vacancies can only transport a small current. Therefore, the difference between the HRS and LRS currents remains small, and the On/Off ratio of h-BN cannot be greatly improved.
TABLE I summarizes the basic parameters of the three 2D materials used in this simulation. TABLE II summarizes the simulation parameters of the three 2D materials together with those of a hafnium oxide (HfO\({}_{\mathrm{x}}\)) RRAM study [36]. WS\({}_{2}\) and MoS\({}_{2}\) have similar generation activation energies, and their retention failure times are of the same order. The out-of-plane diffusion activation energy of MoS\({}_{2}\) is lower than that of WS\({}_{2}\) and its field acceleration factor is higher, meaning that sulfur ions drift and transport more easily inside MoS\({}_{2}\). We speculate this is related to the mass density of the material: the mass density of MoS\({}_{2}\) is 5.06 g/cm\({}^{3}\) [37] versus 7.5 g/cm\({}^{3}\) for WS\({}_{2}\) [38], so sulfur ions in MoS\({}_{2}\) experience less resistance from atomic collisions and transport faster, making the reset switching current drop more rapidly.
h-BN has the highest generation activation energy among the three 2D materials and a long retention failure time. Its field acceleration factor is much lower than those of WS\({}_{2}\) and MoS\({}_{2}\), so the current decrease during reset switching is very small. This characteristic explains why h-BN exhibits threshold switching [19] in experiments: h-BN needs a sufficiently large operating power to completely separate ions and vacancies and form non-volatile memory with stable CFs. Otherwise, ions and vacancies are only stretched apart, forming electric dipoles; after the applied field is removed, the ions recombine with the vacancies, exhibiting volatile-memory behavior.
Because HfO\({}_{\mathrm{x}}\) RRAM has a higher activation energy, its retention failure time at room temperature can exceed ten years, which is why HfO\({}_{\mathrm{x}}\) has been widely studied for RRAM applications. The On/Off ratio of the WS\({}_{2}\) RRAM is about 10 at 0.1 V, while that of the HfO\({}_{\mathrm{x}}\) RRAM is about 50 [36]. The On/Off ratio of WS\({}_{2}\) RRAM is five times smaller because the bandgap of WS\({}_{2}\) is smaller than that of HfO\({}_{\mathrm{x}}\), which makes the background current (the current not transported through defects) of WS\({}_{2}\) relatively large, reducing the difference between the HRS and LRS resistances and thus degrading the On/Off ratio of the overall device. Therefore, choosing a material with a large bandgap to make an RRAM
\begin{table}
\begin{tabular}{l l c c c} Parameter & Description & WS\({}_{2}\) & MoS\({}_{2}\) & h-BN \\ \hline \(\epsilon_{r}\) & Relative permittivity & 6 [20] & 7.1 [21] & 5.65 [22] \\ \(E_{g}\) (eV) & Bandgap & 1.54 [20] & 1.23 [23] & 5.97 [24] \\ \(E_{a}\) (eV) & Electron affinity & 3.92 [20] & 4.2 [25] & 0.8 [24] \\ \(k_{th}\) (W \(\cdot\) cm\({}^{-1}\) \(\cdot\) K\({}^{-1}\)) & Thermal conductivity & 1.21 [26] & 0.035 [27] & 7.5 [28] \\ \(m_{e}\) (m\({}_{0}\)) & Electron density of states effective mass & 0.631 [29] & 0.73 [30] & 0.93 [31] \\ \(m_{h}\) (m\({}_{0}\)) & Hole density of states effective mass & 0.832 [29] & 0.78 [30] & 0.77 [31] \\ \end{tabular}
\end{table}
Table 1: Basic parameters of WS\({}_{2}\), MoS\({}_{2}\) and h-BN.
Figure 2: (a) 33 nm WS\({}_{2}\) RRAM set/reset I-V characteristics. (b) 20 nm MoS\({}_{2}\) RRAM set/reset I-V characteristics[19]. (c) 39.4 nm h-BN RRAM set/reset I-V characteristics[19].
device can usually obtain a higher On/Off ratio.
Compared with the benchmark device (HfO\({}_{\mathrm{x}}\) RRAM [36]), Our research shows that 2D RRAM has a lower generation activation energy to generate defects at a smaller bias. The diffusion activation energy of ions along the in-plane direction (i.e., X-Y plane) is greater than that of the out-of-plane direction (i.e., Z direction), which means that ions tend to diffuse along the out-of-plane direction. The reason may be due to the polar molecules between the layers. It is caused by the electrostatic attraction (van der Waals force) along the out-of-plane direction and the electromagnetic repulsion force along the in-plane direction of adjacent atoms in the same layer. Under the 2D layered molecular arrangement, the torque perpendicular to the plane is more likely to cause molecular bond breaking to form the defects, which act as channels for the transport of ions. Therefore, the set/reset switching voltage of 2D RRAM is lower than that of HfO\({}_{\mathrm{x}}\), which means 2D RRAM has a faster resistance switching speed. It can be seen from the data in TABLE 2 that MoS\({}_{2}\) has the shortest switching time. So, MoS\({}_{2}\) is the most suitable for making RRAM devices among the three 2D materials.
In order to verify the reliability of the KMC model, we will conduct a series of experiments and simulation comparisons for WS\({}_{2}\) in the next section.
### Comparison of WS\({}_{2}\) RRAM experiments and simulations
#### iii.2.1 Forming Operation
FIG. 3 (a) shows the measured and simulated forming current characteristics of the RRAM device made of a 12-nm-thick WS\({}_{2}\) (16 layers) flake. The device begins to break down at 2.3 V, where defects inside the material start to be generated in large quantities, causing the current to rise rapidly. In the experiment, a compliance current is set to prevent the material from over-reacting and burning the device, so the current is limited to \(10^{-4}\) A once the applied voltage exceeds 2.3 V. The simulation can compute the current characteristics of an unburned device, so the simulated current continues to rise. After the forming operation, we export the device state at the compliance current to continue the reset and set simulations.
#### iii.2.2 Retention Time
FIG. 3 (b) shows the measured and simulated data at a reading voltage of 0.1 V. The solid dots are the HRS and LRS resistances measured in the experiment, which were only recorded up to \(10^{4}\) seconds. This shows that the retention time
Figure 3: (a) 12 nm WS\({}_{2}\) RRAM forming I-V characteristics [19]. (b) WS\({}_{2}\) RRAM retention characteristics. The dotted line is simulated prediction. (c) WS\({}_{2}\) RRAM HRS temperature-dependent I-V characteristics.
of WS\({}_{2}\) can be maintained for at least \(10^{4}\) seconds. The solid lines are the resistances calculated in this simulation; in the actual device, repeated switching gradually reduces the HRS resistance, so it is reasonable that the simulated On/Off ratio is slightly larger than the experimental result. The dotted line is the simulated prediction: as seen in FIG. 3 (b), the HRS resistance drops sharply after \(10^{4}\) seconds and the resistance state can no longer be maintained. This agrees well with the retention failure time (\(1.23\times 10^{4}\) seconds) calculated from Eq. 4. The drop of the experimental HRS resistance to 6 k\(\Omega\) is most likely due to measurement error. Both the experimental and simulated LRS data show a stable resistance of 1 k\(\Omega\).
### Temperature-Dependent Analysis
The electrical characteristics of a 24-nm-thick WS\({}_{2}\) (32 layers) device in the HRS were measured in the temperature-dependent experiment. The specific heat density and thermal conductivity used to simulate heat conduction are \(1.19~{}\mathrm{J\cdot cm^{-3}\cdot K^{-1}}\) and \(1.21~{}\mathrm{W\cdot cm^{-1}\cdot K^{-1}}\) [26]. The simulation and the experimental data are in good agreement, and the 360 K data missing from the experiment were added to FIG. 3 (c) by simulation. From the experimental trends at the adjacent temperatures (340 and 380 K), it can reasonably be judged that the interpolated data are sound. In the ambient temperature range from 300 to 400 K, the current rises proportionally with temperature, and the current difference between adjacent temperatures grows with the applied bias, meaning the temperature dependence significantly affects the current. Therefore, the effect of ambient temperature on the current must be taken into account when the WS\({}_{2}\) RRAM operates at high bias.
### CVD-Grown WS\({}_{2}\) RRAM Analysis
In this section, we study the differences in electrical characteristics and material properties of WS\({}_{2}\) RRAM fabricated by mechanical exfoliation and by CVD. The previous WS\({}_{2}\) RRAM used mechanical exfoliation to obtain 2D flakes; in this experiment, the device used CVD-grown WS\({}_{2}\) flakes, yielding ultra-thin devices with a thickness of 3 nm (4 layers). Therefore, some parameters must be fine-tuned in this simulation: we change the basic parameters from those of bulk WS\({}_{2}\) to those of three- to four-layer films. For example, the bandgap was increased from 1.54 eV to 2.76 eV [39] to capture the carrier properties of the few-layer semiconductor. The experimental and simulated results are shown in FIG. 4 (a). At this ultra-thin thickness, the tunneling effect becomes prominent, so the overall current rises strongly with the applied bias.
TABLE III sorts out the simulation parameters used in mechanical exfoliation and CVD WS\({}_{2}\) RRAM. The generation activation energy, polarizability, and frequency prefactor of the two are the same. It indicates that the generation of defects has nothing to do with the arrangement state between layered molecules. It has a high correlation with the bond breaking of material molecular and induced electric field between polar molecules. The most obvious difference between the two fabricated methods lies in the drift and diffusion of sulfur ions inside the device. The WS\({}_{2}\) samples obtained by mechanical exfoliation mostly belong to the single crystal structure, and the WS\({}_{2}\) grown by CVD mostly belong to the poly
Figure 4: (a) CVD WS\({}_{2}\) RRAM set/reset I-V characteristics. (b) Correlation between thickness and breakdown voltage of WS\({}_{2}\) RRAM.
crystalline structure. Therefore, CVD WS\({}_{2}\) has smaller domains and more defects. It means that each layer has more gaps and vacancies, and the probability of sulfur ion drifting and diffusing between the upper and lower layers is greatly increased.
Comparing with the simulation results of the two can be seen that the CVD WS\({}_{2}\) RRAM has a lower out-of-plane diffusion activation energy and a higher field acceleration factor, so it has a faster reset switching speed and a lower switching voltage. Moreover, the CVD WS\({}_{2}\) device has many defects. It makes the required voltage for the forming operation is lower, and the switching time is shorter. In other words, the switching power consumption is lower. The experiment also confirmed this result. Therefore, WS\({}_{2}\) obtained by CVD is more suitable for making RRAM devices than mechanical exfoliation.
#### iii.2.5 Breakdown Electric Field
The experiment measured the breakdown voltages of five devices with different thicknesses, and thicknesses of 10, 20, 30, 40, and 50 nm were used for the RRAM models in the simulation. We assume the device is an ideal defect-free material initially, and the WS\({}_{2}\) parameters of TABLE II were used in the simulation. The simulated results are consistent with the experimental data, as shown in FIG. 4 (b). The breakdown electric field can be extracted from the WS\({}_{2}\) thickness and the breakdown voltage; the experimental and simulated results show that the breakdown electric field of WS\({}_{2}\) is 155 MV/m. When the internal electric field exceeds 155 MV/m, defects are generated in the device, causing the current to rise rapidly, and the initial forming operation takes place. We also tried an initial model containing a few defects to simulate the forming operation of natural materials. The result shows that the internal electric field produced by adding a few defects is not enough to affect the breakdown voltage, which remains the same as that of the ideal defect-free model.
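The extraction amounts to a zero-intercept linear fit of breakdown voltage versus thickness. In the sketch below, the voltages are synthetic values generated from the reported 155 MV/m slope rather than the measured data.

```python
# Sketch of extracting the breakdown field from V_bd = F_bd * d via a
# zero-intercept least-squares fit; the voltages here are synthetic,
# generated from the reported 155 MV/m slope, not the measured data.
import numpy as np

d = np.array([10, 20, 30, 40, 50]) * 1e-9       # thickness, m
v_bd = 155e6 * d                                # synthetic breakdown voltages, V
slope = np.sum(v_bd * d) / np.sum(d ** 2)       # least squares, no intercept
print(f"breakdown field = {slope / 1e6:.0f} MV/m")
```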
### Simulated Prediction
Following the previous section, the simulation is carried out with the extracted WS\({}_{2}\) parameters to find the thickness with the best On/Off ratio. The simulation yields stable I-V characteristic curves, as shown in FIG. 5 (a). The stopping voltage and compliance current for each thickness are set to -1.5 V and \(10^{-3}\) A. From the TE and BE bias and the vertical electric field formula (\(F=V/d\)), thinner devices have smaller switching voltages, and this inference is consistent with our simulation results. It can also be observed that the current of thinner devices drops more rapidly during reset switching: for thicker devices, the vertical electric field increases less with the swept bias, and the vertical drift distance of ions inside the device is longer, so the overall ion drift is slower and the switching time is longer, giving a smoother current-decline curve. Once all the ions have drifted from the TE to the BE, no ions remain to recombine with vacancies, and the device current gradually increases again with increasing bias.
From the current trends of each thickness in FIG. 5 (a), the current of thicker devices can drop to a lower value during the reset operation because the total number of defects inside a thicker device is larger; this enlarges the difference in defect number between HRS and LRS, and hence the current difference. For thicknesses of 40 nm and above, in-plane ion diffusion and the increasing vertical distance gradually reduce the difference in defect number, so the HRS current tends toward the same level and the On/Off ratios differ little. TABLE IV summarizes the current and On/Off ratio of each thickness at -0.1 and 0.1 V reading bias. The 40-nm-thick device has the best On/Off ratio, which best avoids bit-reading errors. Thinner devices have faster switching speeds and lower energy consumption, but their On/Off ratios are relatively small; this also explains why few-layer 2D materials are rarely made into RRAM devices.
Since MoS\({}_{2}\) has the fastest switching speed among the three 2D materials, we also conduct electrical simulations for MoS\({}_{2}\) RRAM at various thicknesses. The simulation method is the same as that in the previous section.
| Thickness (nm) | On/Off Ratio at -0.1 V | On/Off Ratio at 0.1 V |
| --- | --- | --- |
| 10 | 5.56 | 3.20 |
| 20 | 6.14 | 7.89 |
| 30 | 7.05 | 14.43 |
| 40 | 10.13 | 17.43 |
| 50 | 6.38 | 17.10 |

Table 4: On/Off ratio of WS\({}_{2}\) RRAM at \(\pm 0.1\) V for various thicknesses.
| Parameter | Mechanical Exfoliation | CVD |
| --- | --- | --- |
| \(E_{A,G}\) (eV) | 1.11 | 1.11 |
| \(E_{A,D}\) (X/Y) (eV) | 0.7 | 0.7 |
| \(E_{A,D}\) (Z) (eV) | 0.39 | 0.36 |
| \(E_{T}\) (eV) | \(0.4\pm 0.04\) | \(0.75\pm 0.04\) |
| \(\nu\) (Hz) | \(4.5\times 10^{13}\) | \(4.5\times 10^{13}\) |
| \(p_{0}\) (eÅ) | 9 | 9 |
| \(\gamma\) (eÅ) | 0.2 | 0.4 |
| \(t_{R}\) (sec) | \(1.23\times 10^{4}\) | \(1.23\times 10^{4}\) |
| \(t_{switch}\) (sec) | \(3\times 10^{-4}\) | \(9\times 10^{-5}\) |

Table 3: Simulation parameters and switching time of WS\({}_{2}\) RRAM with mechanical exfoliation and CVD.
The difference is that the stopping voltage is set according to the reset state of each thickness: the stopping voltages for 10 to 50 nm are -0.5, -1.0, -1.5, -2.0, and -2.5 V, so that the devices can be fully reset and the maximum On/Off ratio is obtained.
From FIG. 5 (b), it can be confirmed that the thicker device has a larger On/Off ratio. The On/Off ratio of each thickness is collected in TABLE 5, which shows that the 50 nm MoS\({}_{2}\) RRAM has the largest switching ratio, higher than 200 and much larger than that of WS\({}_{2}\) RRAM. Therefore, MoS\({}_{2}\) is more suitable than WS\({}_{2}\) for making an RRAM device in terms of switching speed and On/Off ratio.
TABLE 5 also shows that the thicker device has a greater switching voltage, which leads to greater power consumption; the stopping voltage of the 50 nm device must be set to a reverse bias larger than 2.0 V. The thickness of an RRAM device should therefore be chosen according to the given operating voltage (V\({}_{DD}\)) and the power-consumption constraints of the overall circuit system, and the current noise should be reduced below the device On/Off ratio to avoid memory misinterpretation. A small lookup sketch over TABLE 5 follows below.
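The helper below is hypothetical (not from the paper) and only encodes the stated rule: take the thickest device whose reset stopping voltage fits the supply and whose On/Off ratio exceeds the required noise margin.

```python
TABLE5 = [  # (thickness nm, On/Off at -0.1 V, On/Off at 0.1 V, stopping voltage V)
    (10, 12.98, 13.47, -0.4),
    (20, 18.73, 32.66, -0.86),
    (30, 51.39, 137.54, -1.2),
    (40, 111.94, 149.16, -1.6),
    (50, 208.68, 253.84, -2.0),
]

def pick_thickness(v_dd, min_on_off=1.0):
    """Thickest MoS2 device whose |stopping voltage| <= v_dd and whose worst-case
    On/Off ratio stays above the required noise margin (hypothetical helper)."""
    feasible = [row for row in TABLE5
                if abs(row[3]) <= v_dd and min(row[1], row[2]) > min_on_off]
    return max(feasible, key=lambda row: row[0]) if feasible else None

print(pick_thickness(v_dd=1.8))  # -> (40, 111.94, 149.16, -1.6)
```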
## IV Summary
We built physical models of WS\({}_{2}\), MoS\({}_{2}\), and h-BN RRAM through KMC simulation and experimental data and extracted the physical parameters of the three 2D materials. Through theoretical formulas, we also calculated the retention failure times of WS\({}_{2}\), MoS\({}_{2}\), and h-BN RRAM, which are \(1.23\times 10^{4}\), \(3.18\times 10^{4}\), and \(1.18\times 10^{7}\) seconds, respectively. Compared with the HfO\({}_{\rm x}\) RRAM (benchmark device), the models show that ions prefer to drift in the out-of-plane direction due to the molecular arrangement of the 2D material. This results in 2D RRAM having a lower threshold voltage and hence a faster switching time; in particular, the switching speed of MoS\({}_{2}\) RRAM is the fastest among the three 2D RRAMs. To verify the reliability of the KMC model, we conducted a comparative analysis of the temperature-dependent experiment and the simulation for WS\({}_{2}\) RRAM; the results showed that the current characteristics at high bias voltage are affected more significantly by temperature changes. We also discussed the physical characteristics of RRAM made of 2D materials obtained by mechanical exfoliation and by CVD; the results show that 2D materials grown by CVD have better device characteristics and are more suitable for making RRAM. Finally, electrical analysis and simulation for different active-layer thicknesses were conducted to obtain the breakdown electric field of WS\({}_{2}\) (155 MV/m), and thickness-dependent models were built to simulate the On/Off ratio and switching voltage of WS\({}_{2}\) and MoS\({}_{2}\) RRAM for reference in future 2D RRAM device design.
To make high-retention RRAM devices, we suggest choosing active-layer materials with a high generation activation energy (i.e., bond-dissociation
| Thickness (nm) | On/Off Ratio at -0.1 V | On/Off Ratio at 0.1 V | Stopping Voltage (V) |
| --- | --- | --- | --- |
| 10 | 12.98 | 13.47 | -0.4 |
| 20 | 18.73 | 32.66 | -0.86 |
| 30 | 51.39 | 137.54 | -1.2 |
| 40 | 111.94 | 149.16 | -1.6 |
| 50 | 208.68 | 253.84 | -2.0 |

Table 5: On/Off ratio of MoS\({}_{2}\) RRAM at \(\pm 0.1\) V for various thicknesses.
Figure 5: (a) WS\({}_{2}\) RRAM set/reset I-V characteristics of various thicknesses. (b) MoS\({}_{2}\) RRAM set/reset I-V characteristics of various thicknesses.
energy) to improve retention. Although h-BN has a high retention failure time, its current difference is too low due to its low electron affinity. When h-BN is in contact with the metal, the Fermi level will be pinned on the p-type side; as a result, h-BN RRAM can only generate deep-level defects, yielding a low On/Off ratio. Therefore, h-BN is not an ideal RRAM material. Materials with better retention likely lie outside the scope of 2D materials, since they must combine a high bandgap, high electron affinity, and high bond-dissociation energy. For the development of 2D RRAM, the rapid drift of ions in 2D materials helps to manufacture RRAM devices with faster switching speeds. This advantage can address the slow switching speed of traditional solid-state disks and the volatility of DRAM, allowing 2D RRAM to fill the speed gap in the memory hierarchy [40]. To find faster-switching RRAM, more in-depth experimental and simulation studies on MoS\({}_{2}\) RRAM can be carried out in the future.
###### Acknowledgements.
Yu-Ting Chao fabricated and measured the devices for WS\({}_{2}\). Tzu-Heng Wang fabricated and measured the devices for MoS\({}_{2}\) and h-BN. Ying-Chuan Chen simulated various devices and analyzed data. Ying-Chuan Chen wrote the manuscript. All the authors discussed the results and explanations. This work was supported by the National Science and Technology Council under Grant Nos. NSTC 111-2221-E-002-075, 111-2622-8-002-001, 112-2119-M-002-013 and 111-2218-E-002-025.
|
2305.06510 | Large Deviation Principles of Stochastic Reaction-Diffusion Lattice
Systems | This paper is concerned with the large deviation principle of the stochastic
reaction-diffusion lattice systems defined on the N-dimensional integer set,
where the nonlinear drift term is locally Lipschitz continuous with polynomial
growth of any degree and the nonlinear diffusion term is locally Lipschitz
continuous with linear growth. We first prove the convergence of the solutions
of the controlled stochastic lattice systems, and then establish the large
deviations by the weak convergence method based on the equivalence of the large
deviation principle and the Laplace principle. | Bixiang Wang | 2023-05-11T01:22:44Z | http://arxiv.org/abs/2305.06510v1 | # Large Deviation Principles of Stochastic Reaction-Diffusion Lattice Systems
###### Abstract
This paper is concerned with the large deviation principle of the stochastic reaction-diffusion lattice systems defined on the \(N\)-dimensional integer set, where the nonlinear drift term is locally Lipschitz continuous with polynomial growth of any degree and the nonlinear diffusion term is locally Lipschitz continuous with linear growth. We first prove the convergence of the solutions of the controlled stochastic lattice systems, and then establish the large deviations by the weak convergence method based on the equivalence of the large deviation principle and the Laplace principle.
**Key words.** Large deviation principle; Laplace principle; weak convergence; lattice system.
**MSC 2020.** Primary: 60F10; 37L55; Secondary 34F05, 39A50, 60H10.
## 1 Introduction
In this paper, we investigate the large deviation principle of the non-autonomous stochastic reaction-diffusion lattice system defined on the \(N\)-dimensional integer set \(\mathbb{Z}^{N}\). Given \(i\in\mathbb{Z}^{N}\), consider the Ito stochastic system:
\[du_{i}^{\varepsilon}(t)+\nu(Au^{\varepsilon}(t))_{i}dt+F_{i}(t,u_{i}^{ \varepsilon}(t))dt=\sqrt{\varepsilon}\sum_{k=1}^{\infty}\sigma_{k,i}(t,u_{i}^{ \varepsilon}(t))dW_{k},\quad t>0, \tag{1.1}\]
with initial data
\[u_{i}^{\varepsilon}(0)=u_{0,i}, \tag{1.2}\]
where \(i=(i_{1},\ldots,i_{N})\in\mathbb{Z}^{N}\), \(u=(u_{i})_{i\in\mathbb{Z}^{N}}\) is an unknown sequence, \(\nu>0\) and \(\varepsilon\in(0,1)\) are constants, \(A\) is the negative discrete \(N\)-dimensional Laplace operator defined on \(\mathbb{Z}^{N}\), and \((W_{k})_{k\in\mathbb{N}}\) is a sequence of independent real-valued standard Wiener processes on a probability space.
For every \(i\in\mathbb{Z}^{N}\), the nonlinear function \(F_{i}:\mathbb{R}^{+}\times\mathbb{R}\to\mathbb{R}\) is a locally Lipschitz function with polynomial growth of any degree with respect to the second argument. For the diffusion coefficients, we will assume that for every \(i\in\mathbb{Z}^{N}\) and \(k\in\mathbb{N}\), \(\sigma_{k,i}:\mathbb{R}^{+}\times\mathbb{R}\to\mathbb{R}\) is a locally Lipschitz function with linear growth with respect to the second argument.
Lattice systems can be used to describe the dynamics of physical systems with discrete structures, including electric circuits, pattern formation and propagation of nerve pulses [7, 30, 31, 38, 48, 50, 51]. Such systems also occur by discretizing partial differential equations in space variables defined on unbounded domains. The solutions of both deterministic and stochastic lattice systems have been investigated by many experts. For deterministic lattice systems, the traveling waves, chaotic solutions and global attractors have been studied in [1, 2, 3, 27, 36, 37, 38, 69], [26, 28, 29] and [4, 9, 46, 49, 56, 65], respectively. For stochastic lattice systems, the random attractors have been reported in [5, 6, 14, 15, 44, 45]. Recently, the invariant measures and periodic measures of stochastic lattice systems have been examined in [21, 22, 23, 52, 53, 66, 67, 68] and the references therein. In the present paper, we study the large deviation principle of the stochastic lattice system (1.1)-(1.2).
The large deviation principle of stochastic systems is concerned with the exponential decay of distributions of solutions on tail events as \(\varepsilon\to 0\), which has been investigated in [33, 41, 62, 63, 64] and the references therein. There are two basic approaches to deal with the large deviations of stochastic partial differential equations: the classical method and the weak convergence method. The classical method is based on the discretization and approximation arguments along with uniform exponential probability estimates, see, e.g., [10, 16, 17, 24, 25, 39, 40, 42, 47, 55, 61]. The weak convergence method is based on the equivalence of large deviation principles and Laplace principles as well as variational representations of positive functions of infinite-dimensional Brownian motions, see, e.g., [8, 11, 12, 13, 19, 20, 32, 34, 35, 54, 57, 59, 60].
The large deviation principle has been well developed for finite-dimensional dynamical systems generated by stochastic ordinary differential equations and infinite-dimensional dynamical systems generated by stochastic partial differential equations. However, as far as the author is aware, it seems that there is not a result available in the literature on large deviations for infinite-dimensional lattice systems. The goal of the present paper is to investigate this problem and establish the large deviation principle of the infinite-dimensional lattice system (1.1)-(1.2) by employing the weak convergence method as introduced in [12, 13, 35]. One of the advantages of the weak convergence method lies in the fact that it does not require the uniform exponential probability estimates of solutions.
Note that the lattice system (1.1) on \(\mathbb{Z}^{N}\) can be considered as a spatial discretization of the corresponding reaction-diffusion partial differential equation on \(\mathbb{R}^{N}\). The large deviations of reaction-diffusion equations in bounded domains have been studied in [17, 19, 43] when the nonlinear drift term is locally Lipschitz continuous with polynomial growth and the diffusion term is globally Lipschitz continuous. In the present paper, we will deal with the infinite-dimensional lattice system (1.1) with a polynomial drift term and a locally Lipschitz diffusion term. The reader is referred to [18, 24, 40, 58, 61] for large deviations of reaction-diffusion equations in bounded domains with globally Lipschitz drift terms.
We will recall basic concepts of large deviation principles and Laplace principles in the next section, and discuss the well-posedness of system (1.1)-(1.2) in Section 3. We finally prove the large deviation principle of (1.1)-(1.2) in Section 4.
## 2 Large deviation theory
In this section, we review the large deviation principle and the Laplace principle of a family of random variables based on the weak convergence method as introduced in [12, 35].
Let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geqslant 0},P)\) be a complete filtered probability space satisfying the usual condition. Suppose \(\{W(t)\}_{t\geqslant 0}\) is a cylindrical Wiener process with identity covariance operator in a separable Hilbert space \(H\) with respect to \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geqslant 0},P)\), which means that there exists another separable Hilbert space \(U\) such that the embedding \(H\hookrightarrow U\) is Hilbert-Schmidt and \(W(t)\) takes values in \(U\).
Let \(\mathcal{E}\) be a Polish space, and for every \(\varepsilon>0\), \(\mathcal{G}^{\varepsilon}:C([0,T],U)\to\mathcal{E}\) be a measurable map. Denote by
\[X^{\varepsilon}=\mathcal{G}^{\varepsilon}(W),\quad\forall\ \varepsilon>0. \tag{2.3}\]
We will investigate the large deviation principle of \(X^{\varepsilon}\) as \(\varepsilon\to 0\). To that end, we first recall some notation from [12].
Given \(N>0\), denote by
\[S_{N}=\{v\in L^{2}(0,T;H):\int_{0}^{T}\|v(t)\|_{H}^{2}dt\leqslant N\}. \tag{2.4}\]
Then \(S_{N}\) is a polish space endowed with the weak topology. Throughout this paper, we always assume \(S_{N}\) is equipped with the weak topology, unless otherwise stated. Let \(\mathcal{A}\) be the space of all \(H\)-valued stochastic processes \(v\) which are progressively measurable with respect to \(\{\mathcal{F}_{t}\}_{t\in[0,T]}\) and \(\int_{0}^{T}\|v(t)\|^{2}dt<\infty\)\(P\)-almost surely. Denote by
\[\mathcal{A}_{N}=\{v\in\mathcal{A}:v(\omega)\in S_{N}\ \text{for almost all}\ \omega\in\Omega\}. \tag{2.5}\]
The large deviation principle of the family \(\{X^{\varepsilon}\}\) is concerned with the exponential decay of distributions of \(\{X^{\varepsilon}\}\) on tail events as \(\varepsilon\to 0\). Such exponential decay is characterized by a rate function \(I:\mathcal{E}\to[0,\infty]\).
**Definition 2.1**.: _A function \(I:\mathcal{E}\to[0,\infty]\) is called a rate function on \(\mathcal{E}\) if it is lower semi-continuous in \(\mathcal{E}\). A rate function \(I\) on \(\mathcal{E}\) is said to be a good rate function on \(\mathcal{E}\) if for every \(0\leq C<\infty\), the level set \(\{x\in\mathcal{E}:I(x)\leq C\}\) is a compact subset of \(\mathcal{E}\)._
**Definition 2.2**.: _The family \(\{X^{\varepsilon}\}\) is said to satisfy the large deviation principle in \(\mathcal{E}\) with a rate function \(I:\mathcal{E}\to[0,\infty]\) if for every Borel subset \(B\) of \(\mathcal{E}\),_
\[-\inf_{x\in B^{\circ}}I(x)\leq\liminf_{\varepsilon\to 0}\varepsilon\log P(X^{ \varepsilon}\in B)\leq\limsup_{\varepsilon\to 0}\varepsilon\log P(X^{ \varepsilon}\in B)\leq-\inf_{x\in\overline{B}}I(x).\]
_where \(B^{\circ}\) and \(\overline{B}\) are the interior and the closure of \(B\) in \(\mathcal{E}\), respectively._
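As a sanity check of Definition 2.2 outside the lattice setting of this paper, consider the scalar family \(X^{\varepsilon}=\sqrt{\varepsilon}Z\) with \(Z\sim N(0,1)\), whose rate function is the classical \(I(x)=x^{2}/2\); the script below is an illustration only, not part of the proofs.

```python
import math

def normal_tail(x):
    """Survival function of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# X^eps = sqrt(eps) * Z and B = [a, inf): Definition 2.2 with I(x) = x^2/2
# predicts eps * log P(X^eps in B) -> -inf_{x >= a} x^2/2 = -a^2/2.
a = 1.0
for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, eps * math.log(normal_tail(a / math.sqrt(eps))))
# the printed values approach -0.5 = -a^2/2 as eps -> 0
```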
Since \(\mathcal{E}\) is a Polish space, it is well known that the family \(\{X^{\varepsilon}\}\) satisfies the large deviation principle on \(\mathcal{E}\) with a rate function \(I:\mathcal{E}\to[0,\infty]\) if and only if \(\{X^{\varepsilon}\}\) satisfies the Laplace principle on \(\mathcal{E}\) with the same rate function. The concept of the Laplace principle of \(\{X^{\varepsilon}\}\) is given below.
**Definition 2.3**.: _The family \(\{X^{\varepsilon}\}\) is said to satisfy the Laplace principle in \(\mathcal{E}\) with a rate function \(I:\mathcal{E}\to[0,\infty]\) if for all bounded and continuous \(H:\mathcal{E}\to\mathbb{R}\),_
\[\lim_{\varepsilon\to 0}\varepsilon\log\mathbb{E}\left(e^{-\frac{1}{ \varepsilon}H(X^{\varepsilon})}\right)=-\inf_{x\in\mathcal{E}}\left\{H(x)+I(x )\right\}.\]
In order to prove the large deviation principle of \(X^{\varepsilon}\), we will assume that the family \(\{\mathcal{G}^{\varepsilon}\}\) fulfills the following conditions: there exists a measurable map \(\mathcal{G}^{0}:C([0,T],U)\to\mathcal{E}\) such that
**(H1)**: If \(N<\infty\) and \(\{v^{\varepsilon}\}\subseteq\mathcal{A}_{N}\) is such that \(\{v^{\varepsilon}\}\) converges in distribution to \(v\) as \(S_{N}\)-valued random variables, then \(\mathcal{G}^{\varepsilon}\left(W+\varepsilon^{-\frac{1}{2}}\int_{0}^{\cdot}v^{\varepsilon}(t)dt\right)\) converges in distribution to \(\mathcal{G}^{0}\left(\int_{0}^{\cdot}v(t)dt\right)\).
**(H2)**: For every \(N<\infty\), the set \(\left\{\mathcal{G}^{0}(\int_{0}^{\cdot}v(t)dt):\ v\in S_{N}\right\}\) is a compact subset of \(\mathcal{E}\).
Define \(I:\mathcal{E}\to[0,\infty]\) by, for every \(x\in\mathcal{E}\),
\[I(x)=\inf\left\{\frac{1}{2}\int_{0}^{T}\|v(t)\|_{H}^{2}dt:\ v\in L^{2}(0,T;H) \text{ such that }\mathcal{G}^{0}\left(\int_{0}^{\cdot}v(t)dt\right)=x\right\}, \tag{2.6}\]
with the convention that the infimum over an empty set is taken to be \(\infty\). By assumption \(\mathbf{(H2)}\), we find that every level set of the map \(I\) as defined by (2.6) is a compact subset of \(\mathcal{E}\), which further implies the lower semi-continuity of \(I\). By definition, this map \(I\) is a good rate function on \(\mathcal{E}\). Moreover, under \(\mathbf{(H1)}\) and \(\mathbf{(H2)}\), the family \(\{X^{\varepsilon}\}\) satisfies the Laplace principle in \(\mathcal{E}\) with rate function \(I\) as stated below (see, [12, Theorem 4.4]).
**Proposition 2.4**.: _If \(\{\mathcal{G}^{\varepsilon}\}\) satisfies \(\mathbf{(H1)}\)-\(\mathbf{(H2)}\), then the family \(\{X^{\varepsilon}\}\) as given by (2.3) satisfies the Laplace principle in \(\mathcal{E}\) with rate function \(I\) as defined by (2.6)._
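For orientation, consider the classical Schilder setting (a standard textbook example, not the lattice system studied in this paper): take \(H=\mathbb{R}\), \(\mathcal{E}=C([0,T],\mathbb{R})\) and \(\mathcal{G}^{0}\left(\int_{0}^{\cdot}v(t)dt\right)=\int_{0}^{\cdot}v(t)dt\), so that \(X^{\varepsilon}=\sqrt{\varepsilon}W\). Then (2.6) can be computed explicitly:

\[I(x)=\left\{\begin{array}{ll}\frac{1}{2}\int_{0}^{T}|\dot{x}(t)|^{2}dt,&\text{ if }x\text{ is absolutely continuous with }\dot{x}\in L^{2}(0,T)\text{ and }x(0)=0;\\ +\infty,&\text{ otherwise,}\end{array}\right.\]

since the only \(v\in L^{2}(0,T;\mathbb{R})\) with \(\int_{0}^{\cdot}v(t)dt=x\) is \(v=\dot{x}\). This is the rate function of Schilder's theorem.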
## 3 Well-posedness of stochastic lattice systems
In this section, we discuss the existence and uniqueness of solutions to system (1.1)-(1.2), which is needed for establishing the large deviation principle of the solutions.
Let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\in\mathbb{R}},P)\) be a complete filtered probability space satisfying the usual condition, and \((W_{k})_{k\in\mathbb{N}}\) is a sequence of independent real-valued standard Wiener processes defined on \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\in\mathbb{R}},P)\). Denote by
\[\ell^{2}=\left\{u=(u_{i})_{i\in\mathbb{Z}^{N}}\ :\ \sum_{i\in\mathbb{Z}^{N}}|u_{i}|^{2 }<\infty\right\}\]
with norm \(\|\cdot\|\) and inner product \((\cdot,\cdot)\), respectively. Recall that the negative discrete \(N\)-dimensional Laplace operator \(A:\ell^{2}\rightarrow\ell^{2}\) is given by, for every \(u=\left(u_{i}\right)_{i\in\mathbb{Z}^{N}}\in\ell^{2}\) and \(i=(i_{1},i_{2},\ldots,i_{N})\in\mathbb{Z}^{N}\),
\[(Au)_{i}=-u_{(i_{1}-1,i_{2},\ldots,i_{N})}-u_{(i_{1},i_{2}-1,\ldots,i_{N})}-\cdots-u_{(i_{1},i_{2},\ldots,i_{N}-1)}\]
\[+2Nu_{(i_{1},i_{2},\ldots,i_{N})}-u_{(i_{1}+1,i_{2},\ldots,i_{N})}-u_{(i_{1},i_{2}+1,\ldots,i_{N})}-\cdots-u_{(i_{1},i_{2},\ldots,i_{N}+1)}.\]
For convenience, for every \(j=1,\ldots,N\), define \(A_{j},B_{j},B_{j}^{*}:\ell^{2}\rightarrow\ell^{2}\) by, for \(u=(u_{i})_{i\in\mathbb{Z}^{N}}\in\ell^{2}\) and \(i=(i_{1},i_{2},\ldots,i_{N})\in\mathbb{Z}^{N}\),
\[(A_{j}u)_{i}=-u_{(i_{1},\ldots,i_{j}+1,\ldots,i_{N})}+2u_{(i_{1},\ldots,i_{j}, \ldots,i_{N})}-u_{(i_{1},\ldots,i_{j}-1,\ldots,i_{N})},\]
\[(B_{j}u)_{i}=u_{(i_{1},\ldots,i_{j}+1,\ldots,i_{N})}-u_{(i_{1},\ldots,i_{j}, \ldots,i_{N})},\]
and
\[(B_{j}^{*}u)_{i}=u_{(i_{1},\ldots,i_{j}-1,\ldots,i_{N})}-u_{(i_{1},\ldots,i_{j },\ldots,i_{N})}.\]
Then we have,
\[A=\sum_{j=1}^{N}A_{j},\quad A_{j}=B_{j}B_{j}^{*}=B_{j}^{*}B_{j}.\]
Given \(u=(u_{i})_{i\in\mathbb{Z}^{N}}\in\ell^{2}\), denote by
\[Bu=(B_{1}u,\ldots,B_{N}u)\quad\mbox{and}\quad\|Bu\|=\left(\sum_{j=1}^{N}\|B_{ j}u\|^{2}\right)^{\frac{1}{2}}.\]
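A minimal numerical sketch (not from the paper) of these operators on a patch of \(\mathbb{Z}^{2}\): with the input supported strictly inside a zero-padded box, the truncation reproduces the \(\ell^{2}(\mathbb{Z}^{N})\) action exactly, and the identity \(A=\sum_{j}B_{j}^{*}B_{j}\) can be checked pointwise.

```python
import numpy as np

def forward_diff(u, axis):
    """(B_j u)_i = u_{i + e_j} - u_i, with zeros outside the truncated box."""
    shifted = np.zeros_like(u)
    src = [slice(None)] * u.ndim
    dst = [slice(None)] * u.ndim
    src[axis] = slice(1, None)
    dst[axis] = slice(0, -1)
    shifted[tuple(dst)] = u[tuple(src)]
    return shifted - u

def backward_diff(u, axis):
    """(B_j^* u)_i = u_{i - e_j} - u_i, with zeros outside the truncated box."""
    shifted = np.zeros_like(u)
    src = [slice(None)] * u.ndim
    dst = [slice(None)] * u.ndim
    src[axis] = slice(0, -1)
    dst[axis] = slice(1, None)
    shifted[tuple(dst)] = u[tuple(src)]
    return shifted - u

def neg_laplacian(u):
    """(Au)_i = sum_j (2u_i - u_{i+e_j} - u_{i-e_j}) = sum_j (A_j u)_i."""
    return sum(-forward_diff(u, ax) - backward_diff(u, ax) for ax in range(u.ndim))

rng = np.random.default_rng(0)
u = np.zeros((7, 7))                     # N = 2; a box standing in for a patch of Z^2
u[1:-1, 1:-1] = rng.normal(size=(5, 5))  # support strictly inside the box

lhs = neg_laplacian(u)
rhs = sum(backward_diff(forward_diff(u, ax), ax) for ax in range(u.ndim))
print(np.allclose(lhs, rhs))             # True: A = sum_j B_j^* B_j
```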
In the sequel, we assume that for every \(i\in\mathbb{Z}^{N}\), the nonlinear function \(F_{i}:\mathbb{R}^{+}\times\mathbb{R}\rightarrow\mathbb{R}\) satisfies,
\[F_{i}(t,s)=F_{0}(s)-g_{i}(t),\quad\forall\ t\in\mathbb{R}^{+},\ s\in\mathbb{R}, \tag{3.1}\]
where \(F_{0}:\mathbb{R}\rightarrow\mathbb{R}\) is continuously differentiable and there exists \(\gamma\leqslant 0\) such that
\[F_{0}(0)=0\quad\mbox{and}\quad F_{0}^{\prime}(s)\geqslant\gamma\ \ \mbox{for all}\ \ s\in\mathbb{R}. \tag{3.2}\]
Note that for every \(k\in\mathbb{N}\), the classical nonlinearity \(F_{0}(s)=s^{2k+1}-s\) for the reaction-diffusion equation indeed satisfies condition (3.2): \(F_{0}(0)=0\) and \(F_{0}^{\prime}(s)=(2k+1)s^{2k}-1\geqslant-1\) for all \(s\in\mathbb{R}\), so (3.2) holds with \(\gamma=-1\).
For the sake of convenience, we write
\[f(s)=F_{0}(s)-\gamma s\ \ \text{for all}\ s\ \in\mathbb{R}.\]
It follows from (3.2) that
\[f(0)=0\ \ \text{and}\ \ f^{\prime}(s)\geq 0\ \ \text{for all}\ s\in\mathbb{R}. \tag{3.3}\]
Given \(u=(u_{i})_{i\in\mathbb{Z}^{N}}\in\ell^{2}\), denote by \(f(u)=(f(u_{i}))_{i\in\mathbb{Z}^{N}}\). By (3.3) we find that for every \(R>0\), there exists a constant \(L_{R}>0\) such that for all \(u,v\in\ell^{2}\) with \(\|u\|\leq R\) and \(\|v\|\leq R\),
\[\|f(u)-f(v)\|\leq L_{R}\|u-v\|. \tag{3.4}\]
Moreover,
\[(f(u)-f(v),u-v)\geq 0\ \ \text{for all}\ \ u,\ v\in\ell^{2}. \tag{3.5}\]
For the nonlinear diffusion term, we assume that for every \(i\in\mathbb{Z}^{N}\) and \(k\in\mathbb{N}\), \(\sigma_{k,i}:\mathbb{R}^{+}\times\mathbb{R}\to\mathbb{R}\) satisfies
\[\sigma_{k,i}(t,s)=h_{k,i}(t)+\delta_{k,i}\sigma_{k}^{0}(s),\quad\forall\ t\in \mathbb{R}^{+},\ s\in\mathbb{R}, \tag{3.6}\]
where \(\sigma_{k}^{0}:\mathbb{R}\to\mathbb{R}\) is locally Lipschitz continuous in the sense that for any bounded interval \(I\), there exists a constant \(L_{I}>0\) such that
\[|\sigma_{k}^{0}(s_{1})-\sigma_{k}^{0}(s_{2})|\leq L_{I}|s_{1}-s_{2}|,\ \ \text{ for all}\ \ s_{1},s_{2}\in I,\ k\in\mathbb{N}. \tag{3.7}\]
We further assume that there exists \(\alpha>0\) such that
\[|\sigma_{k}^{0}(s)|\leq\alpha(1+|s|),\ \ \text{ for all}\ \ s\in\mathbb{R}\ \ \ \text{and}\ \ \ k\in\mathbb{N}. \tag{3.8}\]
For the sequence \(\delta=(\delta_{k,i})_{k\in\mathbb{N},i\in\mathbb{Z}^{N}}\) in (3.6) we assume
\[\|\delta\|^{2}=\sum_{k\in\mathbb{N}}\sum_{i\in\mathbb{Z}^{N}}|\delta_{k,i}|^{2 }<\infty. \tag{3.9}\]
For each \(k\in\mathbb{N}\), define an operator \(\sigma_{k}:\ell^{2}\to\ell^{2}\) by
\[\sigma_{k}(u)=(\delta_{k,i}\sigma_{k}^{0}(u_{i}))_{i\in\mathbb{Z}^{N}},\ \ \text{ for all}\ \ u=(u_{i})_{i\in\mathbb{Z}^{N}}\in\ell^{2}. \tag{3.10}\]
Then by (3.8), (3.9) and (3.10) we see that
\[\sum_{k\in\mathbb{N}}\|\sigma_{k}(u)\|^{2}\leq 2\alpha^{2}\|\delta\|^{2}(1+\|u\|^{2}),\ \ \ \text{ for all}\ u\in\ell^{2}. \tag{3.11}\]
Furthermore, by (3.7), we find that for every \(R>0\), there exists \(L_{R}>0\) such that for all \(u,v\in\ell^{2}\) with \(\|u\|\leqslant R\) and \(\|v\|\leqslant R\),
\[\sum_{k\in\mathbb{N}}\|\sigma_{k}(u)-\sigma_{k}(v)\|^{2}\leqslant L_{R}\|\delta\|^{2}\|u-v\|^{2}. \tag{3.12}\]
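As a concrete instance (an illustration only, not one of the paper's standing assumptions), \(\sigma_{k}^{0}(s)=\sin s\) satisfies (3.7) and (3.8) with \(\alpha=1\), and the bound (3.11) can be checked numerically on a truncated lattice:

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_sites = 20, 200            # truncation of k in N and of i in Z^N
delta = rng.normal(size=(n_modes, n_sites)) * 0.05  # square-summable coefficients
u = rng.normal(size=n_sites)          # a finitely supported element of l^2

alpha = 1.0                           # sigma_k^0(s) = sin(s) obeys (3.8) with alpha = 1
sigma = delta * np.sin(u)             # (sigma_k(u))_i = delta_{k,i} sin(u_i), cf. (3.10)

lhs = np.sum(sigma**2)                                       # sum_k ||sigma_k(u)||^2
rhs = 2 * alpha**2 * np.sum(delta**2) * (1 + np.sum(u**2))   # right side of (3.11)
print(lhs <= rhs)                     # True
```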
For the sequences \(g(t)=(g_{i}(t))_{i\in\mathbb{Z}^{N}}\) in (3.1) and \(h_{k}(t)=(h_{k,i}(t))_{i\in\mathbb{Z}^{N}}\) in (3.6), we assume that for every \(T>0\),
\[\int_{0}^{T}\|g(t)\|^{2}dt<\infty\quad\text{and}\quad\sum_{k=1}^{\infty}\|h_{k }\|_{L^{\infty}(0,T;\ell^{2})}^{2}<\infty. \tag{3.13}\]
Note that system (1.1)-(1.2) is equivalent to the Ito stochastic equation for \(u^{\varepsilon}=(u_{i}^{\varepsilon})_{i\in\mathbb{Z}^{N}}\) in \(\ell^{2}\):
\[du^{\varepsilon}(t)+\nu Au^{\varepsilon}(t)dt+f(u^{\varepsilon}(t))dt+\gamma u ^{\varepsilon}(t)dt=g(t)dt+\sqrt{\varepsilon}\sum_{k=1}^{\infty}\left(h_{k}(t )+\sigma_{k}(u^{\varepsilon}(t))\right)dW_{k}(t), \tag{3.14}\]
with initial data
\[u^{\varepsilon}(0)=u_{0}\in\ell^{2}. \tag{3.15}\]
Under conditions (3.2), (3.7)-(3.9) and (3.13), one can show that system (3.14)-(3.15) is well-posed in \(\ell^{2}\) for every \(\varepsilon\in(0,1)\) (see, e.g., [66, Theorem 2.5]); more precisely, for every \(u_{0}\in L^{2}(\Omega,\mathcal{F}_{0};\ell^{2})\), there exists a unique continuous \(\ell^{2}\)-valued \(\mathcal{F}_{t}\)-adapted stochastic process \(u^{\varepsilon}\) such that \(u^{\varepsilon}\in L^{2}(\Omega,C([0,T],\ell^{2}))\) for all \(T>0\), and for almost all \(\omega\in\Omega\),
\[u^{\varepsilon}(t)=u_{0}+\int_{0}^{t}\left(-\nu Au^{\varepsilon}(s)-f(u^{ \varepsilon}(s))-\gamma u^{\varepsilon}(s)+g(s)\right)ds+\sqrt{\varepsilon} \sum_{k=1}^{\infty}\int_{0}^{t}\left(h_{k}(s)+\sigma_{k}(u^{\varepsilon}(s)) \right)dW_{k}\]
in \(\ell^{2}\) for all \(t\geqslant 0\). Moreover, for every \(T>0\), there exists a positive number \(C=C(T)\) independent of \(u_{0}\) and \(\varepsilon\in(0,1)\) such that
\[\mathbb{E}\left(\|u^{\varepsilon}\|_{C([0,T],\ell^{2})}^{2}\right)\leqslant C \left(\mathbb{E}(\|u_{0}\|^{2})+1+\int_{0}^{T}(\|g(s)\|^{2}+\sum_{k=1}^{ \infty}\|h_{k}(s)\|^{2})ds\right). \tag{3.16}\]
For convenience, we set
\[H=\{u=(u_{j})_{j=1}^{\infty}:\sum_{j=1}^{\infty}|u_{j}|^{2}<\infty\}.\]
For every \(k\in\mathbb{N}\), let \(e_{k}=(\delta_{k,j})_{j=1}^{\infty}\) with \(\delta_{k,j}=1\) for \(j=k\) and \(\delta_{k,j}=0\) otherwise. Then \(\{e_{k}\}_{k=1}^{\infty}\) is an orthonormal basis of \(H\). Let \(I\) be the identity operator on \(H\) and \(W\) be the cylindrical Wiener process in \(H\) with covariance operator \(I\) as given by
\[W(t)=\sum_{k=1}^{\infty}W_{k}(t)e_{k},\quad t\in\mathbb{R}^{+},\]
where the series converges in \(L^{2}(\Omega,\mathcal{F};C([0,T],U))\) for every \(T>0\) with \(U\) being a separable Hilbert space such that the embedding \(H\hookrightarrow U\) is a Hilbert-Schmidt operator.
Given \(u\in\ell^{2}\) and \(t\geq 0\), define \(\sigma(t,u):H\to\ell^{2}\) by
\[\sigma(t,u)(v)=\sum_{k=1}^{\infty}\left(h_{k}(t)+\sigma_{k}(u)\right)v_{k}, \quad\forall\ v=(v_{k})_{k=1}^{\infty}\in H. \tag{3.17}\]
Note that the series in (3.17) is convergent in \(\ell^{2}\) by (3.11) and (3.13). Moreover, the operator \(\sigma(t,u):H\to\ell^{2}\) is Hilbert-Schmidt and
\[\|\sigma(t,u)\|_{L(H,\ell^{2})}\leq\|\sigma(t,u)\|_{L_{2}(H,\ell^{2})}=\left( \sum_{k=1}^{\infty}\|h_{k}(t)+\sigma_{k}(u)\|^{2}\right)^{\frac{1}{2}}<\infty. \tag{3.18}\]
Hereafter, we use \(L(H,\ell^{2})\) to denote the space of bounded linear operators from \(H\) to \(\ell^{2}\) with norm \(\|\cdot\|_{L(H,\ell^{2})}\), and use \(L_{2}(H,\ell^{2})\) to denote the space of Hilbert-Schmidt operators from \(H\) to \(\ell^{2}\) with norm \(\|\cdot\|_{L_{2}(H,\ell^{2})}\).
With above notation, system (3.14)-(3.15) can be reformulated as
\[du^{\varepsilon}(t)+\nu Au^{\varepsilon}(t)dt+f(u^{\varepsilon}(t))dt+\gamma u ^{\varepsilon}(t)dt=g(t)dt+\sqrt{\varepsilon}\sigma(t,u^{\varepsilon}(t))dW( t), \tag{3.19}\]
with initial data
\[u^{\varepsilon}(0)=u_{0}\in\ell^{2}. \tag{3.20}\]
In the next section, we examine the large deviation principle of (3.19)-(3.20).
## 4 Large deviation principles of lattice systems
In this section, we prove the large deviation principle of the family of solutions \(\{u^{\varepsilon}\}\) of (3.19)-(3.20) as \(\varepsilon\to 0\) which is stated below.
**Theorem 4.1**.: _Suppose that (3.2), (3.7)-(3.9) and (3.13) hold, and \(u^{\varepsilon}\) is the solution of (3.19)-(3.20). Then the family \(\{u^{\varepsilon}\}\), as \(\varepsilon\to 0\), satisfies the large deviation principle in \(C([0,T],\ell^{2})\) with the good rate function as given by (4.73)._
The rest of the paper is devoted to the proof of Theorem 4.1. Note that for every \(\varepsilon\in(0,1)\) and \(T>0\), by the existence and uniqueness of solution to (3.19)-(3.20) in \(C([0,T],\ell^{2})\), there exists a Borel-measurable map \(\mathcal{G}^{\varepsilon}:C([0,T],U)\to C([0,T],\ell^{2})\) such that
\[u^{\varepsilon}=\mathcal{G}^{\varepsilon}(W),\quad\text{P-almost surely}.\]
To study the large deviation principle of \(\{u^{\varepsilon}\}\) as \(\varepsilon\to 0\), we introduce a deterministic control system corresponding to (3.19). Given a control \(v\in L^{2}(0,T;H)\), solve for \(u_{v}\) in terms of the controlled equation:
\[\frac{du_{v}(t)}{dt}=-\nu Au_{v}(t)-f(u_{v}(t))-\gamma u_{v}(t)+g(t)+\sigma(t,u _{v}(t))v(t), \tag{4.21}\]
with initial data
\[u_{v}(0)=u_{0}\in\ell^{2}. \tag{4.22}\]
As usual, by a solution \(u_{v}\) to (4.21)-(4.22) on \([0,T]\), we mean \(u_{v}\in C([0,T],\ell^{2})\) such that for all \(t\in[0,T]\),
\[u_{v}(t)=u_{0}+\int_{0}^{t}\left(-\nu Au_{v}(s)-f(u_{v}(s))-\gamma u_{v}(s)+g( s)+\sigma(s,u_{v}(s))v(s)\right)ds. \tag{4.23}\]
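For intuition, here is a minimal explicit-Euler sketch of the controlled system (4.21) under simplifying assumptions: \(N=1\) on a truncated lattice, a single noise mode with \(h_{1}=0\), \(\sigma_{1}^{0}(s)=s\) and constant coefficients \(\delta_{1,i}=\delta\) (on all of \(\mathbb{Z}^{N}\), condition (3.9) would require decaying coefficients), \(F_{0}(s)=s^{3}-s\) so that \(\gamma=-1\) and \(f(s)=s^{3}\), \(g=0\), and a constant control. The paper itself works on the full lattice; well-posedness is established in Lemma 4.2 below.

```python
import numpy as np

m, nu, gamma, delta = 50, 1.0, -1.0, 0.1  # lattice {-m,...,m}, nu, gamma, noise size
T, steps = 1.0, 2000
dt = T / steps

def A(u):
    """Negative discrete 1-D Laplacian, zero-padded outside the truncated lattice."""
    up = np.concatenate([u[1:], [0.0]])
    um = np.concatenate([[0.0], u[:-1]])
    return 2.0 * u - up - um

sites = np.arange(-m, m + 1, dtype=float)
u = np.exp(-sites**2 / 10.0)              # initial datum u_0 in l^2
v = 0.5                                   # a constant control v in L^2(0,T;H)

for n in range(steps):
    # explicit Euler step for (4.21): du/dt = -nu*A u - f(u) - gamma*u + sigma(u) v
    u = u + dt * (-nu * A(u) - u**3 - gamma * u + (delta * u) * v)

print(np.linalg.norm(u))                  # ||u_v(T)||, finite as Lemma 4.2 guarantees
```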
We first prove the existence and uniqueness of solutions to (4.21)-(4.22) in the sense of (4.23).
**Lemma 4.2**.: _Suppose that (3.2), (3.7)-(3.9) and (3.13) hold. Then for every \(v\in L^{2}(0,T;H)\), problem (4.21)-(4.22) has a unique solution \(u_{v}\in C([0,T],\ell^{2})\)._
_Furthermore, for each \(R_{1}>0\) and \(R_{2}>0\), there exists \(C_{1}=C_{1}(R_{1},R_{2},T)>0\) such that for any \(u_{0,1},u_{0,2}\in\ell^{2}\) with \(\|u_{0,1}\|\leq R_{1},\ \|u_{0,2}\|\leq R_{1}\), and any \(v_{1},v_{2}\in L^{2}(0,T;H)\) with \(\|v_{1}\|_{L^{2}(0,T;H)}\leq R_{2}\) and \(\|v_{2}\|_{L^{2}(0,T;H)}\leq R_{2}\), the solutions \(u_{v_{1}}\) and \(u_{v_{2}}\) of (4.21)-(4.22) with initial data \(u_{0,1}\) and \(u_{0,2}\), respectively, satisfy_
\[\|u_{v_{1}}-u_{v_{2}}\|_{C([0,T],\ell^{2})}^{2}\leq C_{1}\left(\|u_{0,1}-u_{0, 2}\|^{2}+\|v_{1}-v_{2}\|_{L^{2}(0,T;H)}^{2}\right), \tag{4.24}\]
_and_
\[\|u_{v_{1}}\|_{C([0,T],\ell^{2})}^{2}\leq C_{1}. \tag{4.25}\]
Proof.: Let \(v\in L^{2}(0,T;H)\) be given. We first prove the existence and uniqueness of solution to system (4.21)-(4.22). By dropping the subscript \(v\), system (4.21)-(4.22) can be written as
\[\frac{du(t)}{dt}=G(t,u(t)),\quad u(0)=u_{0}, \tag{4.26}\]
where
\[G(t,u)=-\nu Au-f(u)-\gamma u+g(t)+\sigma(t,u)v(t). \tag{4.27}\]
Since \(v\in L^{2}(0,T;H)\), by (3.3), (3.4) and (3.11) we find from (4.27) that for every \(R>0\), there exists a constant \(c_{1}=c_{1}(R)>0\) such that for all \(t\in[0,T]\) and \(u\in\ell^{2}\) with \(\|u\|\leq R\),
\[\|G(t,u)\|^{2}\leq c_{1}\left(1+\|g(t)\|^{2}+\|u\|^{2}\right)+c_{1}\left(1+ \sum_{k=1}^{\infty}\|h_{k}(t)\|^{2}+\|u\|^{2}\right)\|v(t)\|_{H}^{2}. \tag{4.28}\]
Similarly, by (3.4) and (3.12) we find that there exists \(c_{2}=c_{2}(R)>0\) such that for all \(t\in[0,T]\) and \(u_{1},u_{2}\in\ell^{2}\) with \(\|u_{1}\|\leq R\) and \(\|u_{2}\|\leq R\),
\[\|G(t,u_{1})-G(t,u_{2})\|^{2}\leq c_{2}\left(1+\|v(t)\|_{H}^{2}\right)\|u_{1}-u_ {2}\|^{2}. \tag{4.29}\]
It follows from (4.28)-(4.29) that for each \(u_{0}\in\ell^{2}\), equation (4.26) has a unique local maximal solution \(u\in C([0,T_{0}),\ell^{2})\) for some \(0<T_{0}\leq T\). Next, we show this solution is actually defined on the entire interval \([0,T]\) by uniform estimates of solutions.
By (4.26) we have for all \(t\in(0,T_{0})\),
\[\frac{1}{2}\frac{d}{dt}\|u(t)\|^{2}=-\nu\|Bu(t)\|^{2}-(f(u(t)),u(t))-\gamma\|u (t)\|^{2}\]
\[+(g(t),u(t))+(\sigma(t,u(t))v(t),u(t)),\]
which along with (3.3) and (3.5) implies that
\[\frac{d}{dt}\|u(t)\|^{2}\leq-2\gamma\|u(t)\|^{2}+2(g(t),u(t))+2(\sigma(t,u(t) )v(t),u(t)). \tag{4.30}\]
By Young's inequality we have
\[2|(g(t),u(t))|\leq\|u(t)\|^{2}+\|g(t)\|^{2}. \tag{4.31}\]
For the last term on the right-hand side of (4.30), by (3.18) and (3.11) we get
\[2|(\sigma(t,u(t))v(t),u(t))|\leq\|\sigma(t,u(t))v(t)\|^{2}+\|u(t)\|^{2}\]
\[\leq\|\sigma(t,u(t))\|_{L(H,\ell^{2})}^{2}\|v(t)\|_{H}^{2}+\|u(t)\|^{2}\]
\[\leq\sum_{k=1}^{\infty}\|h_{k}(t)+\sigma_{k}(u(t))\|^{2}\|v(t)\|_{H}^{2}+\|u( t)\|^{2}\]
\[\leq 2\|v(t)\|_{H}^{2}\sum_{k=1}^{\infty}\|h_{k}(t)\|^{2}+2\|v(t)\|_{H}^{2}\sum_{k=1}^{\infty}\|\sigma_{k}(u(t))\|^{2}+\|u(t)\|^{2}\]
\[\leq 2\|v(t)\|_{H}^{2}\sum_{k=1}^{\infty}\|h_{k}(t)\|^{2}+4\alpha^{2}\|\delta\|^{2}(1+\|u(t)\|^{2})\|v(t)\|_{H}^{2}+\|u(t)\|^{2}. \tag{4.32}\]
By (4.30)-(4.32) we get for all \(t\in(0,T_{0})\),
\[\frac{d}{dt}\|u(t)\|^{2}\leq\left(2-2\gamma+4\alpha^{2}\|\delta\|^{2}\|v(t)\|_ {H}^{2}\right)\|u(t)\|^{2}\]
\[+4\alpha^{2}\|\delta\|^{2}\|v(t)\|_{H}^{2}+\|g(t)\|^{2}+2\|v(t)\|_{H}^{2}\sum_ {k=1}^{\infty}\|h_{k}(t)\|^{2}. \tag{4.33}\]
By (4.33) we find that for all \(t\in[0,T_{0})\),
\[\|u(t)\|^{2}\leqslant e^{\int_{0}^{t}(2-2\gamma+4\alpha^{2}\|\delta\|^{2}\|v(r)\|_{H}^{2})dr}\|u_{0}\|^{2}\]
\[+4\alpha^{2}\|\delta\|^{2}\int_{0}^{t}e^{\int_{s}^{t}(2-2\gamma+4\alpha^{2}\|\delta\|^{2}\|v(r)\|_{H}^{2})dr}\|v(s)\|_{H}^{2}ds\]
\[+\int_{0}^{t}e^{\int_{s}^{t}(2-2\gamma+4\alpha^{2}\|\delta\|^{2}\|v(r)\|_{H}^{2})dr}\|g(s)\|^{2}ds\]
\[+2\sum_{k=1}^{\infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}\int_{0}^{t}e^{\int_{s}^{t}(2-2\gamma+4\alpha^{2}\|\delta\|^{2}\|v(r)\|_{H}^{2})dr}\|v(s)\|_{H}^{2}ds\]
\[\leqslant e^{(2-2\gamma)T+4\alpha^{2}\|\delta\|^{2}\int_{0}^{T}\|v(r)\|_{H}^{2}dr}\|u_{0}\|^{2}\]
\[+4\alpha^{2}\|\delta\|^{2}e^{(2-2\gamma)T+4\alpha^{2}\|\delta\|^{2}\int_{0}^{T}\|v(r)\|_{H}^{2}dr}\int_{0}^{T}\|v(s)\|_{H}^{2}ds\]
\[+e^{(2-2\gamma)T+4\alpha^{2}\|\delta\|^{2}\int_{0}^{T}\|v(r)\|_{H}^{2}dr}\int_{0}^{T}\|g(s)\|^{2}ds\]
\[+2e^{(2-2\gamma)T+4\alpha^{2}\|\delta\|^{2}\int_{0}^{T}\|v(r)\|_{H}^{2}dr}\sum_{k=1}^{\infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}\int_{0}^{T}\|v(s)\|_{H}^{2}ds. \tag{4.34}\]
By (3.13) and (4.34) we infer that for each \(R_{1}>0\) and \(R_{2}>0\), there exists \(c_{3}=c_{3}(R_{1},R_{2},T)>0\) such that for any \(u_{0}\in\ell^{2}\) with \(\|u_{0}\|\leqslant R_{1}\) and any \(v\in L^{2}(0,T;H)\) with \(\|v\|_{L^{2}(0,T;H)}\leqslant R_{2}\), the solution \(u\) satisfies
\[\|u(t)\|^{2}\leqslant c_{3},\quad\forall\ t\in[0,T_{0}), \tag{4.35}\]
which implies that \(T_{0}=T\) and hence the solution \(u\) of (4.26) is defined on the entire interval \([0,T]\).
Next, we prove (4.24). Let \(v_{1},v_{2}\) be given in \(L^{2}(0,T;H)\), and denote by
\[u_{1}=u_{v_{1}}\quad\mbox{ and }\quad u_{2}=u_{v_{2}}.\]
Suppose \(\|u_{0,1}\|\leqslant R_{1}\), \(\|u_{0,2}\|\leqslant R_{1}\), \(\|v_{1}\|_{L^{2}(0,T;H)}\leqslant R_{2}\) and \(\|v_{2}\|_{L^{2}(0,T;H)}\leqslant R_{2}\). Then by (4.35) we have
\[\|u_{1}(t)\|+\|u_{2}(t)\|\leqslant c_{4},\quad\forall\ t\in[0,T], \tag{4.36}\]
where \(c_{4}=c_{4}(R_{1},R_{2},T)>0\). Note that (4.36) implies (4.25).
By (4.26)-(4.27) we have
\[\frac{d}{dt}\|u_{1}(t)-u_{2}(t)\|^{2}=-2\nu\|B(u_{1}(t)-u_{2}(t))\|^{2}\]
\[-2(f(u_{1}(t))-f(u_{2}(t)),u_{1}(t)-u_{2}(t))-2\gamma\|u_{1}(t)-u_{2}(t)\|^{2}\]
\[+2\left(\sigma(t,u_{1}(t))v_{1}(t)-\sigma(t,u_{2}(t))v_{2}(t),\ u_{1}(t)-u_{2}(t) \right),\]
which together with (3.5) gives
\[\frac{d}{dt}\|u_{1}(t)-u_{2}(t)\|^{2}\leq-2\gamma\|u_{1}(t)-u_{2}(t)\|^{2}\]
\[+2\left(\sigma(t,u_{1}(t))v_{1}(t)-\sigma(t,u_{2}(t))v_{2}(t),\ u_{1}(t)-u_{2}(t )\right). \tag{4.37}\]
For the last term in (4.37), by (3.17) we get
\[2\left(\sigma(t,u_{1}(t))v_{1}(t)-\sigma(t,u_{2}(t))v_{2}(t),\ u_{1}(t)-u_{2}(t)\right)\]
\[\leq 2\left\|\sigma(t,u_{1}(t))v_{1}(t)-\sigma(t,u_{2}(t))v_{2}(t)\right\|\|u_{1}(t)-u_{2}(t)\|\]
\[\leq 2\left\|\sum_{k=1}^{\infty}(\sigma_{k}(u_{1}(t))v_{1,k}(t)-\sigma_{k}(u_{2}(t))v_{2,k}(t))\right\|\|u_{1}(t)-u_{2}(t)\|\]
\[+2\left\|\sum_{k=1}^{\infty}h_{k}(t)(v_{1,k}(t)-v_{2,k}(t))\right\|\|u_{1}(t)-u_{2}(t)\|\]
\[\leq 2\left\|\sum_{k=1}^{\infty}(\sigma_{k}(u_{1}(t))-\sigma_{k}(u_{2}(t)))v_{1,k}(t)\right\|\|u_{1}(t)-u_{2}(t)\|\]
\[+2\left\|\sum_{k=1}^{\infty}\sigma_{k}(u_{2}(t))(v_{1,k}(t)-v_{2,k}(t))\right\|\|u_{1}(t)-u_{2}(t)\|\]
\[+2\left\|\sum_{k=1}^{\infty}h_{k}(t)(v_{1,k}(t)-v_{2,k}(t))\right\|\|u_{1}(t)-u_{2}(t)\|\]
\[\leq 2\left(\sum_{k=1}^{\infty}\|\sigma_{k}(u_{1}(t))-\sigma_{k}(u_{2}(t))\|^{2}\right)^{\frac{1}{2}}\|v_{1}(t)\|_{H}\|u_{1}(t)-u_{2}(t)\|\]
\[+2\left(\sum_{k=1}^{\infty}\|\sigma_{k}(u_{2}(t))\|^{2}\right)^{\frac{1}{2}}\|v_{1}(t)-v_{2}(t)\|_{H}\|u_{1}(t)-u_{2}(t)\|\]
\[+2\left(\sum_{k=1}^{\infty}\|h_{k}(t)\|^{2}\right)^{\frac{1}{2}}\|v_{1}(t)-v_{2}(t)\|_{H}\|u_{1}(t)-u_{2}(t)\|. \tag{4.38}\]
By (3.12) and (4.36) we see that there exists \(c_{5}=c_{5}(R_{1},R_{2},T)>0\) such that for all \(t\in[0,T]\),
\[\left(\sum_{k=1}^{\infty}\|\sigma_{k}(u_{1}(t))-\sigma_{k}(u_{2}(t))\|^{2}\right)^{\frac{1}{2}}\leq c_{5}\|\delta\|\|u_{1}(t)-u_{2}(t)\|. \tag{4.39}\]
On the other hand, by (3.11) and (4.36) we know that there exists \(c_{6}=c_{6}(R_{1},R_{2},T)>0\) such that for all \(t\in[0,T]\),
\[\left(\sum_{k=1}^{\infty}\|\sigma_{k}(u_{2}(t))\|^{2}\right)^{\frac{1}{2}}\leq \alpha\|\delta\|c_{6}. \tag{4.40}\]
By (4.38)-(4.40) we get for all \(t\in[0,T]\),
\[2\left(\sigma(t,u_{1}(t))v_{1}(t)-\sigma(t,u_{2}(t))v_{2}(t),\ u_{1}(t)-u_{2}( t)\right)\]
\[\leq 2c_{5}\|\delta\|\|v_{1}(t)\|_{H}\|u_{1}(t)-u_{2}(t)\|^{2}+2c_{6}\alpha\| \delta\|\|v_{1}(t)-v_{2}(t)\|_{H}\|u_{1}(t)-u_{2}(t)\|\]
\[\leq(2+2c_{5}\|\delta\|\|v_{1}(t)\|_{H})\,\|u_{1}(t)-u_{2}(t)\|^{2}\]
\[+\left(c_{6}^{2}\alpha^{2}\|\delta\|^{2}+\sum_{k=1}^{\infty}\|h_{k}(t)\|^{2} \right)\|v_{1}(t)-v_{2}(t)\|_{H}^{2}. \tag{4.41}\]
By (4.37) and (4.41) we obtain for all \(t\in(0,T]\),
\[\frac{d}{dt}\|u_{1}(t)-u_{2}(t)\|^{2}\leq(2-2\gamma+2c_{5}\|\delta\|\|v_{1}(t )\|_{H})\,\|u_{1}(t)-u_{2}(t)\|^{2}\]
\[+\left(c_{6}^{2}\alpha^{2}\|\delta\|^{2}+\sum_{k=1}^{\infty}\|h_{k}(t)\|^{2} \right)\|v_{1}(t)-v_{2}(t)\|_{H}^{2}, \tag{4.42}\]
from which we get for all \(t\in[0,T]\),
\[\|u_{1}(t)-u_{2}(t)\|^{2}\leq\mathrm{e}^{\int_{0}^{t}(2-2\gamma+2c_{5}\| \delta\|\|v_{1}(r)\|_{H})dr}\|u_{0,1}-u_{0,2}\|^{2}\]
\[+\left(c_{6}^{2}\alpha^{2}\|\delta\|^{2}+\sum_{k=1}^{\infty}\|h_{k}\|_{L^{ \infty}(0,T;\ell^{2})}^{2}\right)\int_{0}^{t}\mathrm{e}^{\int_{s}^{t}(2-2 \gamma+2c_{5}\|\delta\|\|v_{1}(r)\|_{H})dr}\|v_{1}(s)-v_{2}(s)\|_{H}^{2}ds\]
\[\leq e^{(2-2\gamma)T+2c_{5}\sqrt{T}R_{2}\|\delta\|}\|u_{0,1}-u_{0,2}\|^{2}\]
\[+\left(c_{6}^{2}\alpha^{2}\|\delta\|^{2}+\sum_{k=1}^{\infty}\|h_{k}\|_{L^{ \infty}(0,T;\ell^{2})}^{2}\right)e^{(2-2\gamma)T+2c_{5}\sqrt{T}R_{2}\|\delta \|}\|v_{1}-v_{2}\|_{L^{2}(0,T;H)}^{2}. \tag{4.43}\]
Then (4.24) follows from (4.43) immediately.
As a consequence of Lemma 4.2, we find that the solution \(u_{v}\) of (4.21)-(4.22) is continuous in \(C([0,T],\ell^{2})\) with respect to initial data \(u_{0}\) in \(\ell^{2}\) and control \(v\) in \(L^{2}(0,T;H)\). In the sequel, we will further prove the continuity of \(u_{v}\) in \(C([0,T],\ell^{2})\) with respect to \(v\in L^{2}(0,T;H)\) in the weak topology of \(L^{2}(0,T;H)\), which is related to condition (**H2**) for the Laplace principle of the solutions of (3.19)-(3.20). As a necessary step, we first prove the following convergence.
**Lemma 4.3**.: _Suppose that (3.7)-(3.9) and (3.13) hold. For a fixed \(\xi\in L^{\infty}(0,T;\ell^{2})\), define an operator \(\mathcal{T}:L^{2}(0,T;H)\to C([0,T],\ell^{2})\) by_
\[\mathcal{T}(v)(t)=\int_{0}^{t}\sigma(s,\xi(s))v(s)ds,\quad\forall\ v\in L^{2}( 0,T;H). \tag{4.44}\]
_Then we have:_
* \(\mathcal{T}\) _is continuous from the weak topology of_ \(L^{2}(0,T;H)\) _to the strong topology of_ \(C([0,T],\ell^{2})\)_._
* \(\mathcal{T}:L^{2}(0,T;H)\to C([0,T],\ell^{2})\) _is compact with respect to the strong topology of_ \(C([0,T],\ell^{2})\)_._
Proof.: (i). Note that the operator \(\mathcal{T}:L^{2}(0,T;H)\to C([0,T],\ell^{2})\) is well defined. Indeed, by (3.11) and (3.18) we have, for every \(v\in L^{2}(0,T;H)\),
\[\int_{0}^{T}\|\sigma(s,\xi(s))v(s)\|^{2}ds\leqslant\int_{0}^{T}\|\sigma(s,\xi (s))\|^{2}_{L(H,\ell^{2})}\|v(s)\|^{2}_{H}ds\]
\[\leqslant 2\int_{0}^{T}\sum_{k=1}^{\infty}\left(\|h_{k}(s)\|^{2}+\|\sigma_{k}( \xi(s))\|^{2}\right)\|v(s)\|^{2}_{H}ds\]
\[\leqslant\int_{0}^{T}\left(2\sum_{k=1}^{\infty}\|h_{k}(s)\|^{2}+4\alpha^{2}\| \delta\|^{2}(1+\|\xi(s)\|^{2})\right)\|v(s)\|^{2}_{H}ds\]
\[\leqslant\left(2\sum_{k=1}^{\infty}\|h_{k}\|^{2}_{L^{\infty}(0,T;\ell^{2})}+4\alpha^{2}\|\delta\|^{2}(1+\|\xi\|^{2}_{L^{\infty}(0,T;\ell^{2})})\right)\int_{0}^{T}\|v(s)\|^{2}_{H}ds<\infty, \tag{4.45}\]
which implies that \(\mathcal{T}(v)\) as given by (4.44) belongs to \(C([0,T],\ell^{2})\) for all \(v\in L^{2}(0,T;H)\).
It is evident that \(\mathcal{T}:L^{2}(0,T;H)\to C([0,T],\ell^{2})\) is linear. On the other hand, by (4.44) we have for all \(v\in L^{2}(0,T;H)\),
\[\|\mathcal{T}(v)\|^{2}_{C([0,T],\ell^{2})}\leqslant\left(\int_{0}^{T}\|\sigma (s,\xi(s))v(s)\|ds\right)^{2}\leqslant T\int_{0}^{T}\|\sigma(s,\xi(s))v(s)\|^ {2}ds,\]
which along with (4.45) shows that \(\mathcal{T}:L^{2}(0,T;H)\to C([0,T],\ell^{2})\) is bounded.
Since \(\mathcal{T}:L^{2}(0,T;H)\to C([0,T],\ell^{2})\) is linear and continuous in the strong topology, we know that \(\mathcal{T}:L^{2}(0,T;H)\to C([0,T],\ell^{2})\) is also continuous in the weak topology; that is, if \(v_{n}\to v\) weakly in \(L^{2}(0,T;H)\), then \(\mathcal{T}(v_{n})\to\mathcal{T}(v)\) weakly in \(C([0,T],\ell^{2})\). Next, we prove actually \(\mathcal{T}(v_{n})\to\mathcal{T}(v)\) strongly in \(C([0,T],\ell^{2})\) for which we need to verify:
* For every \(t\in[0,T]\), the set \(\{\mathcal{T}(v_{n})(t):n\in\mathbb{N}\}\) is precompact in \(\ell^{2}\).
* The sequence \(\{\mathcal{T}(v_{n})\}_{n=1}^{\infty}\) is equicontinuous on \([0,T]\).
The equicontinuity of \(\{\mathcal{T}(v_{n})\}_{n=1}^{\infty}\) follows from (4.45) and the boundedness of \(\{v_{n}\}_{n=1}^{\infty}\) in \(L^{2}(0,T;H)\) due to the fact that \(v_{n}\to v\) weakly in \(L^{2}(0,T;H)\). It remains to show (a) for which we will prove the set \(\{\mathcal{T}(v_{n})(t):n\in\mathbb{N}\}\) is totally bounded in \(\ell^{2}\).
For every \(j\in\mathbb{Z}^{N}\), let \(e_{j}=(\delta_{j,k})_{k\in\mathbb{Z}^{N}}\). Then \(\{e_{j}\}_{j\in\mathbb{Z}^{N}}\) is an orthonormal basis of \(\ell^{2}\). Given \(m\in\mathbb{N}\), let \(P_{m}:\ell^{2}\to\ \text{span}\{e_{j}:|j|\leq m\}\) be the projection operator and \(Q_{m}=I-P_{m}\). Given \(t\in[0,T]\), since \(\sigma(t,\xi(t)):H\to\ell^{2}\) is Hilbert-Schmidt we get
\[\lim_{m\to\infty}\left\|Q_{m}\sigma(t,\xi(t))\right\|_{L_{2}(H,\ell^{2})}^{2}=0,\]
which along with the dominated convergence theorem implies that for every \(t\in[0,T]\),
\[\lim_{m\to\infty}\int_{0}^{t}\left\|Q_{m}\sigma(s,\xi(s))\right\|_{L_{2}(H, \ell^{2})}^{2}ds=0. \tag{4.46}\]
On the other hand, by (4.44) we have
\[\left\|\mathcal{T}(v_{n})(t)\right\|\leq \int_{0}^{t}\left\|\sigma(s,\xi(s))v_{n}(s)\right\|ds\leq\int_{0}^ {t}\left\|\sigma(s,\xi(s))\right\|_{L_{2}(H,\ell^{2})}\|v_{n}(s)\|_{H}ds\] \[\leq \left(\int_{0}^{t}\left\|\sigma(s,\xi(s))\right\|_{L_{2}(H,\ell^ {2})}^{2}ds\right)^{\frac{1}{2}}\left(\int_{0}^{t}\|v_{n}(s)\|_{H}^{2}ds \right)^{\frac{1}{2}}. \tag{4.47}\]
Similarly, for every \(m\in\mathbb{N}\), we have
\[\left\|Q_{m}\mathcal{T}(v_{n})(t)\right\|\leq\int_{0}^{t}\left\|Q_{m}\sigma(s, \xi(s))v_{n}(s)\right\|ds\]
\[\leq \left(\int_{0}^{t}\left\|Q_{m}\sigma(s,\xi(s))\right\|_{L_{2}(H,\ell^{2})} ^{2}ds\right)^{\frac{1}{2}}\left(\int_{0}^{t}\|v_{n}(s)\|_{H}^{2}ds\right)^{ \frac{1}{2}}. \tag{4.48}\]
Since \(\{v_{n}\}_{n=1}^{\infty}\) is bounded in \(L^{2}(0,T;H)\), we see from (4.47)-(4.48) that there exists a positive number \(c_{1}\) independent of \(n,m\in\mathbb{N}\) such that
\[\left\|\mathcal{T}(v_{n})(t)\right\|\leq c_{1},\quad\forall\ n\in\mathbb{N}, \tag{4.49}\]
and
\[\left\|Q_{m}\mathcal{T}(v_{n})(t)\right\|\leq c_{1}\left(\int_{0}^{t}\left\|Q _{m}\sigma(s,\xi(s))\right\|_{L_{2}(H,\ell^{2})}^{2}ds\right)^{\frac{1}{2}}, \quad\forall\ n,m\in\mathbb{N}. \tag{4.50}\]
By (4.46) we find that the right-hand side of (4.50) converges to zero as \(m\to\infty\), and hence, for every \(\eta>0\), there exists \(m_{0}\in\mathbb{N}\) such that for all \(n\in\mathbb{N}\) and \(m\geq m_{0}\),
\[\left\|Q_{m}\mathcal{T}(v_{n})(t)\right\|<\frac{1}{4}\eta. \tag{4.51}\]
By (4.49) we see that \(\{P_{m_{0}}(\mathcal{T}(v_{n})(t))\}_{n=1}^{\infty}\) is bounded in a \((2m_{0}+1)\)-dimensional space, and hence it is precompact. Consequently, \(\{P_{m_{0}}(\mathcal{T}(v_{n})(t))\}_{n=1}^{\infty}\) has a finite open cover of radius \(\frac{1}{4}\eta\), which along with (4.51) shows that the sequence \(\{\mathcal{T}(v_{n})(t)\}_{n=1}^{\infty}\) has a finite open cover of radius \(\eta\). In other words, the sequence \(\{\mathcal{T}(v_{n})(t)\}_{n=1}^{\infty}\) is totally bounded and hence precompact in \(\ell^{2}\).
Then by (a) and (b) we infer that there exists a subsequence \(\{v_{n_{k}}\}_{k=1}^{\infty}\) of \(\{v_{n}\}_{n=1}^{\infty}\) such that \(\mathcal{T}(v_{n_{k}})\to\mathcal{T}(v)\) strongly in \(C([0,T],\ell^{2})\). By a contradiction argument, we conclude that the entire sequence \(\mathcal{T}(v_{n})\to\mathcal{T}(v)\) strongly in \(C([0,T],\ell^{2})\).
(ii). Let \(\{v_{n}\}_{n=1}^{\infty}\) be a bounded sequence in \(L^{2}(0,T;H)\). We will prove the sequence \(\{\mathcal{T}(v_{n})\}_{n=1}^{\infty}\) is precompact in \(C([0,T],\ell^{2})\). Since \(\{v_{n}\}_{n=1}^{\infty}\) is bounded, there exists \(v\in L^{2}(0,T;H)\) and a subsequence \(\{v_{n_{k}}\}_{k=1}^{\infty}\) such that \(v_{n_{k}}\to v\) weakly in \(L^{2}(0,T;H)\). Then by (i) we find that \(\mathcal{T}(v_{n_{k}})\to\mathcal{T}(v)\) strongly in \(C([0,T],\ell^{2})\), which completes the proof.
**Lemma 4.4**.: _Suppose that (3.2), (3.7)-(3.9) and (3.13) hold. Let \(v,v_{n}\in L^{2}(0,T;H)\) for all \(n\in\mathbb{N}\) and \(u_{v}\), \(u_{v_{n}}\) be the solutions of (4.21)-(4.22) corresponding to \(v\) and \(v_{n}\), respectively. If \(v_{n}\to v\) weakly in \(L^{2}(0,T;H)\), then \(u_{v_{n}}\to u_{v}\) strongly in \(C([0,T],\ell^{2})\)._
Proof.: Suppose \(v_{n}\to v\) weakly in \(L^{2}(0,T;H)\). Then \(\{v_{n}\}_{n=1}^{\infty}\) is bounded in \(L^{2}(0,T;H)\). Similar to (4.36), we find that there exists \(c_{1}=c_{1}(T)>0\) such that
\[\sup_{0\leqslant t\leqslant T}\left(\|u_{v_{n}}(t)\|+\|u_{v}(t)\|\right) \leqslant c_{1},\quad\forall\ n\in\mathbb{N}. \tag{4.52}\]
By (4.21)-(4.22) we get
\[\frac{d}{dt}(u_{v_{n}}-u_{v})=-\nu A(u_{v_{n}}-u_{v})-(f(u_{v_{n}})-f(u_{v}))\]
\[-\gamma(u_{v_{n}}-u_{v})+\sigma(t,u_{v_{n}})v_{n}-\sigma(t,u_{v})v. \tag{4.53}\]
By (3.4), (3.11), (3.13), (3.17) and (4.52) we infer that
\[\|\frac{d}{dt}(u_{v_{n}}(t)-u_{v}(t))\|\leqslant c_{2}(1+\|v_{n}(t)\|_{H}+\|v (t)\|_{H}),\quad\forall\ n\in\mathbb{N}, \tag{4.54}\]
where \(c_{2}=c_{2}(T)>0\) is a constant independent of \(n\in\mathbb{N}\). Similar to (4.37), by (4.53) we have
\[\frac{d}{dt}\|u_{v_{n}}-u_{v}\|^{2}\leqslant-2\gamma\|u_{v_{n}}-u_{v}\|^{2}+2 \left(\sigma(t,u_{v_{n}})v_{n}-\sigma(t,u_{v})v,u_{v_{n}}-u_{v}\right). \tag{4.55}\]
For the last term in (4.55), by (3.17) we have
\[2\left(\sigma(t,u_{v_{n}})v_{n}-\sigma(t,u_{v})v,u_{v_{n}}-u_{v}\right)\]
\[=2\left((\sigma(t,u_{v_{n}})-\sigma(t,u_{v}))v_{n},u_{v_{n}}-u_{v}\right)+2 \left(\sigma(t,u_{v})(v_{n}-v),u_{v_{n}}-u_{v}\right)\]
\[=2\left(\sum_{k=1}^{\infty}\left(\sigma_{k}(u_{v_{n}})-\sigma_{k}(u_{v})\right)v _{n,k},\ u_{v_{n}}-u_{v}\right)\]
\[+2\left(\sum_{k=1}^{\infty}\left(h_{k}(t)+\sigma_{k}(u_{v})\right)(v_{n,k}-v_{k} ),\ u_{v_{n}}-u_{v}\right). \tag{4.56}\]
For each \(n\in\mathbb{N}\) and \(t\in[0,T]\), set
\[\psi_{n}(t)=\int_{0}^{t}\sum_{k=1}^{\infty}\left(h_{k}(s)+\sigma_{k}(u_{v}(s)) \right)(v_{n,k}(s)-v_{k}(s))ds=\int_{0}^{t}\sigma(s,u_{v}(s))(v_{n}(s)-v(s))ds. \tag{4.57}\]
Since \(v_{n}\to v\) weakly in \(L^{2}(0,T;H)\), by Lemma 4.3 we get
\[\psi_{n}\to 0\ \ \text{in}\ \ C([0,T],\ell^{2})\ \ \text{as}\ \ n\to\infty. \tag{4.58}\]
Note that
\[2\left(\sum_{k=1}^{\infty}\left(h_{k}(t)+\sigma_{k}(u_{v})\right)(v_{n,k}-v_{ k}),\ u_{v_{n}}-u_{v}\right)=2\left(\frac{d}{dt}\psi_{n},\ u_{v_{n}}-u_{v}\right)\]
\[=2\frac{d}{dt}\left(\psi_{n}(t),\ u_{v_{n}}(t)-u_{v}(t)\right)-2\left(\psi_{n }(t),\ \frac{d}{dt}(u_{v_{n}}(t)-u_{v}(t))\right). \tag{4.59}\]
By (4.56) and (4.59) we obtain
\[2\left(\sigma(t,u_{v_{n}})v_{n}-\sigma(t,u_{v})v,u_{v_{n}}-u_{v}\right)\]
\[=2\left(\sum_{k=1}^{\infty}\left(\sigma_{k}(u_{v_{n}})-\sigma_{k}(u_{v}) \right)v_{n,k},\ u_{v_{n}}-u_{v}\right)\]
\[+2\frac{d}{dt}\left(\psi_{n}(t),\ u_{v_{n}}(t)-u_{v}(t)\right)-2\left(\psi_{n }(t),\ \frac{d}{dt}(u_{v_{n}}(t)-u_{v}(t))\right). \tag{4.60}\]
It follows from (4.55) and (4.60) that
\[\frac{d}{dt}\|u_{v_{n}}-u_{v}\|^{2}\leq-2\gamma\|u_{v_{n}}-u_{v}\|^{2}+2\left( \sum_{k=1}^{\infty}\left(\sigma_{k}(u_{v_{n}})-\sigma_{k}(u_{v})\right)v_{n,k },\ u_{v_{n}}-u_{v}\right)\]
\[+2\frac{d}{dt}\left(\psi_{n}(t),\ u_{v_{n}}(t)-u_{v}(t)\right)-2\left(\psi_{n }(t),\ \frac{d}{dt}(u_{v_{n}}(t)-u_{v}(t))\right). \tag{4.61}\]
We now deal with the right-hand side of (4.61). For the second term on the right-hand side of (4.61), by (3.12) and (4.52) we get
\[2\left(\sum_{k=1}^{\infty}\left(\sigma_{k}(u_{v_{n}})-\sigma_{k}(u_{v}) \right)v_{n,k},\ u_{v_{n}}-u_{v}\right)\]
\[\leqslant 2\left(\sum_{k=1}^{\infty}\|\sigma_{k}(u_{v_{n}})-\sigma_{k}(u_{v}) \|^{2}\right)^{\frac{1}{2}}\|v_{n}\|_{H}\|u_{v_{n}}-u_{v}\|\leqslant c_{3}\| \delta\|\|v_{n}\|_{H}\|u_{v_{n}}-u_{v}\|^{2}, \tag{4.62}\]
where \(c_{3}=c_{3}(T)>0\) is a constant independent of \(n\in\mathbb{N}\).
For the last term on the right-hand side of (4.61), by (4.54) we have
\[2\Big{|}\left(\psi_{n}(t),\ \frac{d}{dt}(u_{v_{n}}(t)-u_{v}(t))\right)\Big{|} \leqslant 2c_{2}(1+\|v_{n}(t)\|_{H}+\|v(t)\|_{H})\|\psi_{n}(t)\|. \tag{4.63}\]
It follows from (4.61)-(4.63) that
\[\frac{d}{dt}\|u_{v_{n}}(t)-u_{v}(t)\|^{2}\leqslant(-2\gamma+c_{3}\|\delta\|\|v_{n}(t)\|_{H})\|u_{v_{n}}(t)-u_{v}(t)\|^{2}\]
\[+2\frac{d}{dt}\left(\psi_{n}(t),\ u_{v_{n}}(t)-u_{v}(t)\right)+2c_{2}(1+\|v_{n }(t)\|_{H}+\|v(t)\|_{H})\|\psi_{n}(t)\|. \tag{4.64}\]
Due to \(u_{v_{n}}(0)=u_{v}(0)=u_{0}\), by integrating (4.64) on \((0,t)\) we get
\[\|u_{v_{n}}(t)-u_{v}(t)\|^{2}\leqslant\int_{0}^{t}(-2\gamma+c_{3}\|\delta\| \|v_{n}(s)\|_{H})\|u_{v_{n}}(s)-u_{v}(s)\|^{2}ds\]
\[+2\left(\psi_{n}(t),\ u_{v_{n}}(t)-u_{v}(t)\right)+2c_{2}\int_{0}^{t}(1+\|v_{n }(s)\|_{H}+\|v(s)\|_{H})\|\psi_{n}(s)\|ds,\]
which shows that for all \(t\in[0,T]\),
\[\sup_{0\leqslant r\leqslant t}\|u_{v_{n}}(r)-u_{v}(r)\|^{2}\leqslant\int_{0}^ {t}(-2\gamma+c_{3}\|\delta\|\|v_{n}(s)\|_{H})\|u_{v_{n}}(s)-u_{v}(s)\|^{2}ds\]
\[+2\sup_{0\leqslant r\leqslant t}(\psi_{n}(r),\ u_{v_{n}}(r)-u_{v}(r))+2c_{2} \int_{0}^{t}(1+\|v_{n}(s)\|_{H}+\|v(s)\|_{H})\|\psi_{n}(s)\|ds. \tag{4.65}\]
For the first term on the right-hand side of (4.65) we have
\[\int_{0}^{t}(-2\gamma+c_{3}\|\delta\|\|v_{n}(s)\|_{H})\|u_{v_{n}}(s)-u_{v}(s) \|^{2}ds\]
\[\leqslant\int_{0}^{t}(-2\gamma+c_{3}\|\delta\|\|v_{n}(s)\|_{H})\sup_{0 \leqslant r\leqslant s}\|u_{v_{n}}(r)-u_{v}(r)\|^{2}ds. \tag{4.66}\]
For the second term on the right-hand side of (4.65) we get
\[2\sup_{0\leqslant r\leqslant t}\left(\psi_{n}(r),\ u_{v_{n}}(r)-u_{v}(r) \right)\leqslant 2\|\psi_{n}\|_{C([0,T],\ell^{2})}\sup_{0\leqslant r\leqslant t} \|u_{v_{n}}(r)-u_{v}(r)\|\]
\[\leqslant\frac{1}{2}\sup_{0\leqslant r\leqslant t}\|u_{v_{n}}(r)-u_{v}(r)\|^{2 }+2\|\psi_{n}\|_{C([0,T],\ell^{2})}^{2}. \tag{4.67}\]
For the last term on the right-hand side of (4.65) we have, for \(t\in[0,T]\),
\[2c_{2}\int_{0}^{t}(1+\|v_{n}(s)\|_{H}+\|v(s)\|_{H})\|\psi_{n}(s)\|ds\]
\[\leqslant 2c_{2}T^{\frac{1}{2}}\|\psi_{n}\|_{C([0,T],\ell^{2})}\left(T^{\frac{1}{ 2}}+\|v_{n}\|_{L^{2}(0,T;H)}+\|v\|_{L^{2}(0,T;H)}\right). \tag{4.68}\]
It follows from (4.65)-(4.68) that for all \(t\in[0,T]\),
\[\sup_{0\leqslant r\leqslant t}\|u_{v_{n}}(r)-u_{v}(r)\|^{2}\leqslant 2\int_{ 0}^{t}(-2\gamma+c_{3}\|\delta\|\|v_{n}(s)\|_{H})\sup_{0\leqslant r\leqslant s} \|u_{v_{n}}(r)-u_{v}(r)\|^{2}ds\]
\[+4\|\psi_{n}\|_{C([0,T],\ell^{2})}^{2}+4c_{2}T^{\frac{1}{2}}\|\psi_{n}\|_{C([ 0,T],\ell^{2})}\left(T^{\frac{1}{2}}+\|v_{n}\|_{L^{2}(0,T;H)}+\|v\|_{L^{2}(0,T ;H)}\right). \tag{4.69}\]
By (4.69) and Gronwall's lemma we obtain, for all \(t\in[0,T]\),
\[\sup_{0\leqslant r\leqslant t}\|u_{v_{n}}(r)-u_{v}(r)\|^{2}\leqslant 4\|\psi_{n} \|_{C([0,T],\ell^{2})}^{2}e^{\int_{0}^{t}(-4\gamma+2c_{3}\|\delta\|\|v_{n}(s)\| _{H})ds}\]
\[+4c_{2}T^{\frac{1}{2}}\|\psi_{n}\|_{C([0,T],\ell^{2})}\left(T^{\frac{1}{2}}+ \|v_{n}\|_{L^{2}(0,T;H)}+\|v\|_{L^{2}(0,T;H)}\right)e^{\int_{0}^{t}(-4\gamma+2 c_{3}\|\delta\|\|v_{n}(s)\|_{H})ds},\]
and hence
\[\sup_{0\leqslant r\leqslant T}\|u_{v_{n}}(r)-u_{v}(r)\|^{2}\leqslant 4\|\psi_{n}\|_{C([0,T],\ell^{2})}^{2}e^{-4\gamma T+2c_{3}T^{\frac{1}{2}}\|\delta\|\|v_{n}\|_{L^{2}(0,T;H)}}\]
\[+4c_{2}T^{\frac{1}{2}}\|\psi_{n}\|_{C([0,T],\ell^{2})}\left(T^{\frac{1}{2}}+\|v_{n}\|_{L^{2}(0,T;H)}+\|v\|_{L^{2}(0,T;H)}\right)e^{-4\gamma T+2c_{3}T^{\frac{1}{2}}\|\delta\|\|v_{n}\|_{L^{2}(0,T;H)}}. \tag{4.70}\]
Since \(\{v_{n}\}_{n=1}^{\infty}\) is bounded in \(L^{2}(0,T;H)\), by (4.70) we infer that there exists a positive number \(c_{4}=c_{4}(T)\) independent of \(n\in\mathbb{N}\) such that
\[\sup_{t\in[0,T]}\|u_{v_{n}}(t)-u_{v}(t)\|^{2}\leqslant c_{4}\left(\|\psi_{n} \|_{C([0,T],\ell^{2})}^{2}+\|\psi_{n}\|_{C([0,T],\ell^{2})}\right). \tag{4.71}\]
It follows from (4.58) and (4.71) that
\[\sup_{t\in[0,T]}\|u_{v_{n}}(t)-u_{v}(t)\|^{2}\to 0\ \ \text{as}\ \ n\to\infty,\]
which concludes the proof.
We now define \(\mathcal{G}^{0}:C([0,T],U)\to C([0,T],\ell^{2})\) by, for every \(\xi\in C([0,T],U)\),
\[\mathcal{G}^{0}(\xi)=\left\{\begin{array}{ll}u_{v}&\text{ if }\xi=\int_{0}^{\cdot}v(t)dt\ \ \text{for some }v\in L^{2}(0,T;H);\\ 0,&\text{ otherwise,}\end{array}\right. \tag{4.72}\]
where \(u_{v}\) is the solution of (4.21)-(4.22).
Given \(\phi\in C([0,T],\ell^{2})\), denote by
\[I(\phi)=\inf\left\{\frac{1}{2}\int_{0}^{T}\|v(s)\|_{H}^{2}ds:\ v\in L^{2}(0,T;H), \ u_{v}=\phi\right\}, \tag{4.73}\]
where \(u_{v}\) is the solution of (4.21)-(4.22). Again, by default, the infimum of the empty set is taken to be \(\infty\).
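As a quick sanity check of (4.73), stated only for orientation and not used below: \(I(\phi)=0\) if and only if \(\phi\) is the deterministic solution of the unperturbed equation

\[\frac{d\phi}{dt}+\nu A\phi+f(\phi)+\gamma\phi=g(t),\quad\phi(0)=u_{0}.\]

Indeed, if \(I(\phi)=0\), there exist controls \(v_{n}\) with \(u_{v_{n}}=\phi\) and \(\|v_{n}\|_{L^{2}(0,T;H)}\to 0\); then \(v_{n}\to 0\) weakly in \(L^{2}(0,T;H)\), so Lemma 4.4 gives \(\phi=u_{v_{n}}\to u_{v}\) with \(v=0\) in \(C([0,T],\ell^{2})\), forcing \(\phi\) to be the unperturbed solution. The converse is immediate by taking \(v=0\) in (4.73).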
We will prove that the family \(\{u^{\varepsilon}\}\) satisfies the Laplace principle in \(C([0,T],\ell^{2})\) with the rate function as given by (4.73). To that end, we need to show \(\mathcal{G}^{\varepsilon}\) and \(\mathcal{G}^{0}\) fulfill conditions (**H1**) and (**H2**) in terms of Proposition 2.4. The following lemma confirms that condition (**H2**) is satisfied.
**Lemma 4.5**.: _Suppose that (3.2), (3.7)-(3.9) and (3.13) hold. Then for every \(N<\infty\), the set_
\[K_{N}=\left\{\mathcal{G}^{0}\left(\int_{0}^{\cdot}v(t)dt\right):\ v\in S_{N}\right\} \tag{4.74}\]
_is a compact subset of \(C([0,T],\ell^{2})\), where \(S_{N}\) is the set as defined by (2.4)._
Proof.: By (4.72) and (4.74) we see that
\[K_{N}=\{u_{v}:\ v\in S_{N}\}=\left\{u_{v}:\ v\in L^{2}(0,T;H),\ \int_{0}^{T}\|v(t)\|_{H}^{2}dt\leq N\right\},\]
where \(u_{v}\) is the solution of (4.21)-(4.22).
Let \(\{u_{v_{n}}\}_{n=1}^{\infty}\) be a sequence in \(K_{N}\). Then \(v_{n}\in L^{2}(0,T;H)\) and \(\int_{0}^{T}\|v_{n}(t)\|_{H}^{2}dt\leq N\), which shows that there exists \(v\in S_{N}\) and a subsequence \(\{v_{n_{k}}\}_{k=1}^{\infty}\) such that \(v_{n_{k}}\to v\) weakly in \(L^{2}(0,T;H)\). Then by Lemma 4.4 we find that \(u_{v_{n_{k}}}\to u_{v}\) strongly in \(C([0,T],\ell^{2})\), as desired.
In order to prove (**H1**), we need the following property of the measurable map \(\mathcal{G}^{\varepsilon}\).
**Lemma 4.6**.: _Suppose that (3.2), (3.7)-(3.9) and (3.13) hold, and \(v\in\mathcal{A}_{N}\) for some \(N<\infty\). If \(u_{v}^{\varepsilon}=\mathcal{G}^{\varepsilon}\left(W+\varepsilon^{-\frac{1}{2}}\int_{0}^{\cdot}v(t)dt\right)\), then \(u_{v}^{\varepsilon}\) is the unique solution to_
\[du_{v}^{\varepsilon}+\left(\nu Au_{v}^{\varepsilon}+f(u_{v}^{\varepsilon})+ \gamma u_{v}^{\varepsilon}\right)dt=\left(g(t)+\sigma(t,u_{v}^{\varepsilon})v \right)dt+\sqrt{\varepsilon}\sigma(t,u_{v}^{\varepsilon})dW, \tag{4.75}\]
_with initial condition \(u_{v}^{\varepsilon}(0)=u_{0}\in\ell^{2}\)._
_Furthermore, for each \(R>0\) there exists \(C_{2}=C_{2}(R,T,N)>0\) such that for any \(u_{0}\in\ell^{2}\) with \(\|u_{0}\|\leq R\) and any \(v\in\mathcal{A}_{N}\), the solution \(u_{v}^{\varepsilon}\) satisfies for all \(\varepsilon\in(0,1)\),_
\[\mathbb{E}\left(\|u_{v}^{\varepsilon}\|_{C([0,T],\ell^{2})}^{2}\right)\leq C_ {2}. \tag{4.76}\]
Proof.: Given \(\varepsilon>0\), since \(v\in\mathcal{A}_{N}\), by Girsanov's theorem we know that \(\widehat{W}=W+\varepsilon^{-\frac{1}{2}}\int_{0}^{\cdot}v(t)dt\) is a cylindrical Wiener process with identity covariance operator under the probability \(\widehat{P}^{\varepsilon}_{v}\) as given by
\[\frac{d\widehat{P}^{\varepsilon}_{v}}{dP}=\exp\left\{-\varepsilon^{-\frac{1}{2 }}\int_{0}^{T}v(t)dW-\frac{1}{2}\varepsilon^{-1}\int_{0}^{T}\|v(t)\|_{H}^{2} dt\right\},\]
which implies that \(u^{\varepsilon}_{v}=\mathcal{G}^{\varepsilon}(\widehat{W})=\mathcal{G}^{ \varepsilon}\left(W+\varepsilon^{-\frac{1}{2}}\int_{0}^{\cdot}v(t)dt\right)\) is the unique solution of (3.19)-(3.20) with \(W\) replaced by \(\widehat{W}\). In other words, \(u^{\varepsilon}_{v}\) is the unique solution of (4.75) with initial condition \(u^{\varepsilon}_{v}(0)=u_{0}\). It remains to show (4.76).
By (4.75) and Ito's formula, we have for all \(t\in[0,T]\), \(P\)-almost surely,
\[\|u^{\varepsilon}_{v}(t)\|^{2}+2\nu\int_{0}^{t}\|Bu^{\varepsilon}_{v}(s)\|^{2 }ds+2\int_{0}^{t}(f(u^{\varepsilon}_{v}(s)),u^{\varepsilon}_{v}(s))ds+2\gamma \,\int_{0}^{t}\|u^{\varepsilon}_{v}(s)\|^{2}ds\]
\[=\|u_{0}\|^{2}+2\int_{0}^{t}(g(s),u^{\varepsilon}_{v}(s))ds+2\int_{0}^{t}( \sigma(s,u^{\varepsilon}_{v}(s))v(s),u^{\varepsilon}_{v}(s))ds\]
\[+\varepsilon\int_{0}^{t}\|\sigma(s,u^{\varepsilon}_{v}(s))\|_{L^{2}(H, \ell^{2})}^{2}ds+2\sqrt{\varepsilon}\int_{0}^{t}(u^{\varepsilon}_{v}(s), \sigma(s,u^{\varepsilon}_{v}(s))dW). \tag{4.77}\]
By (3.3) and (3.5) we know
\[2\nu\int_{0}^{t}\|Bu^{\varepsilon}_{v}(s)\|^{2}ds+2\int_{0}^{t}(f(u^{ \varepsilon}_{v}(s)),u^{\varepsilon}_{v}(s))ds\geqslant 0. \tag{4.78}\]
We also have
\[2\int_{0}^{t}|(g(s),u^{\varepsilon}_{v}(s))|ds\leqslant\int_{0}^{t}\|u^{ \varepsilon}_{v}(s)\|^{2}ds+\int_{0}^{t}\|g(s)\|^{2}ds. \tag{4.79}\]
By (3.18) and (3.11) we obtain
\[2\int_{0}^{t}|(\sigma(s,u^{\varepsilon}_{v}(s))v(s),u^{\varepsilon}_{v}(s))| ds\leqslant 2\int_{0}^{t}\|\sigma(s,u^{\varepsilon}_{v}(s))\|_{L(H,\ell^{2})} \|u^{\varepsilon}_{v}(s)\|\|v(s)\|_{H}ds\]
\[\leqslant 2\left(\int_{0}^{t}\|\sigma(s,u^{\varepsilon}_{v}(s))\|_{L(H,\ell^{2 })}^{2}\|u^{\varepsilon}_{v}(s)\|^{2}ds\right)^{\frac{1}{2}}\left(\int_{0}^{t }\|v(s)\|_{H}^{2}ds\right)^{\frac{1}{2}}\]
\[\leqslant 2N^{\frac{1}{2}}\sup_{0\leqslant s\leqslant t}\|u^{\varepsilon}_{v}(s) \|\left(\int_{0}^{t}\|\sigma(s,u^{\varepsilon}_{v}(s))\|_{L(H,\ell^{2})}^{2} ds\right)^{\frac{1}{2}}\]
\[\leqslant\frac{1}{4}\sup_{0\leqslant s\leqslant t}\|u^{\varepsilon}_{v}(s)\|^ {2}+4N\int_{0}^{t}\|\sigma(s,u^{\varepsilon}_{v}(s))\|_{L(H,\ell^{2})}^{2}ds\]
\[\leqslant\frac{1}{4}\sup_{0\leqslant s\leqslant t}\|u^{\varepsilon}_{v}(s)\|^ {2}+4N\sum_{k=1}^{\infty}\int_{0}^{t}\|h_{k}(s)+\sigma_{k}(u^{\varepsilon}_{v}( s))\|^{2}ds\]
\[\leqslant\frac{1}{4}\sup_{0\leqslant s\leqslant t}\|u^{\varepsilon}_{v}(s) \|^{2}+8NT\sum_{k=1}^{\infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}+8N\sum_{k= 1}^{\infty}\int_{0}^{t}\|\sigma_{k}(u^{\varepsilon}_{v}(s))\|^{2}ds\]
\[\leq\frac{1}{4}\sup_{0\leq s\leq t}\|u_{v}^{\varepsilon}(s)\|^{2}+8NT\sum_{k=1}^{ \infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}\]
\[+16N\alpha^{2}\|\delta\|^{2}T+16N\alpha^{2}\|\delta\|^{2}\int_{0}^{t}\|u_{v}^{ \varepsilon}(s)\|^{2}ds. \tag{4.80}\]
Similarly, by (3.18) and (3.11) we have for all \(\varepsilon\in(0,1)\),
\[\varepsilon\int_{0}^{t}\|\sigma(s,u_{v}^{\varepsilon}(s))\|_{L^{2}(H,\ell^{2} )}^{2}ds\leq 2T\sum_{k=1}^{\infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}+4 \alpha^{2}\|\delta\|^{2}T+4\alpha^{2}\|\delta\|^{2}\int_{0}^{t}\|u_{v}^{ \varepsilon}(s)\|^{2}ds. \tag{4.81}\]
It follows from (4.77)-(4.81) that for all \(t\in[0,T]\), \(P\)-almost surely,
\[\|u_{v}^{\varepsilon}(t)\|^{2}\leq\|u_{0}\|^{2}+\left(1-2\gamma+4\alpha^{2}\| \delta\|^{2}(1+4N)\right)\int_{0}^{t}\|u_{v}^{\varepsilon}(s)\|^{2}ds\]
\[+\frac{1}{4}\sup_{0\leq s\leq t}\|u_{v}^{\varepsilon}(s)\|^{2}+4\alpha^{2}\| \delta\|^{2}T(1+4N)+2T(1+4N)\sum_{k=1}^{\infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{ 2})}^{2}\]
\[+\int_{0}^{t}\|g(s)\|^{2}ds+2\sqrt{\varepsilon}\int_{0}^{t}(u_{v}^{\varepsilon }(s),\sigma(s,u_{v}^{\varepsilon}(s))dW),\]
which implies that for all \(t\in[0,T]\),
\[\frac{3}{4}\mathbb{E}\left(\sup_{0\leq s\leq t}\|u_{v}^{\varepsilon}(s)\|^{2} \right)\leq\|u_{0}\|^{2}+\left(1-2\gamma+4\alpha^{2}\|\delta\|^{2}(1+4N)\right) \mathbb{E}\left(\int_{0}^{t}\|u_{v}^{\varepsilon}(s)\|^{2}ds\right)\]
\[+4\alpha^{2}\|\delta\|^{2}T(1+4N)+2T(1+4N)\sum_{k=1}^{\infty}\|h_{k}\|_{L^{ \infty}(0,T;\ell^{2})}^{2}\]
\[+\int_{0}^{t}\|g(s)\|^{2}ds+\mathbb{E}\left(\sup_{0\leq r\leq t}\left|\int_{0}^{r}2\sqrt{\varepsilon}(u_{v}^{\varepsilon}(s),\sigma(s,u_{v}^{\varepsilon}(s))dW(s))\right|\right). \tag{4.82}\]
For the last term in (4.82), by the Burkholder inequality we get for \(\varepsilon\in(0,1)\),
\[\mathbb{E}\left(\sup_{0\leq r\leq t}\left|\int_{0}^{r}2\sqrt{\varepsilon}(u_{ v}^{\varepsilon}(s),\sigma(s,u_{v}^{\varepsilon}(s))dW(s))\right|\right)\]
\[\leq 6\mathbb{E}\left(\left(\int_{0}^{t}\|u_{v}^{\varepsilon}(s)\|^{2}\|\sigma( s,u_{v}^{\varepsilon}(s))\|_{L^{2}(H,\ell^{2})}^{2}ds\right)^{\frac{1}{2}}\right)\]
\[\leq\frac{1}{4}\mathbb{E}\left(\sup_{0\leq s\leq t}\|u_{v}^{\varepsilon}(s)\| ^{2}\right)+36\mathbb{E}\left(\int_{0}^{t}\|\sigma(s,u_{v}^{\varepsilon}(s)) \|_{L^{2}(H,\ell^{2})}^{2}ds\right),\]
which along with (4.81) shows that
\[\mathbb{E}\left(\sup_{0\leq r\leq t}\left|\int_{0}^{r}2\sqrt{\varepsilon}(u_{v}^{\varepsilon}(s),\sigma(s,u_{v}^{\varepsilon}(s))dW(s))\right|\right)\leq\frac{1}{4}\mathbb{E}\left(\sup_{0\leq s\leq t}\|u_{v}^{\varepsilon}(s)\|^{2}\right)\]
\[+72T\sum_{k=1}^{\infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}+144\alpha^{2}\| \delta\|^{2}T+144\alpha^{2}\|\delta\|^{2}\mathbb{E}\left(\int_{0}^{t}\|u_{v}^{ \varepsilon}(s)\|^{2}ds\right). \tag{4.83}\]
By (4.82) and (4.83) we get for all \(t\in[0,T]\),
\[\mathbb{E}\left(\sup_{0\leq s\leq t}\|u_{v}^{\varepsilon}(s)\|^{2}\right)\leq 2\|u_{0}\|^{2}+2\left(1-2\gamma+4\alpha^{2}\|\delta\|^{2}(37+4N)\right)\int_{0}^{t}\mathbb{E}\left(\sup_{0\leq r\leq s}\|u_{v}^{\varepsilon}(r)\|^{2}\right)ds\]
\[+8\alpha^{2}\|\delta\|^{2}T(37+4N)+4T(37+4N)\sum_{k=1}^{\infty}\|h_{k}\|_{L^{ \infty}(0,T;\ell^{2})}^{2}+2\int_{0}^{T}\|g(s)\|^{2}ds. \tag{4.84}\]
By (4.84) and Gronwall's lemma we obtain for all \(t\in[0,T]\),
\[\mathbb{E}\left(\sup_{0\leq s\leq t}\|u_{v}^{\varepsilon}(s)\|^{2}\right)\leq c _{1}e^{c_{2}t}, \tag{4.85}\]
where
\[c_{1}=2\|u_{0}\|^{2}+8\alpha^{2}\|\delta\|^{2}T(37+4N)+4T(37+4N)\sum_{k=1}^{ \infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}+2\int_{0}^{T}\|g(s)\|^{2}ds,\]
and
\[c_{2}=2\left(1-2\gamma+4\alpha^{2}\|\delta\|^{2}(37+4N)\right).\]
Then (4.76) follows from (4.85) for \(t=T\).
We now prove \(\mathcal{G}^{\varepsilon}\) and \(\mathcal{G}^{0}\) satisfy (**H1**).
**Lemma 4.7**.: _Suppose that (3.2), (3.7)-(3.9) and (3.13) hold, and \(\{v^{\varepsilon}\}\subseteq\mathcal{A}_{N}\) for some \(N<\infty\). If \(\{v^{\varepsilon}\}\) converges in distribution to \(v\) as \(S_{N}\)-valued random variables, then \(\mathcal{G}^{\varepsilon}\left(W+\varepsilon^{-\frac{1}{2}}\int_{0}^{\cdot}v^{\varepsilon}(t)dt\right)\) converges to \(\mathcal{G}^{0}\left(\int_{0}^{\cdot}v(t)dt\right)\) in \(C([0,T],\ell^{2})\) in distribution._
Proof.: Let \(u_{v^{\varepsilon}}^{\varepsilon}=\mathcal{G}^{\varepsilon}\left(W+\varepsilon^{-\frac{1}{2}}\int_{0}^{\cdot}v^{\varepsilon}(t)dt\right)\). Then by Lemma 4.6 we see that \(u_{v^{\varepsilon}}^{\varepsilon}\) is the solution to the equation:
\[du_{v^{\varepsilon}}^{\varepsilon}+\left(\nu Au_{v^{\varepsilon}}^{ \varepsilon}+f(u_{v^{\varepsilon}}^{\varepsilon})+\gamma u_{v^{\varepsilon}} ^{\varepsilon}\right)dt=\left(g(t)+\sigma(t,u_{v^{\varepsilon}}^{\varepsilon} )v^{\varepsilon}\right)dt+\sqrt{\varepsilon}\sigma(t,u_{v^{\varepsilon}}^{ \varepsilon})dW, \tag{4.86}\]
with initial data
\[u_{v^{\varepsilon}}^{\varepsilon}(0)=u_{0}\in\ell^{2}. \tag{4.87}\]
Let \(u_{v}=\mathcal{G}^{0}\left(\int_{0}^{\cdot}v(t)dt\right)\). Then \(u_{v}\) is the solution to (4.21)-(4.22). So we only need to show \(u_{v^{\varepsilon}}^{\varepsilon}\) converges to \(u_{v}\) in \(C([0,T],\ell^{2})\) in distribution as \(\varepsilon\to 0\). To that end, we first establish the convergence of \(u_{v^{\varepsilon}}^{\varepsilon}-u_{v^{\varepsilon}}\) with \(u_{v^{\varepsilon}}=\mathcal{G}^{0}\left(\int_{0}^{\cdot}v^{\varepsilon}(t)dt\right)\) as in [11].
By (4.21)-(4.22) we have
\[\frac{du_{v^{\varepsilon}}}{dt}=-\nu Au_{v^{\varepsilon}}-f(u_{v^{\varepsilon}})- \gamma u_{v^{\varepsilon}}+g(t)+\sigma(t,u_{v^{\varepsilon}})v^{\varepsilon}, \tag{4.88}\]
with initial data
\[u_{v^{\varepsilon}}(0)=u_{0}\in\ell^{2}. \tag{4.89}\]
By (4.86)-(4.89) we have
\[d(u^{\varepsilon}_{v^{\varepsilon}}-u_{v^{\varepsilon}})+\nu A(u^{\varepsilon} _{v^{\varepsilon}}-u_{v^{\varepsilon}})dt+(f(u^{\varepsilon}_{v^{\varepsilon} })-f(u_{v^{\varepsilon}}))dt+\gamma(u^{\varepsilon}_{v^{\varepsilon}}-u_{v^{ \varepsilon}})dt\]
\[=(\sigma(t,u^{\varepsilon}_{v^{\varepsilon}})v^{\varepsilon}-\sigma(t,u_{v^{ \varepsilon}})v^{\varepsilon})\,dt+\sqrt{\varepsilon}\sigma(t,u^{\varepsilon }_{v^{\varepsilon}})dW, \tag{4.90}\]
with \(u^{\varepsilon}_{v^{\varepsilon}}(0)-u_{v^{\varepsilon}}(0)=0\). By (4.90) and Ito's formula we obtain
\[\|u^{\varepsilon}_{v^{\varepsilon}}(t)-u_{v^{\varepsilon}}(t)\|^{2}+2\nu \int_{0}^{t}\|B(u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)) \|^{2}ds+2\int_{0}^{t}(f(u^{\varepsilon}_{v^{\varepsilon}})-f(u_{v^{ \varepsilon}}),\ u^{\varepsilon}_{v^{\varepsilon}}-u_{v^{\varepsilon}})ds\]
\[+2\gamma\int_{0}^{t}\|u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\|^{2}ds=2\int_{0}^{t}\left(\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))v^{\varepsilon}(s)-\sigma(s,u_{v^{\varepsilon}}(s))v^{\varepsilon}(s),\ u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\right)ds\]
\[+\varepsilon\int_{0}^{t}\|\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))\|^{2 }_{L_{2}(H,\ell^{2})}ds+2\sqrt{\varepsilon}\int_{0}^{t}\left(u^{\varepsilon}_{ v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s),\ \sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))dW\right). \tag{4.91}\]
By (3.5) and (4.91) we obtain
\[\|u^{\varepsilon}_{v^{\varepsilon}}(t)-u_{v^{\varepsilon}}(t)\|^{2}+2\gamma \int_{0}^{t}\|u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\|^ {2}ds\]
\[\leq 2\int_{0}^{t}\left(\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))v^{\varepsilon}(s)-\sigma(s,u_{v^{\varepsilon}}(s))v^{\varepsilon}(s),\ u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\right)ds\]
\[+\varepsilon\int_{0}^{t}\|\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))\|^{2 }_{L_{2}(H,\ell^{2})}ds+2\sqrt{\varepsilon}\int_{0}^{t}\left(u^{\varepsilon}_ {v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s),\ \sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))dW\right). \tag{4.92}\]
For a fixed \(M>0\), define a stopping time
\[\tau^{\varepsilon}=\inf\left\{t\geq 0:\|u^{\varepsilon}_{v^{\varepsilon}}(t)\| \geq M\right\}\wedge T. \tag{4.93}\]
As usual, the infimum of the empty set is taken to be \(\infty\). Since \(\gamma\leq 0\), by (4.92)-(4.93) we get, for all \(t\in[0,T]\),
\[\sup_{0\leq r\leq t}\|u^{\varepsilon}_{v^{\varepsilon}}(r\wedge\tau^{ \varepsilon})-u_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})\|^{2}\leq-2 \gamma\int_{0}^{t\wedge\tau^{\varepsilon}}\|u^{\varepsilon}_{v^{\varepsilon}} (s)-u_{v^{\varepsilon}}(s)\|^{2}ds\]
\[+2\int_{0}^{t\wedge\tau^{\varepsilon}}|\left(\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))v^{\varepsilon}(s)-\sigma(s,u_{v^{\varepsilon}}(s))v^{\varepsilon}(s),\ u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\right)|ds\]
\[+\varepsilon\int_{0}^{t\wedge\tau^{\varepsilon}}\|\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))\|^{2}_{L_{2}(H,\ell^{2})}ds+2\sqrt{\varepsilon}\sup_{0\leq r\leq t}\left|\int_{0}^{r\wedge\tau^{\varepsilon}}(u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s),\ \sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))dW)\right|. \tag{4.94}\]
For the first term on the right-hand side of (4.94) we have
\[-2\gamma\int_{0}^{t\wedge\tau^{\varepsilon}}\|u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\|^{2}ds\leq-2\gamma\int_{0}^{t}\sup_{0\leq r\leq s}\|u^{\varepsilon}_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})-u_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})\|^{2}ds. \tag{4.95}\]
To deal with the second term on the right-hand side of (4.94), we find from (4.25) that there exists a positive number \(c_{1}=c_{1}(N)\) such that \(P\)-almost surely,
\[\sup_{\varepsilon\in(0,1]}\ \sup_{t\in[0,T]}\|u_{v^{\varepsilon}}(t)\|\leq c_{1}. \tag{4.96}\]
Then by (3.12), (3.17), (4.93) and (4.96) we infer that there exists \(c_{2}=c_{2}(N,M)>0\) such that
\[2\int_{0}^{t\wedge\tau^{\varepsilon}}|\left(\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))v^{\varepsilon}(s)-\sigma(s,u_{v^{\varepsilon}}(s))v^{\varepsilon}(s),\ u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\right)|ds\]
\[\leq 2\int_{0}^{t\wedge\tau^{\varepsilon}}\|\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))v^{\varepsilon}(s)-\sigma(s,u_{v^{\varepsilon}}(s))v^{\varepsilon}(s)\|\|u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\|ds\]
\[\leq c_{2}\|\delta\|\int_{0}^{t\wedge\tau^{\varepsilon}}\|v^{\varepsilon}(s)\|_{H}\|u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s)\|^{2}ds\]
\[\leq c_{2}\|\delta\|\int_{0}^{t}\|v^{\varepsilon}(s)\|_{H}\sup_{0\leq r\leq s }\|u^{\varepsilon}_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})-u_{v^{ \varepsilon}}(r\wedge\tau^{\varepsilon})\|^{2}ds. \tag{4.97}\]
It follows from (4.94), (4.95) and (4.97) that for all \(t\in[0,T]\), \(P\)-almost surely,
\[\sup_{0\leq r\leq t}\|u^{\varepsilon}_{v^{\varepsilon}}(r\wedge\tau^{ \varepsilon})-u_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})\|^{2}\]
\[\leq\int_{0}^{t}(c_{2}\|\delta\|\|v^{\varepsilon}(s)\|_{H}-2\gamma)\sup_{0\leq r\leq s}\|u^{\varepsilon}_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})-u_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})\|^{2}ds\]
\[+\varepsilon\int_{0}^{T\wedge\tau^{\varepsilon}}\|\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))\|^{2}_{L_{2}(H,\ell^{2})}ds+2\sqrt{\varepsilon}\sup_{0\leq r\leq T}\left|\int_{0}^{r\wedge\tau^{\varepsilon}}(u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s),\ \sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))dW)\right|. \tag{4.98}\]
By (4.98) and Gronwall's lemma, we get for all \(t\in[0,T]\), \(P\)-almost surely,
\[\sup_{0\leq r\leq t}\|u^{\varepsilon}_{v^{\varepsilon}}(r\wedge\tau^{ \varepsilon})-u_{v^{\varepsilon}}(r\wedge\tau^{\varepsilon})\|^{2}\]
\[\leq\varepsilon c_{3}\int_{0}^{T\wedge\tau^{\varepsilon}}\|\sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))\|^{2}_{L_{2}(H,\ell^{2})}ds+2\sqrt{\varepsilon}c_{3}\sup_{0\leq r\leq T}\left|\int_{0}^{r\wedge\tau^{\varepsilon}}(u^{\varepsilon}_{v^{\varepsilon}}(s)-u_{v^{\varepsilon}}(s),\ \sigma(s,u^{\varepsilon}_{v^{\varepsilon}}(s))dW)\right|, \tag{4.99}\]
where \(c_{3}=e^{c_{2}\|\delta\|T^{\frac{1}{2}}N^{\frac{1}{2}}-2\gamma T}\).
For the first term on the right-hand side of (4.99), by (3.11) and (3.18) we get
\[\varepsilon\int_{0}^{T\wedge\tau^{\varepsilon}}\|\sigma(s,u_{v^{ \varepsilon}}^{\varepsilon}(s))\|_{L_{2}(H,\ell^{2})}^{2}ds\leq 2 \varepsilon\sum_{k=1}^{\infty}\int_{0}^{T\wedge\tau^{\varepsilon}}\left(\|h_{k }(s)\|^{2}+\|\sigma_{k}(u_{v^{\varepsilon}}^{\varepsilon}(s))\|^{2}\right)ds\] \[\qquad\qquad\leq 2\varepsilon T\sum_{k=1}^{\infty}\|h_{k}\|_{L^{ \infty}(0,T;\ell^{2})}^{2}+4\varepsilon\alpha^{2}\|\delta\|^{2}T+4\varepsilon \alpha^{2}\|\delta\|^{2}\int_{0}^{T\wedge\tau^{\varepsilon}}\|u_{v^{ \varepsilon}}^{\varepsilon}(s)\|^{2}ds\] \[\qquad\qquad\qquad\leq 2\varepsilon T\sum_{k=1}^{\infty}\|h_{k}\|_{L^ {\infty}(0,T;\ell^{2})}^{2}+4\varepsilon\alpha^{2}\|\delta\|^{2}T+4 \varepsilon\alpha^{2}\|\delta\|^{2}M^{2}T. \tag{4.100}\]
By (4.100) we see that
\[\lim_{\varepsilon\to 0}\varepsilon c_{3}\int_{0}^{T\wedge\tau^{\varepsilon}} \|\sigma(s,u_{v^{\varepsilon}}^{\varepsilon}(s))\|_{L_{2}(H,\ell^{2})}^{2} ds=0,\quad\text{P-almost surely}. \tag{4.101}\]
For the last term on the right-hand side of (4.99), by (4.96), (4.100) and Doob's maximal inequality we obtain
\[\mathbb{E}\left(\sup_{0\leq r\leq T}\left|\int_{0}^{r\wedge\tau^{\varepsilon}}2\sqrt{\varepsilon}\left(u_{v^{\varepsilon}}^{\varepsilon}(s)-u_{v^{\varepsilon}}(s),\ \sigma(s,u_{v^{\varepsilon}}^{\varepsilon}(s))dW\right)\right|^{2}\right)\] \[\qquad\qquad\leq 16\varepsilon(M+c_{1})^{2}\mathbb{E}\left(\int_{0}^{T\wedge\tau^{\varepsilon}}\|\sigma(s,u_{v^{\varepsilon}}^{\varepsilon}(s))\|_{L_{2}(H,\ell^{2})}^{2}ds\right)\] \[\qquad\qquad\leq 32\varepsilon(M+c_{1})^{2}\left(T\sum_{k=1}^{\infty}\|h_{k}\|_{L^{\infty}(0,T;\ell^{2})}^{2}+2\alpha^{2}\|\delta\|^{2}T+2\alpha^{2}\|\delta\|^{2}M^{2}T\right),\]
and hence
\[\lim_{\varepsilon\to 0}\mathbb{E}\left(\sup_{0\leq r\leq T}\left|2\sqrt{\varepsilon}c_{3}\int_{0}^{r\wedge\tau^{\varepsilon}}\left(u_{v^{\varepsilon}}^{\varepsilon}(s)-u_{v^{\varepsilon}}(s),\ \sigma(s,u_{v^{\varepsilon}}^{\varepsilon}(s))dW\right)\right|^{2}\right)=0. \tag{4.102}\]
By (4.99), (4.101) and (4.102), we get
\[\lim_{\varepsilon\to 0}\sup_{0\leq t\leq T}\|u_{v^{\varepsilon}}^{\varepsilon}(t \wedge\tau^{\varepsilon})-u_{v^{\varepsilon}}(t\wedge\tau^{\varepsilon})\|^{ 2}=0\quad\text{in probability}. \tag{4.103}\]
On the other hand, by (4.93) and (4.76) we have for all \(\varepsilon\in(0,1)\),
\[P(\tau^{\varepsilon}<T)=P\left(\sup_{t\in[0,T]}\|u_{v^{\varepsilon}}^{ \varepsilon}(t)\|>M\right)\leq\frac{1}{M^{2}}\mathbb{E}\left(\sup_{t\in[0,T]} \|u_{v^{\varepsilon}}^{\varepsilon}(t)\|^{2}\right)\leq\frac{C_{2}}{M^{2}}, \tag{4.104}\]
where \(C_{2}=C_{2}(T,N)>0\). It follows from (4.104) that for every \(\eta>0\),
\[P\left(\sup_{0\leq t\leq T}\|u_{v^{\varepsilon}}^{\varepsilon}(t)-u_{v^{ \varepsilon}}(t)\|>\eta\right)\]
\[\leqslant P\left(\sup_{0\leqslant t\leqslant T}\|u_{v^{\varepsilon}}^{ \varepsilon}(t)-u_{v^{\varepsilon}}(t)\|>\eta,\ \tau^{\varepsilon}=T\right)+P\left(\sup_{0\leqslant t\leqslant T}\|u_{v^{ \varepsilon}}^{\varepsilon}(t)-u_{v^{\varepsilon}}(t)\|>\eta,\ \tau^{\varepsilon}<T\right)\] \[\leqslant P\left(\sup_{0\leqslant t\leqslant T}\|u_{v^{ \varepsilon}}^{\varepsilon}(t\wedge\tau^{\varepsilon})-u_{v^{\varepsilon}}(t \wedge\tau^{\varepsilon})\|>\eta\right)+\frac{C_{2}}{M^{2}}.\]
First taking the limit as \(\varepsilon\to 0\), and then as \(M\to\infty\), we get from (4.103) that
\[\lim_{\varepsilon\to 0}(u_{v^{\varepsilon}}^{\varepsilon}-u_{v^{\varepsilon}})=0 \ \ \text{in}\ C([0,T],\ell^{2})\ \ \text{in probability}. \tag{4.105}\]
Since \(\{v^{\varepsilon}\}\) converges in distribution to \(v\) as \(S_{N}\)-valued random elements, by Skorokhod's theorem, there exists a probability space \((\widetilde{\Omega},\widetilde{\mathcal{F}},\widetilde{P})\) and \(S_{N}\)-valued random variables \(\widetilde{v}^{\varepsilon}\) and \(\widetilde{v}\) on \((\widetilde{\Omega},\widetilde{\mathcal{F}},\widetilde{P})\) such that \(\widetilde{v}^{\varepsilon}\) and \(\widetilde{v}\) have the same distribution laws as \(v^{\varepsilon}\) and \(v\) respectively, and \(\{\widetilde{v}^{\varepsilon}\}\) converges to \(\widetilde{v}\) almost surely in \(S_{N}\) which is equipped with the weak topology. Then by Lemma 4.4 we infer that
\[u_{\widetilde{v}^{\varepsilon}}\to u_{\widetilde{v}}\ \text{in}\ C([0,T],\ell^{2})\ \text{almost surely},\]
and hence
\[u_{\widetilde{v}^{\varepsilon}}\to u_{\widetilde{v}}\ \text{in}\ C([0,T],\ell^{2})\ \text{in distribution}. \tag{4.106}\]
By (4.106) we know
\[u_{v^{\varepsilon}}\to u_{v}\ \text{in}\ C([0,T],\ell^{2})\ \text{in distribution}. \tag{4.107}\]
By (4.105) and (4.107) we immediately get
\[u_{v^{\varepsilon}}^{\varepsilon}\to u_{v}\ \ \text{in}\ C([0,T],\ell^{2})\ \text{in distribution},\]
which concludes the proof.
As an immediate consequence of Proposition 2.4 and Lemmas 4.5 and 4.7, we finally obtain Theorem 4.1, the main result of this paper.
|
2306.00431 | Every Bit Counts in Consensus | Consensus enables n processes to agree on a common valid L-bit value, despite
t < n/3 processes being faulty and acting arbitrarily. A long line of work has
been dedicated to improving the worst-case communication complexity of
consensus in partial synchrony. This has recently culminated in the worst-case
word complexity of O(n^2). However, the worst-case bit complexity of the best
solution is still O(n^2 L + n^2 kappa) (where kappa is the security parameter),
far from the \Omega(n L + n^2) lower bound. The gap is significant given the
practical use of consensus primitives, where values typically consist of
batches of large size (L > n).
This paper shows how to narrow the aforementioned gap while achieving optimal
linear latency. Namely, we present a new algorithm, DARE (Disperse, Agree,
REtrieve), that improves upon the O(n^2 L) term via a novel dispersal
primitive. DARE achieves O(n^{1.5} L + n^{2.5} kappa) bit complexity, an
effective sqrt{n}-factor improvement over the state-of-the-art (when L > n
kappa). Moreover, we show that employing heavier cryptographic primitives,
namely STARK proofs, allows us to devise DARE-Stark, a version of DARE which
achieves the near-optimal bit complexity of O(n L + n^2 poly(kappa)). Both DARE
and DARE-Stark achieve optimal O(n) latency. | Pierre Civit, Seth Gilbert, Rachid Guerraoui, Jovan Komatovic, Matteo Monti, Manuel Vidigueira | 2023-06-01T08:18:16Z | http://arxiv.org/abs/2306.00431v2 | # Every Bit Counts in Consensus
###### Abstract
Consensus enables \(n\) processes to agree on a common valid \(L\)-bit value, despite \(t<n/3\) processes being faulty and acting arbitrarily. A long line of work has been dedicated to improving the worst-case communication complexity of consensus in partial synchrony. This has recently culminated in the worst-case _word_ complexity of \(O(n^{2})\). However, the worst-case _bit_ complexity of the best solution is still \(O(n^{2}L+n^{2}\kappa)\) (where \(\kappa\) is the security parameter), far from the \(\Omega(nL+n^{2})\) lower bound. The gap is significant given the practical use of consensus primitives, where values typically consist of batches of large size (\(L>n\)).
This paper shows how to narrow the aforementioned gap. Namely, we present a new algorithm, DARE (Disperse, Agree, REtrieve), that improves upon the \(O(n^{2}L)\) term via a novel dispersal primitive. DARE achieves \(O(n^{1.5}L+n^{2.5}\kappa)\) bit complexity, an effective \(\sqrt{n}\)-factor improvement over the state-of-the-art (when \(L>n\kappa\)). Moreover, we show that employing heavier cryptographic primitives, namely STARK proofs, allows us to devise DARE-Stark, a version of DARE which achieves the near-optimal bit complexity of \(O(nL+n^{2}\textit{poly}(\kappa))\). Both DARE and DARE-Stark achieve optimal \(O(n)\) worst-case latency.
Byzantine consensus, Bit complexity, Latency
* _Agreement:_ No two correct processes decide different values.
* _Termination:_ All correct processes eventually decide.
* _(External) Validity:_ If a correct process decides a value \(v\), then \(\mathsf{valid}(v)=\textit{true}\).
Here, \(\mathsf{valid}(\cdot)\) is any predefined logical predicate that indicates whether or not a value is valid.1
Footnote 1: For traditional notions of validity, admissible values depend on the proposals of correct processes, e.g., if all correct processes start with value \(v\), then \(v\) is the only admissible decision. In this paper, we focus on external validity [25], with the observation that any other validity condition can be achieved by reduction (as shown in [37]).
This paper focuses on improving the worst-case bit complexity of deterministic Byzantine consensus in standard partial synchrony [51]. The worst-case lower bound is \(\Omega(nL+n^{2})\) exchanged bits. This considers all bits sent by correct processes from the moment the network becomes synchronous, i.e., GST (the number of messages sent by correct processes before GST is unbounded due to asynchrony [84]). The \(nL\) term comes from the fact that all \(n\) processes must receive the decided value at least once, while the \(n^{2}\) term is implied by the seminal Dolev-Reischuk lower bound [47, 84] on the number of messages. Recently, a long line of work has culminated in Byzantine consensus algorithms which achieve optimal \(O(n^{2})\) worst-case _word_ complexity, where a word is any constant number of values, signatures or hashes [36, 65]. However, to the best of our knowledge, no existing algorithm beats the \(O(n^{2}L+n^{2}\kappa)\) bound on the worst-case bit complexity, where \(\kappa\) denotes the security parameter (e.g., the number of bits per hash or signature). The \(n^{2}L\) term presents a linear gap with respect to the lower bound.
Does this gap matter? In practice, yes. In many cases, consensus protocols are used to agree on a large batch of inputs [76, 87, 27, 41, 85]. For example, a block in a blockchain amalgamates many transactions. Alternatively, imagine that \(n\) parties each propose a value, and the protocol agrees on a set of these values. (This is often known as vector consensus [14, 15, 49, 79, 48, 39].) Typically, the hope is that by batching values/transactions, we can improve the total throughput of the system. Unfortunately, with current consensus protocols, larger batches do not necessarily yield better performance when applied directly [45]. This does not mean that batches are necessarily ineffective. In fact, a recent line of work has achieved significant practical improvements to consensus throughput by entirely focusing on the efficient dissemination of large batches (i.e., large values), so-called "mempool" protocols [41, 85, 27]. While these solutions work only optimistically (they perform well in periods of synchrony and without faults), they show that a holistic focus on _bandwidth_ usage is fundamental (i.e., bit complexity, and not just word complexity).
### Contributions
We introduce DARE (Disperse, Agree, REtrieve), a new Byzantine consensus algorithm for partial synchrony with worst-case \(O(n^{1.5}L+n^{2.5}\kappa)\) bit complexity and optimal worst-case \(O(n)\) latency. Moreover, by enriching DARE with heavier cryptographic primitives, namely STARK proofs, we close the gap near-optimally using only \(O(nL+n^{2}poly(\kappa))\) bits. Notice that, if you think of \(L\) as a batch of \(n\) transactions of size \(s\), the average communication cost of agreeing on a single transaction is only \(\tilde{O}(ns)\) bits - the same as a best-effort (unsafe) broadcast [24] of that transaction!
To the best of our knowledge, DARE is the first partially synchronous algorithm to achieve \(o(n^{2}L)\) bit complexity and \(O(n)\) latency. The main idea behind DARE is to separate the problem of agreeing from the problem of retrieving an agreed-upon value (see §1.2 for more details). Figure 1 places DARE in the context of efficient consensus algorithms.
### Technical Overview
**The "curse" of GST.** To understand the problem that DARE solves, we must first understand why existing algorithms suffer from an \(O(n^{2}L)\) term. "Leader-based" algorithms (such as the state-of-the-art [74, 36, 65]) solve consensus by organizing processes into a rotating sequence of views, each with a different leader. A view's leader broadcasts its value \(v\) and drives other processes to decide it. If all correct processes are timely and the leader is correct, \(v\) is decided.
The main issue is that, if synchrony is only guaranteed _eventually_ (partial synchrony [51]), a view might fail to reach agreement even if its leader is correct: the leader could just be slow (i.e., not yet synchronous). The inability to distinguish the two scenarios forces protocols to change views even if the current leader is merely "suspected" of being faulty. Since there can be up to \(t\) faulty leaders, there must be at least \(t+1\) different views. However, this comes at the risk of sending unnecessary messages if the suspicion proves false, which is what happens in the worst case.
Suppose that, before GST (i.e., the point in time the system becomes synchronous), the first \(t\) leaders are correct, but "go to sleep" (slow down) immediately before broadcasting their values, and receive no more messages until GST \(+\delta\) due to asynchrony (\(\delta\) is the maximum message delay after GST). Once GST is reached, all \(t\) processes wake up and broadcast their value, for a total of \(O(tnL)=O(n^{2}L)\) exchanged bits; this can happen before they have a chance to receive even a single message! This attack can be considered a "curse" of GST: the _adversarial shift_ of correct processes in time creates a (seemingly unavoidable) situation where \(\Omega(n^{2})\) messages are sent at GST (which in this case include \(L\) bits each, for a total of \(\Omega(n^{2}L)\)). Figure 2 illustrates the attack.
**DARE: Disperse, Agree, REtrieve.** In a nutshell, DARE follows three phases:
1. **Dispersal**: Processes attempt to disperse their values and obtain a _proof of dispersal_ for any value. This proof guarantees that the value is both (1) valid, and (2) retrievable.
2. **Agreement**: Processes propose a _hash_ of the value accompanied by its proof of dispersal to a Byzantine consensus algorithm for small \(L\) (e.g., \(O(\kappa)\)).
3. **Retrieval**: Using the decided hash, processes retrieve the corresponding value. The proof of dispersal ensures Retrieval will succeed and output a valid value.
This architecture is inspired by randomized asynchronous Byzantine algorithms [6, 68] which work with _expected_ bit complexity (the worst-case is unbounded in asynchrony [53]). As these algorithms work in expectation, they can rely on randomness to retrieve a value (\(\neq\bot\)) that is valid after an expected constant number of tries. However, in order to achieve the same effect (i.e., a constant number of retrievals) in the worst case in partial synchrony, DARE must guarantee that the Retrieval protocol _always_ outputs a valid value (\(\neq\bot\)) _a priori_, which shifts the difficulty of the problem almost entirely to the Dispersal phase.

Figure 1: Performance of various consensus algorithms with \(L\)-bit values and \(\kappa\)-bit security parameter. \({}^{\dagger}\) For asynchronous algorithms, we show the complexity in expectation instead of the worst-case (which is unbounded for deterministic safety guarantees due to the FLP impossibility result [53]). \({}^{\ddagger}\) Threshold Signatures (TS) are used to directly improve the original algorithm.
**Dispersal.** To obtain a proof of dispersal, a natural solution is for the leader to broadcast the value \(v\). Correct processes check the validity of \(v\) (i.e., if \(\mathsf{valid}(v)=\mathit{true}\)), store \(v\) for the Retrieval protocol, and produce a partial signature attesting to these two facts. The leader combines the partial signatures into a \((2t+1,n)\)-threshold signature (the proof of dispersal), which is sufficient to prove that DARE's Retrieval protocol [43] will output a valid value after the Agreement phase.
However, if leaders use best-effort broadcast [24] (i.e., simultaneously send the value to all other processes), they are still vulnerable to an _adversarial shift_ causing \(O(n^{2}L)\) communication. Instead, we do the following. First, we use a _view synchronizer_[77, 20, 21] to group leaders into _views_ in a rotating sequence. A view has \(\sqrt{n}\) leaders and a sequence has \(\sqrt{n}\) views. Leaders of the current view can concurrently broadcast their values while messages of other views are ignored. Second, instead of broadcasting the value simultaneously to all processes, a leader broadcasts the value to different subgroups of \(\sqrt{n}\) processes in intervals of \(\delta\) time (i.e., broadcast to the first subgroup, wait \(\delta\) time, broadcast to the second subgroup,...) until all processes have received the value. Neither idea individually is enough to improve over the \(O(n^{2}L)\) term. However, when they are combined, it becomes possible to balance the communication cost of the synchronizer (\(O(n^{2.5}\kappa)\) bits), the maximum cost of an _adversarial shift_ attack (\(O(n^{1.5}L)\) bits), and the broadcast rate to achieve the improved \(O(n^{1.5}L+n^{2.5}\kappa)\) bit complexity with asymptotically optimal \(O(\delta n)\) latency as shown in Figure 3.
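To see how the two ideas trade off, here is a back-of-the-envelope cost model in Python (a sketch only: the three expressions restate the asymptotic orders from the text with all hidden constants set to 1, and the shift term \(X\cdot n\cdot L\) is our reading of the attack bound, namely that the \(X\) leaders of a single view may each wastefully broadcast \(L\) bits to all \(n\) processes at GST):

```python
import math

def dispersal_cost(n, L, kappa, X, Y):
    """Orders of magnitude, constants dropped:
    - synchronizer: O(n/X) view changes at O(n^2 * kappa) bits each;
    - adversarial shift: X leaders x n recipients x L bits;
    - latency: O(n/X) views, each lasting O(n/Y) message delays."""
    sync_bits = n**3 * kappa / X
    shift_bits = X * n * L
    latency = (n / X) * (n / Y)
    return sync_bits + shift_bits, latency

n, L, kappa = 10_000, 10**6, 256
r = int(math.isqrt(n))
print(dispersal_cost(n, L, kappa, X=1, Y=n))   # single leader: n^3 * kappa dominates
print(dispersal_cost(n, L, kappa, X=n, Y=n))   # all leaders at once: n^2 * L dominates
print(dispersal_cost(n, L, kappa, X=r, Y=r))   # balanced: n^1.5 * L + n^2.5 * kappa
```

With \(X=Y=\sqrt{n}\), the model reproduces the claimed \(O(n^{1.5}L+n^{2.5}\kappa)\) bits and \(O(n)\) latency.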
**DARE-Stark.** As we explained, the main cost of the Dispersal phase is associated with obtaining a dispersal proof that a value is valid. Specifically, it comes from the cost of having to send the entire value (\(L\) bits) in a single message.
With Succinct Transparent ARguments of Knowledge (STARKs), we can entirely avoid sending the value in a single message. STARKs allow a process to compute a proof (\(O(poly(\kappa))\) bits) of a statement on some value without having to share that value. As an example, a process \(P_{i}\) can send \(\langle h,\sigma_{\texttt{STARK}}\rangle\) to a process \(P_{j}\), which can use \(\sigma_{\texttt{STARK}}\) to verify the statement "\(\exists v:\mathsf{valid}(v)=true\wedge\mathsf{hash}(v)=h\)", all without \(P_{j}\) ever receiving \(v\). As we detail in §6, by carefully crafting a more complex statement, we can modify DARE's Dispersal and Retrieval phases to function with at most \(O(poly(\kappa))\) bit-sized messages, obtaining DARE-Stark. This yields the overall near-optimal bit complexity of \(O(nL+n^{2}poly(\kappa))\). Currently, the main drawback of STARKs is their size and computation time in practice2, which we hope will improve in the future.

Footnote 2: The associated constants hidden by the "big O" notation result in computation in the order of seconds, proofs in the hundreds of KB, and memory usage several times greater [52].

Figure 2: The _adversarial shift_ attack on \(t+1\) leaders. The first line shows how leaders are optimistically ordered in time by the protocol to avoid redundant broadcasts (the blue speaker circle represents an _avoided_ redundant broadcast). The second line shows how leaders can slow down before GST and overlap at GST, making redundant broadcasts (seem) unavoidable.

Figure 3: Overview of DARE (Disperse, Agree, REtrieve).
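The resulting message pattern can be pictured with the sketch below. The prover/verifier names are purely illustrative stand-ins (no real STARK library API is implied), and a concrete backend would have to be substituted for the stub interface:

```python
import hashlib
from typing import Protocol

class StarkBackend(Protocol):
    """Hypothetical STARK interface; both methods are stubs in this sketch."""
    def prove(self, statement: bytes, witness: bytes) -> bytes: ...
    def verify(self, statement: bytes, proof: bytes) -> bool: ...

def statement_for(h: bytes) -> bytes:
    # Encodes the claim "exists v: valid(v) = true and sha256(v) = h".
    return b"exists v: valid(v) and sha256(v) == " + h.hex().encode()

def make_dispersal_message(backend: StarkBackend, v: bytes):
    """P_i ships only the hash and an O(poly(kappa))-bit proof, never v itself."""
    h = hashlib.sha256(v).digest()
    return h, backend.prove(statement_for(h), witness=v)

def check_dispersal_message(backend: StarkBackend, h: bytes, proof: bytes) -> bool:
    """P_j becomes convinced that a valid preimage of h exists, without receiving v."""
    return backend.verify(statement_for(h), proof)
```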
**Roadmap**. We discuss related work in §2. In §3, we define the system model. We give an overview of DARE in §4. In §5, we detail our Dispersal protocol. We go over DARE-Stark in §6. Lastly, we conclude the paper in §7. Detailed proofs are relegated to the optional appendix.
## 2 Related Work
We address the communication complexity of deterministic authenticated Byzantine consensus [64, 25] in partially synchronous distributed systems [51] for large inputs. Here, we discuss existing results in closely related contexts, and provide a brief overview of techniques, tools and building blocks which are often employed to tackle Byzantine consensus.3
Footnote 3: We use “consensus” and “agreement” interchangeably.
**Asynchrony**. In the asynchronous setting, Byzantine agreement is commonly known as Multi-valued Validated Byzantine Agreement, or MVBA [25]. Due to the FLP impossibility result [53], deterministic Byzantine agreement is unsolvable in asynchrony (which implies unbounded worst-case complexity). Hence, asynchronous MVBA solutions focus on expected complexity. This line of work was revitalized by HoneyBadgerBFT [72], the first practical fully asynchronous MVBA implementation. Like most other modern asynchronous MVBA protocols, it leverages randomization via a common coin [75], and it terminates in expected \(O(\log n)\) time with an expected bit complexity of \(O(n^{2}L+n^{3}\kappa\log n)\). [6] improves this to \(O(1)\) expected time and \(O(n^{2}L+n^{2}\kappa)\) expected bits, which is asymptotically optimal with \(L,\kappa\in O(1)\). Their result is later extended by [68] to large values, improving the complexity to \(O(nL+n^{2}\kappa)\) expected bits. This matches the best known lower bound [84, 3, 47], assuming \(\kappa\in O(1)\).
**Extension protocols**[78, 55, 56, 57]. An extension protocol optimizes for long inputs via a reduction to the same problem with small inputs (considered an oracle). Using extension protocols, several state-of-the-art results were achieved in the authenticated and unauthenticated models, both in synchronous and fully asynchronous settings for Byzantine consensus, Byzantine broadcast and reliable broadcast [78]. Applying the extension protocol of [78] to [74], synchronous Byzantine agreement can be implemented with optimal resiliency (\(t<n/2\)) and a bit complexity of \(O(nL+n^{2}\kappa)\). Interestingly, it has been demonstrated that synchronous Byzantine agreement can be implemented with a bit complexity of \(O(n(L+poly(\kappa)))\) using randomization [18]. The Dolev-Reischuk bound [47] is not violated in this case since the implementation tolerates a negligible (with \(\kappa\)) probability of failure, whereas the bound holds for deterministic protocols. In asynchrony, by applying the (asynchronous) extension protocol of [78] to [6], the same asymptotic result as [68] is achieved, solving asynchronous MVBA with an expected bit complexity of \(O(nL+n^{2}\kappa)\).
Unconditionally secure Byzantine agreement with large inputs has been addressed by [33, 34] under synchrony and [66] under asynchrony, assuming a common coin (implementable via unconditionally-secure Asynchronous Verifiable Secret Sharing [35]). Despite [60] utilizing erasure
codes to alleviate leader bottleneck, and the theoretical construction of [37] with exponential latency, there is, to the best of our knowledge, no viable extension protocol for Byzantine agreement in partial synchrony achieving results similar to ours (\(o(n^{2}L)\)).
**Error correction.** Coding techniques, such as erasure codes [19, 59, 10] or error-correction codes [81, 14], appear in state-of-the-art implementations of various distributed tasks: Asynchronous Verifiable Secret Sharing (AVSS) against a computationally bounded [43, 89, 83] or unbounded [35] adversary, Random Beacon [42], Atomic Broadcast in both the asynchronous [54, 61] and partially synchronous [28] settings, Information-Theoretic (IT) Asynchronous State Machine Replication (SMR) [50], Gradecast in synchrony and Reliable Broadcast in asynchrony [2], Asynchronous Distributed Key Generation (ADKG) [43, 44], Asynchronous Verifiable Information Dispersal (AVID) [9], Byzantine Storage [46, 12, 58], and MVBA [68, 78]. Coding techniques are often used to reduce the worst-case complexity by allowing a group of processes to balance and share the cost of sending a value to an individual (potentially faulty) node and are also used in combination with other techniques, such as commitment schemes [31, 68].
We now list several problems related to or used in solving Byzantine agreement.
**Asynchronous Common Subset (ACS)**. The goal in ACS [14, 15, 49] (also known as Vector Consensus [79, 48, 39]) is to agree on a subset of \(n-t\) proposals. When considering a generalization of the validity property, this problem represents the strongest variant of consensus [37]. Atomic Broadcast can be trivially reduced to ACS [48, 25, 72]. There are well-known simple asynchronous constructions that allow for the reduction of ACS to either (1) Reliable Broadcast and Binary Byzantine Agreement [15], or (2) MVBA [25] in the authenticated setting, where the validation predicate requires the output to be a vector of signed inputs from at least \(n-t\) parties. The first reduction enables the implementation of ACS with a cubic bit complexity, using the broadcast of [2]. The second reduction could be improved further with a more efficient underlying MVBA protocol, such as DARE-Stark.
**Asynchronous Verifiable Information Dispersal (AVID)**. AVID [26] is a form of "retrievable" broadcast that allows the dissemination of a value while providing a cryptographic proof that it can be retrieved. This primitive can be implemented with a total dispersal cost of \(O(L+n^{2}\kappa)\) bits exchanged and a retrieval cost of \(O(L+n\kappa)\) per node, relying only on the existence of collision-resistant hash functions [9]. AVID is similar to our Dispersal and Retrieval phases, but has two key differences. First, AVID's retrieval protocol only guarantees that a valid value will be retrieved if the original process dispersing the information was correct. Second, it is a broadcast protocol, having stricter delivery guarantees for each process. Concretely, if a correct process initiates the AVID protocol, it should eventually disperse its _own_ value. In contrast, we only require that a correct process obtains a proof of dispersal for _some_ value.
**Provable Broadcast (PB) and Asynchronous Provable Dispersal Broadcast (APDB)**. PB [7] is a primitive used to acquire a succinct proof of external validity. It is similar to our Dispersal phase, including the algorithm itself, but without the provision of a proof of dispersal (i.e., retrievability, only offering proof of validity). The total bit complexity for \(n\) PB-broadcasts from distinct processes amounts to \(O(n^{2}L)\). APDB [68] represents an advancement of AVID, drawing inspiration from PB. It sacrifices PB's validity guarantees to incorporate AVID's dissemination and retrieval properties. By leveraging the need to retrieve and validate a value a constant number of times in expectation, [68] attains optimal \(O(nL+n^{2}\kappa)\) expected complexity in asynchrony. However, this approach falls short in the worst-case scenario of a partially synchronous solution, where \(n\) reconstructions would cost \(\Omega(n^{2}L)\).
**Asynchronous Data Dissemination (ADD)**. In ADD [43], a subset of \(t+1\) correct processes initially share a common \(L\)-sized value \(v\), and the goal is to disseminate \(v\) to all correct processes,
despite the presence of up to \(t\) Byzantine processes. The approach of [43] is information-theoretically secure, tolerates up to one-third malicious nodes and has a bit complexity of \(O(nL+n^{2}\log n)\). (In DARE, we rely on ADD in a "closed-box" manner; see §4.)
## 3 Preliminaries
**Processes.** We consider a static set \(\mathsf{Process}=\{P_{1},P_{2},...,P_{n}\}\) of \(n=3t+1\) processes, out of which (at most) \(t>0\) can be Byzantine and deviate arbitrarily from their prescribed protocol. A Byzantine process is said to be _faulty_; a non-faulty process is said to be _correct_. Processes communicate by exchanging messages over an authenticated point-to-point network. Furthermore, the communication network is reliable: if a correct process sends a message to a correct process, the message is eventually received. Processes have local hardware clocks. Lastly, we assume that local steps of processes take zero time, as the time needed for local computation is negligible compared to the message delays.
**Partial synchrony.** We consider the standard partially synchronous model [51]. For every execution, there exists an unknown Global Stabilization Time (GST) and a positive duration \(\delta\) such that the message delays are bounded by \(\delta\) after GST. We assume that \(\delta\) is known by processes. All correct processes start executing their prescribed protocol by GST. The hardware clocks of processes may drift arbitrarily before GST, but do not drift thereafter. We underline that our algorithms require minimal changes to preserve their correctness even if \(\delta\) is unknown (these modifications are specified in Appendix C.2), although their complexity might be higher.
**Cryptographic primitives.** Throughout the paper, \(\mathsf{hash}(\cdot)\) denotes a collision-resistant hash function. The codomain of the aforementioned \(\mathsf{hash}(\cdot)\) function is denoted by \(\mathsf{Hash\_Value}\).
Moreover, we assume a \((k,n)\)-threshold signature scheme [82], where \(k=n-t=2t+1\). In this scheme, each process holds a distinct private key, and there is a single public key. Each process \(P_{i}\) can use its private key to produce a partial signature for a message \(m\) by invoking \(\mathsf{share\_sign}_{i}(m)\). A set of partial signatures \(S\) for a message \(m\) from \(k\) distinct processes can be combined into a single threshold signature for \(m\) by invoking \(\mathsf{combine}(S)\); a threshold signature for \(m\) proves that \(k\) processes have (partially) signed \(m\). Furthermore, partial and threshold signatures can be verified: given a message \(m\) and a signature \(\Sigma_{m}\), \(\mathsf{verify\_sig}(m,\Sigma_{m})\) returns _true_ if and only if \(\Sigma_{m}\) is a valid signature for \(m\). Where appropriate, the verifications are left implicit. We denote by \(\mathsf{P\_Signature}\) and \(\mathsf{T\_Signature}\) the set of partial and threshold signatures, respectively. The size of cryptographic objects (i.e., hashes, signatures) is denoted by \(\kappa\); we assume that \(\kappa>\log n\).4
Footnote 4: For \(\kappa\leq\log n,t\in O(n)\) faulty processes would have computational power exponential in \(\kappa\), breaking cryptographic hardness assumptions.
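The following Python sketch pins down this interface with a deliberately insecure toy (an assumption for illustration only): a partial signature is an HMAC tag under \(P_{i}\)'s key, and a "threshold signature" is simply a set of \(k\) valid partials that the verifier rechecks. A real \((k,n)\)-threshold scheme (e.g., BLS-based) instead combines the partials into a single constant-size signature verifiable against one public key.

```python
import hashlib
import hmac

N, T = 7, 2                  # toy system with n = 3t + 1 = 7 processes
K = 2 * T + 1                # threshold k = n - t = 2t + 1
KEYS = {i: hashlib.sha256(b"sk-%d" % i).digest() for i in range(1, N + 1)}

def share_sign(i: int, m: bytes):
    """Partial signature of P_i: an HMAC-SHA256 tag under its private key."""
    return (i, hmac.new(KEYS[i], m, hashlib.sha256).digest())

def combine(partials):
    """Toy 'combine': just bundle k partials from distinct signers."""
    assert len({i for i, _ in partials}) >= K
    return frozenset(partials)

def verify_sig(m: bytes, sigma) -> bool:
    """Toy verification (the verifier knows all keys here, which is insecure;
    it only mirrors the interface used throughout the paper)."""
    good = {i for i, tag in sigma
            if hmac.compare_digest(tag, hmac.new(KEYS[i], m, hashlib.sha256).digest())}
    return len(good) >= K

sigma = combine({share_sign(i, b"hash-of-value") for i in range(1, K + 1)})
assert verify_sig(b"hash-of-value", sigma)
```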
**Reed-Solomon codes [81].** Our algorithms rely on Reed-Solomon (RS) codes [80]. Concretely, DARE utilizes (in a "closed-box" manner) an algorithm which internally builds upon error-correcting RS codes. DARE-Stark directly uses RS erasure codes (no error correction is required).
We use \(\mathsf{encode}(\cdot)\) and \(\mathsf{decode}(\cdot)\) to denote RS' encoding and decoding algorithms. In a nutshell, \(\mathsf{encode}(\cdot)\) takes a value \(v\), chunks it into the coefficients of a polynomial of degree \(t\) (the maximum number of faults), and outputs \(n\) (the total number of processes) evaluations of the polynomial (RS symbols); Symbol denotes the set of RS symbols. \(\mathsf{decode}(\cdot)\) takes a set of \(t+1\) RS symbols \(S\) and interpolates them into a polynomial of degree \(t\), whose coefficients are concatenated and output.
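A minimal sketch of this coefficient-based encoding, over a prime field and without error correction (so, unlike the error-correcting decoder DARE relies on, `decode` below assumes its \(t+1\) input symbols are all correct):

```python
P = 2**61 - 1  # an illustrative prime modulus for the field

def _poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def _poly_add(a, b):
    m = max(len(a), len(b))
    a = a + [0] * (m - len(a))
    b = b + [0] * (m - len(b))
    return [(x + y) % P for x, y in zip(a, b)]

def encode(coeffs, n):
    """Treat the t+1 chunks as coefficients of a degree-t polynomial and
    output its evaluations at the points 1..n (one RS symbol per process)."""
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def decode(symbols):
    """Recover the coefficients from any t+1 (point, value) pairs via
    Lagrange interpolation in coefficient form."""
    coeffs = [0]
    for i, (xi, yi) in enumerate(symbols):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(symbols):
            if i != j:
                basis = _poly_mul(basis, [(-xj) % P, 1])  # multiply by (x - xj)
                denom = denom * ((xi - xj) % P) % P
        scale = yi * pow(denom, P - 2, P) % P             # Fermat inverse mod P
        coeffs = _poly_add(coeffs, [c * scale % P for c in basis])
    return coeffs

chunks = [11, 22, 33]                 # t + 1 = 3 chunks, i.e., t = 2, n = 3t + 1 = 7
symbols = encode(chunks, n=7)
assert decode(symbols[:3]) == chunks  # any 3 of the 7 symbols suffice
```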
**Complexity of Byzantine consensus.** Let \(\mathsf{Consensus}\) be a partially synchronous Byzantine consensus algorithm, and let \(\mathcal{E}(\mathsf{Consensus})\) denote the set of all possible executions. Let \(\alpha\in\mathcal{E}(\mathsf{Consensus})\) be an execution, and \(t_{d}(\alpha)\) be the first time by which all correct processes have
decided in \(\alpha\). The bit complexity of \(\alpha\) is the total number of bits sent by correct processes during the time period \([\mathrm{GST},\infty)\). The latency of \(\alpha\) is \(\max(0,t_{d}(\alpha)-\mathrm{GST})\).
The _bit complexity_ of Consensus is defined as
\[\max_{\alpha\in\mathcal{E}(\mathsf{Consensus})}\bigg{\{}\text{bit complexity of }\alpha\bigg{\}}.\]
Similarly, the _latency_ of Consensus is defined as
\[\max_{\alpha\in\mathcal{E}(\mathsf{Consensus})}\bigg{\{}\text{latency of }\alpha\bigg{\}}.\]
## 4 DARE
This section presents DARE (Disperse, Agree, REtrieve), which is composed of three algorithms:
1. Disperser, which disperses the proposals;
2. Agreement, which ensures agreement on the hash of a previously dispersed proposal; and
3. Retriever, which rebuilds the proposal corresponding to the agreed-upon hash.
We start by introducing the aforementioned building blocks (§4.1). Then, we show how they are composed into DARE (§4.2). Finally, we prove the correctness and complexity of DARE (§4.3).
### Building Blocks: Overview
In this subsection, we formally define the three building blocks of DARE. Concretely, we define their interface and properties, as well as their complexity.
#### 4.1.1 Disperser
**Interface & properties.**Disperser solves a problem similar to that of AVID [26]. In a nutshell, each correct process aims to disperse its value to all correct processes: eventually, all correct processes acquire a proof that a value with a certain hash has been successfully dispersed.
Concretely, Disperser exposes the following interface:
* **request**\(\mathsf{disperse}(v\in\mathsf{Value})\): a process disperses a value \(v\); each correct process invokes \(\mathsf{disperse}(v)\) exactly once and only if \(\mathsf{valid}(v)=\mathit{true}\).
* **indication**\(\mathsf{acquire}(h\in\mathsf{Hash\_Value},\Sigma_{h}\in\mathsf{T\_Signature})\): a process acquires a pair \((h,\Sigma_{h})\).
We say that a correct process _obtains_ a threshold signature (resp., a value) if and only if it stores the signature (resp., the value) in its local memory. (Obtained values can later be retrieved by all correct processes using Retriever; see §4.1.3 and Algorithm 1.) Disperser ensures the following:
* _Integrity_: If a correct process acquires a hash-signature pair \((h,\Sigma_{h})\), then \(\mathsf{verify\_sig}(h,\Sigma_{h})=\mathit{true}\).
* _Termination_: Every correct process eventually acquires at least one hash-signature pair.
* _Redundancy_: Let a correct process obtain a threshold signature \(\Sigma_{h}\) such that \(\mathsf{verify\_sig}(h,\Sigma_{h})=\mathit{true}\), for some hash value \(h\). Then, (at least) \(t+1\) correct processes have obtained a value \(v\) such that (1) \(\mathsf{hash}(v)=h\), and (2) \(\mathsf{valid}(v)=\mathit{true}\).
Note that it is not required for all correct processes to acquire the same hash value (nor the same threshold signature). Moreover, the specification allows for multiple acquired pairs.
**Complexity.**Disperser exchanges \(O(n^{1.5}L+n^{2.5}\kappa)\) bits after GST. Moreover, it terminates in \(O(n)\) time after GST.
**Implementation.** The details on Disperser's implementation are relegated to §5.
#### 4.1.2 Agreement
**Interface & properties.**Agreement is a Byzantine consensus algorithm.5 In Agreement, processes propose and decide pairs \((h\in\mathsf{Hash\_Value},\Sigma_{h}\in\mathsf{T\_Signature})\); moreover, \(\mathsf{valid}(h,\Sigma_{h})\equiv\mathsf{verify\_sig}(h,\Sigma_{h})\).
Footnote 5: Recall that the interface and properties of Byzantine consensus algorithms are introduced in §1.
**Complexity.**Agreement achieves \(O(n^{2}\kappa)\) bit complexity and \(O(n)\) latency.
**Implementation.** We "borrow" the implementation from (Konig et al., 2017). In brief, Agreement is a "leader-based" consensus algorithm whose computation unfolds in views. Each view has a single leader, and it employs a "leader-to-all, all-to-leader" communication pattern. Agreement's safety relies on standard techniques (Konig et al., 2017; D'Alessio et al., 2018; Konig et al., 2019): (1) quorum intersection (safety within a view), and (2) a "locking" mechanism (safety across multiple views). As for liveness, Agreement guarantees termination once all correct processes are in the same view (for "long enough" time) with a correct leader. (For full details on Agreement, see (Konig et al., 2019).)
#### 4.1.3. Retriever
**Interface & properties.**In Retriever, each correct process starts with either (1) some value, or (2) \(\bot\). Eventually, all correct processes output the same value. Formally, Retriever exposes the following interface:
* **request** \(\mathsf{input}(v\in\mathsf{Value}\cup\{\bot\})\): a process inputs a value or \(\bot\); each correct process invokes \(\mathsf{input}(\cdot)\) exactly once. Moreover, the following is assumed:
* No two correct processes invoke \(\mathsf{input}(v_{1}\in\mathsf{Value})\) and \(\mathsf{input}(v_{2}\in\mathsf{Value})\) with \(v_{1}\neq v_{2}\).
* At least \(t+1\) correct processes invoke \(\mathsf{input}(v\in\mathsf{Value})\) (i.e., \(v\neq\bot\)).
* **indication** output\((v^{\prime}\in\mathsf{Value})\): a process outputs a value \(v^{\prime}\).
The following properties are ensured:
* _Agreement:_ No two correct processes output different values.
* _Validity:_ Let a correct process input a value \(v\). No correct process outputs a value \(v^{\prime}\neq v\).
* _Termination:_ Every correct process eventually outputs a value.
**Complexity.**Retriever exchanges \(O(nL+n^{2}\log n)\) bits after GST (and before every correct process outputs a value). Moreover, Retriever terminates in \(O(1)\) time after GST.
**Implementation.**Retriever's implementation is "borrowed" from (Konig et al., 2019). In summary, Retriever relies on Reed-Solomon codes (Konig et al., 2019) to encode the input value \(v\neq\bot\) into \(n\) symbols. Each correct process \(Q\) which inputs \(v\neq\bot\) to Retriever encodes \(v\) into \(n\) RS symbols \(s_{1},s_{2},...,s_{n}\). \(Q\) sends each RS symbol \(s_{i}\) to the process \(P_{i}\). When \(P_{i}\) receives \(t+1\) identical RS symbols \(s_{i}\), \(P_{i}\) is sure that \(s_{i}\) is a "correct" symbol (i.e., it can be used to rebuild \(v\)) as it was computed by at least one correct process. At this moment, \(P_{i}\) broadcasts \(s_{i}\). Once each correct process \(P\) receives \(2t+1\) (or more) RS symbols, \(P\) tries to rebuild \(v\) (with some error-correction). (For full details on Retriever, see (Konig et al., 2019).)
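In event-handler form, the flow looks roughly as follows (a sketch reusing the toy `encode`/`decode` from the Reed-Solomon snippet in §3; `send` and `broadcast` are assumed network primitives, and, unlike the real protocol, which error-corrects over the \(2t+1\) received symbols [43], this toy naively decodes from the first \(t+1\) echoes):

```python
from collections import Counter

class RetrieverSketch:
    def __init__(self, i, n, t, send, broadcast):
        self.i, self.n, self.t = i, n, t
        self.send, self.broadcast = send, broadcast
        self.candidates = Counter()  # copies received for my own symbol s_i
        self.echoes = {}             # j -> symbol broadcast by P_j
        self.echoed = False

    def input(self, chunks):
        """chunks: the t+1 field elements of v, or None in place of ⊥."""
        if chunks is not None:
            for j, symbol in enumerate(encode(chunks, self.n), start=1):
                self.send(j, ("SYMBOL", symbol))   # s_j goes to P_j

    def on_symbol(self, symbol):
        # t+1 identical copies guarantee at least one correct sender computed s_i.
        self.candidates[symbol] += 1
        if self.candidates[symbol] == self.t + 1 and not self.echoed:
            self.echoed = True
            self.broadcast(("ECHO", self.i, symbol))

    def on_echo(self, j, symbol):
        self.echoes[j] = symbol
        if len(self.echoes) == 2 * self.t + 1:
            # The real Retriever corrects up to t wrong symbols here.
            return decode(list(self.echoes.values())[: self.t + 1])
```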
### Pseudocode
Algorithm 1 gives DARE's pseudocode. We explain it from the perspective of a correct process \(P_{i}\). An execution of DARE consists of three phases (each of which corresponds to one building block):
1. _Dispersal:_ Process \(P_{i}\) disperses its proposal \(v_{i}\) using Disperser (line 9). Eventually, \(P_{i}\) acquires a hash-signature pair \((h_{i},\Sigma_{i})\) (line 10) due to the termination property of Disperser.
2. _Agreement_: Process \(P_{i}\) proposes the previously acquired hash-signature pair \((h_{i},\Sigma_{i})\) to Agreement (line 11). As Agreement satisfies termination and agreement, all correct processes eventually agree on a hash-signature pair \((h,\Sigma_{h})\) (line 12).
3. _Retrieval_: Once process \(P_{i}\) decides \((h,\Sigma_{h})\) from Agreement, it checks whether it has previously obtained a value \(v\) with \(\mathsf{hash}(v)=h\) (line 13). If it has, \(P_{i}\) inputs \(v\) to Retriever; otherwise, \(P_{i}\) inputs \(\bot\) (line 14). The required preconditions for Retriever are met: * No two correct processes input different non-\(\bot\) values to Retriever as \(\mathsf{hash}(\cdot)\) is collision-resistant. * At least \((t+1)\) correct processes input a value (and not \(\bot\)) to Retriever. Indeed, as \(\Sigma_{h}\) is obtained by a correct process, \(t+1\) correct processes have obtained a value \(v\neq\bot\) with \(\mathsf{hash}(v)=h\) (due to redundancy of Disperser), and all of these processes input \(v\). Therefore, all correct processes (including \(P_{i}\)) eventually output the same value \(v^{\prime}\) from Retriever (due to the termination property of Retriever; line 15), which represents the decision of DARE (line 16). Note that \(v^{\prime}=v\neq\bot\) due to the validity of Retriever.
### Proof of Correctness & Complexity
We start by proving the correctness of DARE.
DARE is correct.
Proof.: Every correct process starts the dispersal of its proposal (line 9). Due to the termination property of Disperser, every correct process eventually acquires a hash-signature pair (line 10). Hence, every correct process eventually proposes to Agreement (line 11), which implies that every correct process eventually decides the same hash-signature pair \((h,\Sigma_{h})\) from Agreement (line 12) due to the agreement and termination properties of Agreement.
As \((h,\Sigma_{h})\) is decided by all correct processes, at least \(t+1\) correct processes \(P_{i}\) have obtained a value \(v\) such that (1) \(\mathsf{hash}(v)=h\), and (2) \(\mathsf{valid}(v)=\mathit{true}\) (due to the redundancy property of Disperser). Therefore, all of these correct processes input \(v\) to Retriever (line 14). Moreover, no correct process inputs a different value (as \(\mathsf{hash}(\cdot)\) is collision-resistant). Thus, the conditions required by Retriever are met, which implies that all correct processes eventually output the same valid value (namely, \(v\)) from Retriever (line 15), and decide it (line 16).
Next, we prove the complexity of DARE.
**Theorem 2**.: DARE achieves \(O(n^{1.5}L+n^{2.5}\kappa)\) bit complexity and \(O(n)\) latency.
Proof.: As DARE is a sequential composition of its building blocks, its complexity is the sum of the complexities of (1) Disperser, (2) Agreement, and (3) Retriever. Hence, the bit complexity is
\[\underbrace{O(n^{1.5}L+n^{2.5}\kappa)}_{\text{Disperser}}+\underbrace{O(n^{2}\kappa)}_{\text{Agreement}}+\underbrace{O(nL+n^{2}\log n)}_{\text{Retriever}}=O(n^{1.5}L+n^{2.5}\kappa).\]
Similarly, the latency is \(O(n)\).
## 5 Disperser: Implementation & Analysis
This section focuses on Disperser. Namely, we present its implementation (§5.1), and (informally) analyze its correctness and complexity (§5.2). Formal proofs can be found in Appendix A.
### Implementation
Disperser's pseudocode is given in Algorithm 2. In essence, each execution unfolds in _views_, where each view has \(X\) _leaders_ (\(0<X\leq n\) is a generic parameter); the set of all views is denoted by View. Given a view \(V\), \(\mathsf{leaders}(V)\) denotes the \(X\)-sized set of leaders of the view \(V\). In each view, a leader disperses its value to \(Y\)-sized groups of processes (\(0<Y\leq n\) is a generic parameter) at a time (line 14), with a \(\delta\)-waiting step in between (line 15). Before we thoroughly explain the pseudocode, we introduce Sync, Disperser's view synchronization algorithm [36, 88, 65].
**Sync.** Its responsibility is to bring all correct processes to the same view with a correct leader for (at least) \(\Delta=\delta\frac{n}{Y}+3\delta\) time. Precisely, Sync exposes the following interface:
* **indication** \(\mathsf{advance}(V\in\mathsf{View})\): a process enters a new view \(V\). Sync guarantees _eventual synchronization_: there exists a time \(\tau_{sync}\geq\text{GST}\) (_synchronization time_) such that (1) all correct processes are in the same view \(V_{sync}\) (_synchronization view_) from time \(\tau_{sync}\) to (at least) time \(\tau_{sync}+\Delta\), and (2) \(V_{sync}\) has a correct leader. We denote by \(V_{sync}^{*}\) the smallest synchronization view, whereas \(\tau_{sync}^{*}\) denotes the first synchronization time. Similarly, \(V_{max}\) denotes the greatest view entered by a correct process before GST (when such a view does not exist, \(V_{max}=0\)).
The implementation of Sync (see Appendix A.1) is highly inspired by RareSync, a view synchronization algorithm introduced in [36]. In essence, when a process enters a new view, it stays in the view for \(O(\Delta)=O(\frac{n}{Y})\) time. Once it wishes to proceed to the next view, the process engages in an "all-to-all" communication step (which exchanges \(O(n^{2}\kappa)\) bits); this step signals the end of the current view, and the beginning of the next one. Throughout views, leaders are rotated in a round-robin manner: each process is a leader for exactly one view in any sequence of \(\frac{n}{X}\) consecutive views. As \(O(\frac{n}{X})\) views (after GST) are required to reach a correct leader, Sync exchanges \(O(\frac{n}{X})\cdot O(n^{2}\kappa)=O(\frac{n^{3}\kappa}{X})\) bits (before synchronization, i.e., before \(\tau_{sync}^{*}+\Delta\)); since each view takes \(O(\frac{n}{Y})\) time, synchronization is ensured within \(O(\frac{n}{X})\cdot O(\frac{n}{Y})=O(\frac{n^{2}}{XY})\) time.
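For illustration, a round-robin leader schedule with the property Sync needs (each process leads exactly once in any \(\frac{n}{X}\) consecutive views) can be realized as follows. This is a minimal construction of our own, not RareSync's actual rotation.

```python
# Illustrative round-robin leader rotation for Sync (a minimal construction
# satisfying the stated property; not taken from RareSync itself).
def leaders(view: int, n: int, x: int) -> set:
    # View v gets X consecutive leader indices, wrapping around n processes.
    start = (view * x) % n
    return {(start + i) % n for i in range(x)}

n, x = 16, 4
# Every process leads exactly once within any n/X = 4 consecutive views.
covered = set().union(*(leaders(v, n, x) for v in range(n // x)))
assert covered == set(range(n))
```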
Disperser relies on the following properties of Sync (along with eventual synchronization):
* _Monotonicity:_ Any correct process enters monotonically increasing views.
* _Stabilization:_ Any correct process enters a view \(V\geq V_{max}\) by time \(\text{GST}+3\delta\).
* _Limited entrance:_ In the time period \([\text{GST},\text{GST}+3\delta)\), any correct process enters \(O(1)\) views.
* _Overlapping:_ For any view \(V>V_{max}\), all correct processes overlap in \(V\) for (at least) \(\Delta\) time.
* _Limited synchronization view:_\(V_{sync}^{*}-V_{max}=O(\frac{n}{X})\).
* _Complexity:_ Sync exchanges \(O(\frac{n^{3}\kappa}{X})\) bits during the time period \([\text{GST},\tau^{*}_{sync}+\Delta]\), and it synchronizes all correct processes within \(O(\frac{n^{2}}{XY})\) time after GST (\(\tau^{*}_{sync}+\Delta-\text{GST}=O(\frac{n^{2}}{XY})\)).

The aforementioned properties of Sync are formally proven in Appendix A.1.
**Algorithm description.** Correct processes transit through views based on Sync's indications (line 10): when a correct process receives \(\mathsf{advance}(V)\) from Sync, it stops participating in the previous view and starts participating in \(V\).
Once a correct leader \(P_{l}\) enters a view \(V\), it disperses its proposal via dispersal messages. As already mentioned, \(P_{l}\) sends its proposal to \(Y\)-sized groups of processes (line 14) with a \(\delta\)-waiting step in between (line 15). When a correct (non-leader) process \(P_{i}\) (which participates in the view \(V\)) receives a dispersal message from \(P_{l}\), \(P_{i}\) checks whether the dispersed value is valid (line 17). If it is, \(P_{i}\) partially signs the hash of the value and sends it back to \(P_{l}\) (line 20). When \(P_{l}\) collects \(2t+1\) ack messages, it (1) creates a threshold signature for the hash of its proposal (line 24), and (2) broadcasts the signature (along with the hash of its proposal) to all processes via a confirm message (line 25). Finally, when \(P_{l}\) (or any other correct process) receives a confirm message (line 27), it (1) acquires the received hash-signature pair (line 28), (2) disseminates the pair to "help" the other processes (line 29), and (3) stops executing Disperser (line 30).
```
1:  Uses:
2:    Sync, instance sync   ▷ ensures a Δ = δ·(n/Y) + 3δ overlap in a view with a correct leader
3:  upon init:
4:    Value proposal_i ← ⊥
5:    Integer received_ack_i ← 0
6:    Map(Hash_Value → Value) obtained_values_i ← empty
7:  upon disperse(Value v_i):
8:    proposal_i ← v_i
9:    start sync
10: upon sync.advance(View V):   ▷ P_i stops participating in the previous view
11:   ▷ First part of the view
12:   if P_i ∈ leaders(V):
13:     for Integer k ← 1 to n/Y:
14:       send ⟨dispersal, proposal_i⟩ to P_{(k-1)Y+1}, P_{(k-1)Y+2}, ..., P_{kY}
15:       wait δ time
16:   every process:
17:     upon reception of ⟨dispersal, Value v_j⟩ from process P_j ∈ leaders(V) and valid(v_j) = true:
18:       Hash_Value h ← hash(v_j)
19:       obtained_values_i[h] ← v_j
20:       send ⟨ack, h, share_sign_i(h)⟩ to P_j
21:   ▷ Second part of the view
22:   if P_i ∈ leaders(V):
23:     upon exists Hash_Value h such that ⟨ack, h, ·⟩ has been received from 2t+1 processes:
24:       T_Signature Σ_h ← combine({P_Signature sig | sig is received in the ack messages})
25:       broadcast ⟨confirm, h, Σ_h⟩
26:   every process:
27:     upon reception of ⟨confirm, Hash_Value h, T_Signature Σ_h⟩ and verify_sig(h, Σ_h) = true:
28:       trigger acquire(h, Σ_h)
29:       broadcast ⟨confirm, h, Σ_h⟩
30:       stop executing Disperser (and Sync)
```
**Algorithm 2** Disperser: Pseudocode (for process \(P_{i}\))
### Analysis
**Correctness.** Once all correct processes synchronize in the view \(V^{*}_{sync}\) (the smallest synchronization view), all correct processes acquire a hash-signature pair. Indeed, \(\Delta=\delta\frac{n}{Y}+3\delta\) time is sufficient for
a correct leader \(P_{l}\in\mathsf{leaders}(V^{*}_{sync})\) to (1) disperse its proposal \(\mathit{proposal}_{l}\) to all processes (line 14), (2) collect \(2t+1\) partial signatures for \(h=\mathsf{hash}(\mathit{proposal}_{l})\) (line 23), and (3) disseminate a threshold signature for \(h\) (line 25). When a correct process receives the aforementioned threshold signature (line 27), it acquires the hash-signature pair (line 28) and stops executing Disperser (line 30).
**Complexity.**Disperser terminates once all correct processes are synchronized in a view with a correct leader. The synchronization is ensured in \(O(\frac{n^{2}}{XY})\) time after GST (as \(\tau^{*}_{sync}+\Delta-\mathrm{GST}=O(\frac{n^{2}}{XY})\)). Hence, Disperser terminates in \(O(\frac{n^{2}}{XY})\) time after GST.
Let us analyze the number of bits Disperser exchanges. Any execution of Disperser can be separated into two post-GST periods: (1) _unsynchronized_, from GST until GST \(+3\delta\), and (2) _synchronized_, from GST \(+3\delta\) until \(\tau^{*}_{sync}+\Delta\). First, we study the number of bits correct processes send via dispersal, ack and confirm messages in the aforementioned periods:
* Unsynchronized period: Due to the \(\delta\)-waiting step (line 15), each correct process sends dispersal messages (line 14) to (at most) \(3=O(1)\) \(Y\)-sized groups. Hence, each correct process sends \(O(1)\cdot O(Y)\cdot L=O(YL)\) bits through dispersal messages. Due to the limited entrance property of Sync, each correct process enters \(O(1)\) views during the unsynchronized period. In each view, each correct process sends (at most) \(O(X)\) ack messages (one to each leader; line 20) and \(O(n)\) confirm messages (line 25). As each ack and confirm message carries \(\kappa\) bits, all correct processes send \[n\cdot\big{(}\underbrace{O(YL)}_{\text{dispersal}}+\underbrace{O(X\kappa)}_{\text{ack}}+\underbrace{O(n\kappa)}_{\text{confirm}}\big{)}\] \[=O(nYL+n^{2}\kappa)\text{ bits via dispersal, ack and confirm messages.}\]
* Synchronized period: Recall that all correct processes acquire a hash-signature pair (and stop executing Disperser) by time \(\tau^{*}_{sync}+\Delta\), and they do so in the view \(V^{*}_{sync}\). As correct processes enter monotonically increasing views, no correct process enters a view greater than \(V^{*}_{sync}\). By the stabilization property of Sync, each correct process enters a view \(V\geq V_{max}\) by time GST \(+3\delta\). Moreover, until \(\tau^{*}_{sync}+\Delta\), each correct process enters (at most) \(O(\frac{n}{X})\) views (due to the limited synchronization view and monotonicity properties of Sync). Importantly, no correct leader exists in any view \(V\) with \(V_{max}<V<V^{*}_{sync}\); otherwise, \(V=V^{*}_{sync}\) as processes overlap for \(\Delta\) time in \(V\) (due to the overlapping property of Sync). Hence, for each view \(V\) with \(V_{max}<V<V^{*}_{sync}\), all correct processes send \(O(nX\kappa)\) bits (all through ack messages; line 20). In \(V_{max}\) and \(V^{*}_{sync}\), all correct processes send (1) \(2\cdot O(XnL)\) bits through dispersal messages (line 14), (2) \(2\cdot O(nX\kappa)\) bits through ack messages (line 20), and (3) \(2\cdot O(Xn\kappa)\) bits through confirm messages (line 25). Therefore, all correct processes send \[O(\frac{n}{X})\cdot\underbrace{O(nX\kappa)}_{\text{ack}}+\underbrace{O(XnL)}_{\text{dispersal in }V_{max}\text{ and }V^{*}_{sync}}+\underbrace{O(nX\kappa)}_{\text{ack in }V_{max}\text{ and }V^{*}_{sync}}+\underbrace{O(Xn\kappa)}_{\text{confirm in }V_{max}\text{ and }V^{*}_{sync}}\] \[=O(nXL+n^{2}\kappa)\text{ bits via dispersal, ack and confirm messages.}\]
We cannot neglect the complexity of Sync, which exchanges \(O(\frac{n^{3}\kappa}{X})\) bits during the time period \([\mathrm{GST},\tau^{*}_{sync}+\Delta]\). Hence, the total number of bits Disperser exchanges is
\[\underbrace{O(nYL+n^{2}\kappa)}_{\text{unsynchronized period}}+ \underbrace{O(nXL+n^{2}\kappa)}_{\text{synchronized period}}+\underbrace{O(\frac{n^{3}\kappa}{X})}_{ \text{Sync}}=O(nYL+nXL+\frac{n^{3}\kappa}{X}).\]
With \(X=Y=\sqrt{n}\), Disperser terminates in optimal \(O(n)\) time, and exchanges \(O(n^{1.5}L+n^{2.5}\kappa)\) bits. Our analysis is illustrated in Figure 4.
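As a quick, purely illustrative sanity check on this trade-off, the snippet below evaluates the displayed cost expression for a few parameter choices; the constants are made-up numbers, not measurements.

```python
# Illustrative check that X = Y = sqrt(n) balances Disperser's cost terms.
import math

def disperser_bits(n, L, kappa, X, Y):
    return n * Y * L + n * X * L + n**3 * kappa / X

n, L, kappa = 1024, 10**6, 256
s = math.isqrt(n)
balanced = disperser_bits(n, L, kappa, s, s)
# Candidate (X, Y) pairs kept to XY >= n so the O(n^2/(XY)) sync time stays O(n).
others = [disperser_bits(n, L, kappa, X, Y)
          for X in (1, 4, 1024) for Y in (1, 4, 1024) if X * Y >= n]
assert balanced <= min(others)
```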
## 6 DARE-Stark
In this section, we present DARE-Stark, a variant of DARE which relies on STARK proofs. Importantly, DARE-Stark achieves \(O(nL+n^{2}\mathit{poly}(\kappa))\) bit complexity, nearly tight to the \(\Omega(nL+n^{2})\) lower bound, while preserving optimal \(O(n)\) latency.
First, we revisit Disperser, pinpointing its complexity on proving RS encoding (§6.1). We then provide an overview of STARKs, a cryptographic primitive providing succinct proofs of knowledge (§6.2). We finally present DARE-Stark, which uses STARKs for provable RS encoding, thus improving on DARE's complexity (§6.3).
### Revisiting DARE: What Causes Disperser's Complexity?
Recall that Disperser exchanges \(O(n^{1.5}L+n^{2.5}\kappa)\) bits. This is due to a fundamental requirement of Retriever: at least \(t+1\) correct processes must have obtained the value \(v\) by the time Agreement decides \(h=\mathsf{hash}(v)\). Retriever leverages this requirement to prove the correct encoding of RS symbols. In brief (as explained in §4.1.3): (1) every correct process \(P\) that obtained \(v\neq\bot\) encodes it into \(n\) RS symbols \(s_{1},\ldots,s_{n}\); (2) \(P\) sends each \(s_{i}\) to \(P_{i}\); (3) upon receiving \(t+1\) identical copies of \(s_{i}\), \(P_{i}\) can trust \(s_{i}\) to be the \(i\)-th RS symbol for \(v\) (note that \(s_{i}\) can be trusted only because it was produced by at least one correct process - nothing else proves \(s_{i}\)'s relationship to \(v\)!); (4) every correct process \(P_{i}\) disseminates \(s_{i}\), enabling the reconstruction of \(v\) by means of error-correcting decoding. In summary, DARE bottlenecks on Disperser, and Disperser's complexity is owed to the need to prove the correct encoding of RS symbols in Retriever. Succinct arguments of knowledge (such as STARKs), however, allow one to publicly prove the relationship between an RS symbol and the value it encodes, eliminating the need to disperse the entire value to \(t+1\) correct processes - a dispersal of provably correct RS symbols suffices. DARE-Stark builds upon this idea.
### STARKs
Figure 4: Illustration of Disperser's bit complexity.

First introduced in [16], STARKs are succinct, universal, transparent arguments of knowledge. For any function \(f\) (computable in polynomial time) and any (polynomially-sized) \(y\), a STARK can be used to prove the knowledge of some \(x\) such that \(f(x)=y\). Remarkably, the size of a STARK proof is \(O(\mathit{poly}(\kappa))\). At a very high level, a STARK proof is produced as follows: (1) the computation of \(f(x)\) is unfolded on an execution trace; (2) the execution trace is (RS) over-sampled for error amplification; (3) the correct computation of \(f\) is expressed as a set of algebraic constraints over the trace symbols; (4) the trace symbols are organized in a Merkle tree [71]; (5) the tree's root is used as a seed to pseudo-randomly sample the trace symbols. The resulting collection of Merkle proofs proves that, for some known (but not revealed) \(x\), \(f(x)\neq y\) only with cryptographically low probability (negligible in \(\kappa\)). STARKs are non-interactive, require no trusted setup (they are transparent), and their security reduces to that of cryptographic hashes in the Random Oracle Model (ROM) [13].
### Implementation
**Provably correct encoding.** At its core, DARE-Stark uses STARKs to attest the correct RS encoding of values. For every \(i\in[1,n]\), we define \(\mathsf{shard}_{i}(\cdot)\) by
\[\mathsf{shard}_{i}(v\in\mathsf{Value})=\begin{cases}\big{(}\mathsf{hash}(v), \mathsf{encode}_{i}(v)\big{)},&\text{if and only if $\mathsf{valid}(v)=\mathit{true}$}\\ \bot,&\text{otherwise,}\end{cases} \tag{1}\]
where \(\mathsf{encode}_{i}(v)\) represents the \(i\)-th RS symbol obtained from \(\mathsf{encode}(v)\) (see §3). We use \(\mathsf{proof}_{i}(v)\) to denote the STARK proving the correct computation of \(\mathsf{shard}_{i}(v)\). The design and security of DARE-Stark rests on the following theorem.
**Theorem 3**.: Let \(i_{1},\ldots,i_{t+1}\) be distinct indices in \([1,n]\). Let \(h\) be a hash, let \(s_{1},\ldots,s_{t+1}\) be RS symbols, let \(\mathit{stark}_{1},\ldots,\mathit{stark}_{t+1}\) be STARK proofs such that, for every \(k\in[1,t+1]\), \(\mathit{stark}_{k}\) proves knowledge of some (undisclosed) \(v_{k}\) such that \(\mathsf{shard}_{i_{k}}(v_{k})=(h,s_{k})\). We have that
\[v=\mathsf{decode}(\{s_{1},\ldots,s_{t+1}\})\]
satisfies \(\mathsf{valid}(v)=\mathit{true}\) and \(\mathsf{hash}(v)=h\).
Proof.: For all \(k\), by the correctness of \(\mathit{stark}_{k}\) and Eq. 1, we have that (1) \(h=\mathsf{hash}(v_{k})\), (2) \(s_{k}=\mathsf{encode}_{i_{k}}(v_{k})\), and (3) \(\mathsf{valid}(v_{k})=\mathit{true}\). By the collision-resistance of \(\mathsf{hash}(\cdot)\), for all \(k,k^{\prime}\), we have \(v_{k}=v_{k^{\prime}}\). By the definition of \(\mathsf{encode}(\cdot)\) and \(\mathsf{decode}(\cdot)\), we then have
\[v=\mathsf{decode}(\{s_{1},\ldots,s_{t+1}\})=v_{1}=\ldots=v_{t+1},\]
which implies that \(\mathsf{valid}(v)=\mathit{true}\) and \(\mathsf{hash}(v)=h\).
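A sketch of the \(\mathsf{shard}_{i}\) map may help fix ideas. The STARK itself is omitted below: a toy chunker stands in for RS encoding, SHA-256 stands in for the generic hash, and all names are hypothetical.

```python
# Hedged sketch of shard_i from Eq. (1); real STARK proving/verification and
# Reed-Solomon coding are replaced by placeholders for illustration.
import hashlib

def valid(v: bytes) -> bool:
    return len(v) > 0  # application-specific validity predicate (assumed)

def encode_i(v: bytes, i: int, n: int) -> bytes:
    k = -(-len(v) // n)           # toy stand-in for the i-th RS symbol
    return v[i * k:(i + 1) * k]

def shard_i(v: bytes, i: int, n: int):
    if not valid(v):
        return None               # bottom
    return hashlib.sha256(v).digest(), encode_i(v, i, n)

# proof_i(v) would be a STARK attesting shard_i(v) = (h, s_i) without revealing v.
```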
**Algorithm description.** The pseudocode of DARE-Stark is presented in Algorithm3 from the perspective of a correct process \(P_{i}\). Similarly to DARE, DARE-Stark unfolds in three phases:
1. _Dispersal:_ Upon proposing a value \(v_{i}\) (line 9), \(P_{i}\) sends (line 14) to each process \(P_{k}\) (1) \((h_{k},s_{k})=\mathsf{shard}_{k}(v_{i})\) (computed at line 12), and (2) \(\mathit{stark}_{k}=\mathsf{proof}_{k}(v_{i})\) (computed at line 13). In doing so (see Theorem 3), \(P_{i}\) proves to \(P_{k}\) that \(h_{k}=\mathsf{hash}(v_{i})\) is the hash of a valid proposal, whose \(k\)-th RS symbol is \(\mathsf{encode}_{k}(v_{i})\). \(P_{k}\) checks \(\mathit{stark}_{k}\) against \((h_{k},s_{k})\) (line 15), stores \((s_{k},\mathit{stark}_{k})\) (line 16), and sends a partial signature for \(h_{k}\) back to \(P_{i}\) (line 17).
2. _Agreement:_ Having collected a threshold signature \(\Sigma\) for \(\mathsf{hash}(v_{i})\) (line 20), \(P_{i}\) proposes \((\mathsf{hash}(v_{i}),\Sigma)\) to Agreement (line 21).
3. _Retrieval:_ Upon deciding a hash \(h\) from Agreement (line 22), \(P_{i}\) broadcasts (if available) the \(i\)-th RS symbol for \(h\), along with the relevant proof (line 24). Upon receiving \(t+1\) symbols \(S\) for the same hash (line 28), \(P_{i}\) decides \(\mathsf{decode}(S)\) (line 30).
**Analysis.** Upon proposing a value \(v_{i}\) (line 9), a correct process \(P_{i}\) sends \(\mathsf{shard}_{k}(v_{i})\) and \(\mathsf{proof}_{k}(v_{i})\) to each process \(P_{k}\) (line 14). Checking \(\mathsf{proof}_{k}(v_{i})\) against \(\mathsf{shard}_{k}(v_{i})\) (line 15), \(P_{k}\) confirms having received the \(k\)-th RS symbol for \(v_{i}\) (note that this does not require the transmission of \(v_{i}\), just \(\mathsf{hash}(v_{i})\)). As \(2t+1\) processes are correct, \(P_{i}\) is guaranteed to eventually gather a \((2t+1)\)-threshold signature \(\Sigma\) for \(\mathsf{hash}(v_{i})\) (line 20). Upon doing so, \(P_{i}\) proposes \((\mathsf{hash}(v_{i}),\Sigma)\) to Agreement (line 21). Since every correct process eventually proposes a value to Agreement, every correct process eventually decides some hash \(h\) from Agreement (line 22). Because \(2t+1\) processes signed \(h\), at least \(t+1\) correct processes (without loss of generality, \(P_{1},\ldots,P_{t+1}\)) received a correctly encoded
RS symbol for \(h\). More precisely, for every \(k\in[1,t+1]\), \(P_{k}\) received and stored the \(k\)-th RS symbol encoded from the pre-image \(v\) of \(h\). Upon deciding from Agreement, each process \(P_{k}\) broadcasts its RS symbol, along with the relevant proof (line 24). Because at most \(t\) processes are faulty, no correct process receives \(t+1\) RS symbols pertaining to a hash other than \(h\). As \(P_{1},\ldots,P_{t+1}\) all broadcast their symbols and proofs, eventually every correct process collects \(t+1\) (provably correct) RS symbols \(S\) pertaining to \(h\) (line 28), and decides \(\mathsf{decode}(S)\) (line 30). By Theorem 3, every correct process eventually decides the same valid value \(v\) (with \(h=\mathsf{hash}(v)\)).
Concerning bit complexity, throughout an execution of DARE-Stark, a correct process engages once in Agreement (which exchanges \(O(n^{2}\kappa)\) bits in total) and sends: (1) \(n\) dispersal messages, each of size \(O(\frac{L}{n}+\mathit{poly}(\kappa))\), (2) \(n\) ack messages, each of size \(O(\kappa)\), and (3) \(n\) retrieve messages, each of size \(O(\frac{L}{n}+\mathit{poly}(\kappa))\). Therefore, the bit complexity of DARE-Stark is \(O(nL+n^{2}\mathit{poly}(\kappa))\). As for the latency, it is \(O(n)\) (due to the linear latency of Agreement).
## 7 Concluding Remarks
This paper introduces DARE (Disperse, Agree, REtrieve), the first partially synchronous Byzantine agreement algorithm on values of \(L\) bits with better than \(O(n^{2}L)\) bit complexity and sub-exponential latency. DARE achieves \(O(n^{1.5}L+n^{2.5}\kappa)\) bit complexity (\(\kappa\) is the security parameter) and optimal \(O(n)\) latency, which is an effective \(\sqrt{n}\) factor bit-improvement for \(L\geq n\kappa\) (typical in practice).
DARE achieves its complexity in two steps. First, DARE decomposes the problem of agreeing on large values (\(L\) bits) into three sub-problems: (1) value dispersal, (2) validated agreement on small values (\(O(\kappa)\)), and (3) value retrieval. (DARE effectively acts as an extension protocol for Byzantine agreement.) Second, DARE's novel dispersal algorithm solves the main challenge, value dispersal, using only \(O(n^{1.5}L)\) bits and linear latency.
Moreover, we prove that the \(\Omega(nL+n^{2})\) lower bound is near-tight by matching it near-optimally with DARE-Stark, a modified version of DARE that uses STARK proofs to reach \(O(nL+n^{2}\mathit{poly}(\kappa))\) bits while maintaining optimal \(O(n)\) latency. We hope DARE-Stark motivates future research into more efficient STARK schemes, which currently have large hidden constants that limit their practical use.
|
2310.03806 | Temporal Properties of the Compressible Magnetohydrodynamic Turbulence | The temporal property of the compressible magneto-hydrodynamic (MHD)
turbulence remains a fundamental unsolved question. Recent studies based on the
spatial-temporal analysis in the global frame of reference suggest that the
majority of fluctuation power in turbulence does not follow any of the MHD wave
dispersion relations but has very low temporal frequency with finite
wavenumbers. Here, we demonstrate that the Lorentzian broadening of the
dispersion relations of the three MHD modes where the nonlinear effects act
like the damping of a harmonic oscillator can explain many salient features of
frequency spectra for all MHD modes. The low frequency fluctuations are
dominated by modes with the low parallel wavenumbers that have been broadened
by the nonlinear processes. The Lorentzian broadening widths of the three MHD
modes exhibit scaling relations to the global frame wavenumbers and are
intrinsically related to energy cascade of each mode. Our results provide a new
window to investigate the temporal properties of turbulence which offers
insights for building a comprehensive understanding of the compressible MHD
turbulence. | Ka Ho Yuen, Hui Li, Huirong Yan | 2023-10-05T18:00:13Z | http://arxiv.org/abs/2310.03806v1 | # Temporal Properties of the Compressible Magnetohydrodynamic Turbulence
###### Abstract
The temporal property of the compressible magneto-hydrodynamic (MHD) turbulence remains a fundamental unsolved question. Recent studies based on the spatial-temporal analysis in the global frame of reference suggest that the majority of fluctuation power in turbulence does not follow any of the MHD wave dispersion relations but has very low temporal frequency with finite wavenumbers. Here, we demonstrate that the Lorentzian broadening of the dispersion relations of the three MHD modes where the nonlinear effects act like the damping of a harmonic oscillator can explain many salient features of frequency spectra for all MHD modes. The low frequency fluctuations are dominated by modes with the low parallel wavenumbers that have been broadened by the nonlinear processes. The Lorentzian broadening widths of the three MHD modes exhibit scaling relations to the global frame wavenumbers and are intrinsically related to energy cascade of each mode. Our results provide a new window to investigate the temporal properties of turbulence which offers insights for building a comprehensive understanding of the compressible MHD turbulence.
## Main
There has been a long history in studying the temporal properties of MHD turbulence [1, 2, 3], particularly the origin of low frequency temporal fluctuations [4]. These low-frequency fluctuations have implications for several problems in both space physics and astrophysics [5], including the heating of the solar corona [6], the low-frequency "\(1/f\) noise" in the solar wind [7, 8], the formation and evolution of stars and molecular clouds in the interstellar medium [9], as well as the propagation and acceleration of cosmic rays [10, 11, 12]. Some of the earliest theoretical models came from the "2D plus slab" model in the nearly incompressible magnetohydrodynamics [13, 14], suggesting that a perpendicular cascade (i.e., the 2D fluctuations with global frame parallel wavenumber \(k_{\parallel}=0\)[1]) could generate nearly zero-frequency fluctuations. Alternatively, the _non-resonant_ three-wave interaction in strong Alfvenic turbulence could generate MHD fluctuations at \(k_{\parallel}\approx 0\) and \(\omega\approx 0\)[18, 19]. Different views have fueled the debate whether to treat these low frequency fluctuations as waves or nonlinear structures [20]. Several physical pictures were put forth to interpret the low-frequency fluctuations such as those from damped harmonic oscillators [21, 22], sweeping modes [23, 20], magnetosonic modes [24] and also transition from weak to strong turbulence [25].
One interesting approach is to perform spatio-temporal analysis on simulated turbulence fluctuations in order to extract their properties [7, 8, 21, 23, 24, 25, 26, 27, 28, 29] and gain unique insights on how turbulence evolves in space and time. In particular, recent numerical studies [30, 31] have quantified the spatio-temporal distribution of velocity, magnetic field and density variations, showing that they not only deviate from the simple dispersion relations for compressible MHD modes but also hold a dominant fraction in low temporal frequencies, which is also observed from satellite observations [32, 33, 34, 35].
In this paper, we explain quantitatively how the ubiquitous low frequency \(\omega\) fluctuations are physically generated in magnetized turbulence, including all the compressible modes via the mode analysis [36] in the global frame of reference [19]. We propose a new broadened Lorentzian profile that can fit the simulation results. This profile allows us to quantify the contributions by different mode groups, enabling a better understanding of the frequency properties. We also discuss the implications of this new model for the temporal behavior of MHD turbulence.
## A broadened Lorentzian model
In the linear wave theory [36, 37], small amplitude turbulence fluctuations can be viewed as simple harmonic oscillators with natural frequencies corresponding to the response frequencies of the three MHD _waves_ (see Eq. 16). The nonlinear terms \((\mathbf{v}\cdot\nabla\mathbf{v},\ \delta\mathbf{B}\cdot\nabla\delta\mathbf{B})\) can be modeled as damping terms in the equation of motion [8]. In this scenario, the turbulence system can be seen as a collection of damped harmonic oscillators (resembling an oscillating Langevin antenna in the case of incompressible Alfvenic turbulence [22]). By modeling the nonlinear term in the equation of motion as \(\omega_{nl}\mathbf{v}\), with the exact functional form of \(\omega_{nl}(\mathbf{k})\) to be discussed in detail, the spatial-temporal energy distribution function for a selected global frame wavevector \(\mathbf{k}=(k_{\parallel},k_{\perp})\) and a MHD mode is a broadened Lorentzian distribution (see Supplementary Material for the derivation):
\[E(\mathbf{k},\omega)=\int dt\,e^{i\omega t}E(\mathbf{k},t)\propto\frac{\omega_{nl}^{2}\omega_{\mathrm{wave}}}{(\omega^{2}-\omega_{\mathrm{wave}}^{2})^{2}+\omega_{nl}^{2}(\omega+v_{A}|\mathbf{k}|\mu)^{2}} \tag{1}\]
where \(\omega_{\mathrm{wave}}\) is the wave frequency of the individual mode (see Eq. 16), \(v_{A}\) is the Alfven speed, and \(\mu=\hat{\mathbf{k}}\cdot\hat{\mathbf{B}}\). Consequently, the dispersion relations of the three MHD modes no longer follow the linear form (Eq. 16), but are modulated by the nonlinear term (see Supplementary Material).
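As a sketch of how \(\omega_{nl}\) can be extracted in practice, the snippet below fits Eq. 1 to a synthetic \(E(\omega)\) curve with SciPy, taking an Alfvén mode so that \(v_{A}|\mathbf{k}|\mu=\omega_{\mathrm{wave}}\); the numbers are illustrative and are not taken from the simulations.

```python
# Minimal sketch: fit the broadened Lorentzian of Eq. (1) to a synthetic
# E(omega) curve (Alfven mode assumed, so v_A|k|mu = omega_wave; toy numbers).
import numpy as np
from scipy.optimize import curve_fit

def broadened_lorentzian(w, w_nl, amp, w_wave=3.0):
    return amp * w_nl**2 * w_wave / (
        (w**2 - w_wave**2)**2 + w_nl**2 * (w + w_wave)**2)

w = np.linspace(0.01, 10.0, 500)                  # omega in units of tau_A^-1
rng = np.random.default_rng(0)
data = broadened_lorentzian(w, 1.5, 1.0) * rng.lognormal(0.0, 0.1, w.size)

popt, _ = curve_fit(broadened_lorentzian, w, data, p0=(1.0, 1.0))
print(f"fitted omega_nl = {popt[0]:.2f} (true value 1.5)")
```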
Fig. 1 shows the \(E(\omega)-\omega\) diagram, with each curve _normalized_ by its own spectral power for a given wavevector (see Eq. 17). Three MHD modes at selected values of \(\mathbf{k}=(k_{\parallel},k_{\perp})\) and different plasma \(\beta\) (ratio of thermal to magnetic pressure; see Tab. 2 for the definition of symbols) are plotted, where the fluctuations are separated according to the mode decomposition algorithm [36]. The mode decomposition algorithm assumes negligible contributions from the nonlinear terms, which is not true particularly when \(k_{\parallel}\) is small, decreasing the accuracy of the mode classification (see Supplementary Material for a discussion of the caveats of the mode decomposition method in strongly nonlinear systems). In Fig. 1, we intentionally include two cases for the same \(\beta\) to illustrate that higher turbulence levels (indicated by higher sonic Mach number \(M_{s}\) and Alfvénic Mach number \(M_{A}\)) produce more broadened Lorentzians (first two rows of Fig. 1). All \(E(\omega)\) curves are fitted by Eq. 1, where we have made \(\omega_{nl}(\mathbf{k})\) a fitting variable for different wave modes. Furthermore, the peaks of the Lorentzian profiles correspond to the wave eigenfrequencies of the different wave modes calculated from the plasma \(\beta\) and wavevector \(\mathbf{k}\) (Eq. 10). The broadening behavior is generally consistent with previous literature [8, 22, 38] and numerical simulations [39, 30, 31]. The nonlinear broadening by \(\omega_{nl}(\mathbf{k})\) for each peak has a strong effect, extending the frequency distribution to both very low and high frequency limits. Notice that the analysis is performed in the global frame; how the local frame fluctuations are mapped into the global frame fluctuations measured both numerically [8, 28, 30, 7] and observationally [34, 35] will be addressed in a later section.
## Non-zero low frequency fluctuations are produced by nonlinear interactions
Fig. 1 also highlights another important aspect, namely the origin of the low \(\omega\) fluctuations. For instance, it can be seen that the \(k_{\parallel}=0\) modes in all cases have an increasing power towards low \(\omega\). To quantify their contributions more clearly, Fig. 2 shows the frequency power spectra of the run A1 (see Tab. 1) where we extract only the Alfvén mode powers. For frequency significantly less than \(\tau_{A}^{-1}=v_{A}/L_{box}\), there are two types of contributions: those with \(k_{\parallel}>0\) and those with \(k_{\parallel}=0\). Note that we have excluded the contributions by the \(\mathbf{k}\) modes within the injection region. For the \(k_{\parallel}>0\) modes, their contribution at low frequency, e.g., \(\tau_{A}\omega=0.1\), is very small, and they are mainly from the low frequency wing of the Lorentzian broadened fluctuations. Modes with \(k_{\parallel}=0\) (e.g., \(k_{\perp}=3,4\)) dominate the low \(\omega\) power. This implies that these finite frequency fluctuations above \(\omega=0\) are a result of the nonlinear interaction since their Alfvén wave frequencies are zero when \(k_{\parallel}=0\). The broadening of \(E(\omega)\) at \(k_{\parallel}=0\) is always significant as long as \(\omega_{nl}\) is non-zero. The non-stationary nature (i.e. \(E(\omega)\neq 0\) when \(\omega\neq 0\)) of the \(k_{\parallel}=0\) mode is consistent with the earlier literature that suggests the nonlinear terms (\(v\cdot\nabla v\), \(\delta B\cdot\nabla\delta B\)) require the \(k_{\parallel}=0\) mode for efficient energy transfer and the purely perpendicular 2D cascade [8, 16, 40, 18].
For finite \(k_{\parallel}\) Alfvénic fluctuations, one can further quantify the ratio of the nonlinear component versus the linear (wave) component. Using the incompressible Alfvénic fluctuations as an example, the analytical model (Eq. 1) fits the simulations very well. For the ease of discussion, we will define the wave-like component as the integrated power around the linear wave frequency \(\omega_{\mathrm{wave}}(\mathbf{k})\) within \(\pm\Delta\omega(\mathbf{k})\), where \(\Delta\omega(\mathbf{k})\) is the half-width. We further denote the fluctuations below \(\omega_{\mathrm{wave}}-\Delta\omega\) and above \(\omega_{\mathrm{wave}}+\Delta\omega\) as _low_ and _high_ frequency fluctuations, respectively (Panel (a) of Fig. 3). For quantitative analysis, we consider two choices of \(\Delta\omega(\mathbf{k})=0.5\) and \(1\) \(\omega_{nl}(\mathbf{k})\). Using simulation A1, we present the relative fraction of the wave-like, low and high frequency fluctuations for different combinations of non-zero \((k_{\parallel},k_{\perp})\) in Panel (b) of Fig. 3. It can be seen that a significant fraction resides in the low and high frequency ranges, and the wave-like fraction increases when using the larger \(\Delta\omega\). Integrating over all \(\mathbf{k}\) modes (including \(k_{\parallel}=0\) but excluding the injection scale), we plot the relative contributions from the low, wave-like, high, and \(k_{\parallel}=0\) components, respectively, in Panel (c), again for the two choices of \(\Delta\omega\). Note that the relative fraction of the \(k_{\parallel}=0\) modes is a strong function of the minimum frequency used in the analysis (which is chosen to be \(0.1\omega_{A}\) in this plot). The fraction contributed by the \(k_{\parallel}=0\) modes is expected to increase if a smaller minimum \(\omega\), i.e. a longer time series, is employed in the analysis.
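The partition can be reproduced on the model itself. The snippet below integrates Eq. 1 over the three bands for one illustrative \((\omega_{\mathrm{wave}},\omega_{nl})\) pair, again assuming an Alfvén mode; it is a sketch, not the analysis pipeline used for Fig. 3.

```python
# Sketch of the low / wave-like / high partition of Eq. (1) for one wavevector,
# with half-width Delta_omega = omega_nl (illustrative values; Alfven mode).
import numpy as np

w_wave, w_nl = 3.0, 1.5
w = np.linspace(0.0, 50.0, 50001)
E = w_nl**2 * w_wave / ((w**2 - w_wave**2)**2 + w_nl**2 * (w + w_wave)**2)

dw = w[1] - w[0]
total = E.sum() * dw
low = E[w < w_wave - w_nl].sum() * dw / total
wave = E[np.abs(w - w_wave) <= w_nl].sum() * dw / total
high = 1.0 - low - wave
print(f"low {low:.2f}, wave-like {wave:.2f}, high {high:.2f}")
```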
Figure 1: Temporal power \(E(\omega)\) vs \(\omega\) of velocity fluctuations showing how MHD turbulence with different \(\beta\) (upper row: \(\beta\ll 1\), simulation A1; middle row \(\beta\ll 1\), A2; lower row, \(\beta\gg 1\), simulation A0; Blue: Alfvén, Red: Slow and Black: Fast mode, see Table 1) produces significant fraction of low frequency fluctuations from Lorentzian broadening (the scattered points in each panel, c.f. Eq.13) for three different regimes of \(k\). From the left: \(k_{\parallel}=0\), \(k_{\parallel}<k_{\perp}\), \(k_{\parallel}\geq k_{\perp}\). Each curve is normalized by its own fluctuation power for a particular choice of wavenumber \((k_{\parallel},k_{\perp})\) and fitted with Eq.1 (dash lines in each panel). The x-axis of each panel is normalized with respect to the Alfvénic frequency (\(\tau_{A}^{-1}\)) of the corresponding simulation.
The strength of the Lorentzian broadening is determined by the ratio of \(\omega_{nl}\omega_{A}\) and \(\omega_{wave}^{2}\) (see Eq. 1). When \(k_{\parallel}L_{inj}\gg 1\), the wave propagation frequency dominates over the nonlinear feature, resulting in a sharper peak in the \(E(\omega)\) distribution (e.g., middle row of Fig. 1). Meanwhile, \(E(\omega\to 0)\) is a non-zero constant when \(k_{\parallel}>0\), giving
\[\frac{E(\mathbf{k},\omega\to 0)}{E(\mathbf{k},\omega=v_{A}k_{\parallel})}=\frac{4\omega_{nl}^{2}/\omega_{A}^{2}}{1+\omega_{nl}^{2}/\omega_{A}^{2}}=\frac{4\chi^{2}}{1+\chi^{2}}, \tag{2}\]
where \(\chi=\omega_{nl}(\mathbf{k})/\omega_{A}(\mathbf{k})\). Notice that Eq. 2 can be larger than 1 when \(\omega_{nl}\gg\omega_{wave}\), indicating that the low frequency fluctuations could have a higher amplitude even compared to that at the wave eigenfrequencies. To verify Eq. 2, we compute the numerical value of \(E(\mathbf{k},\omega\to 0)/E(\mathbf{k},\omega=v_{A}k_{\parallel})\) in simulation A1 and compare them to the predicted values using Eq. 2 in Panel (d) of Fig. 3, where we have used the \(E(\omega)\) curves from simulation A1 with \(2<|\mathbf{k}|<10\). We observe a reasonable agreement between the numerical data and the theoretical prediction (Eq. 2), though some data points are \(\sim 2\) times larger than the theoretical values. This deviation is mainly due to the fact that the mode decomposition [36] is increasingly inaccurate as \(\chi\) increases.
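Eq. 2 can also be checked directly against the model: evaluating Eq. 1 at \(\omega\to 0\) and at \(\omega=v_{A}k_{\parallel}\) for an Alfvén mode (so that \(v_{A}|\mathbf{k}|\mu=\omega_{A}\)) reproduces \(4\chi^{2}/(1+\chi^{2})\). A minimal check:

```python
# Numeric verification of Eq. (2) from Eq. (1), assuming an Alfven mode so
# that v_A|k|mu = omega_A = omega_wave (pure Python, illustrative values).
def E_model(w, w_wave, w_nl):
    return w_nl**2 * w_wave / ((w**2 - w_wave**2)**2 + w_nl**2 * (w + w_wave)**2)

for chi in (0.3, 1.0, 3.0):
    w_wave, w_nl = 1.0, chi          # chi = omega_nl / omega_A
    ratio = E_model(1e-9, w_wave, w_nl) / E_model(w_wave, w_wave, w_nl)
    assert abs(ratio - 4 * chi**2 / (1 + chi**2)) < 1e-6
```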
### Lorentzian broadening of the compressible modes and their low-frequency contributions
The Lorentzian broadening exists in all three MHD modes but the broadening strength and behavior are different (Fig.1). For the case of \(k_{\parallel}=0\) modes, the low frequency fluctuations are mostly contributed by both Alfven and slow modes, where the exact ratio is determined by the relative energy fraction of the two modes (left column of Fig. 1). As discussed previously, the broadening width at \(k_{\parallel}=0\) only depends on the value of \(\omega_{nl}\). The similar slopes in \(\omega\) for both Alfven and slow modes' temporal power spectrum \(E(\omega)\) suggest that their \(\omega_{nl}\) is similar in magnitude (See also Fig.4).
For modes with \(k_{\parallel}>0\), slow mode contributes more low-frequency fluctuations than the other two modes due to its lower \(\omega_{wave}\) (middle and right columns of Fig. 1). This behavior is more amplified in low \(\beta\) where the slow wave speed is significantly smaller than the Alfven speed for the same \(\mathbf{k}\). Different from the case of \(k_{\parallel}=0\), the relative fraction of low frequency fluctuations for \(k_{\parallel}>0\) is governed by \(\chi\) parameter (Eq.2), which is the largest for slow modes. Therefore the Lorentzian broadening is effectively stronger for slow modes, albeit the mode fractions between slow and other two modes have to be taken into account [29]. In contrast, fast mode plays a negligible role in low-frequency fluctuations because its wave speed is significantly higher than those of Alfven and slow modes, and is always non-zero unless \(|\mathbf{k}|=0\).
## Scalings of nonlinear frequency \(\omega_{nl}\) for different modes
To quantify the nonlinear broadening for each MHD mode and its dependence on turbulence properties, we use Eq. 1 to extract \(\omega_{nl}\) from the Lorentzian profiles. Fig. 4 shows the nonlinear frequencies of three subsonic, sub-Alfvénic simulations with various \(\beta\) as a function of \(k_{\perp}\) or \(k=|\mathbf{k}|\). The effect of the local and global reference frame [41] is _less_ significant when applying the mode decomposition technique [36] since \(M_{A}\) is small for our simulations (Tab. 1). In the case of high \(\beta\) (left panel of Fig. 4), both Alfvén and slow modes have their \(\omega_{nl}\) scale in the same way, \(\propto k_{\perp}^{2/3}\). Fast modes cascade isotropically [36], and therefore we plot the \(\omega_{nl}\) for fast modes along \(k\). We find that a scaling of \(\omega_{nl}\propto k^{1}\) best fits our data for high and intermediate \(\beta\), suggesting a cascade of \(E_{k}\propto k^{-2}\) for these two cases. For low \(\beta\) (right panel of Fig. 4), the data points are too scattered to be conclusive, although an apparent trend line of \(k^{3/4}\) is observed over a limited range of \(k\).

Figure 2: \(E(\omega)-\omega\) curves for the frequency power spectra of the run A1 (see Tab. 1) where only Alfvénic modes are retained for different choices of \(\mathbf{k}\). The vertical dashed lines denote the wave eigen-frequencies.
How do we understand the scaling trend in Fig. 4? It is commonly assumed in the case of strong Alfvénic turbulence that \(\omega_{nl}\sim k_{\perp}\delta v_{k}\) [18, 42]. However, the nonlinear time \(\tau_{nl}\sim\omega_{nl}^{-1}\) does not necessarily correspond to the cascade time. The cascade time is actually a function of \(\chi(\mathbf{k})\), as noted in earlier literature [3, 36, 42, 43, 44]. A rigorous closure calculation by Tripathi et al. (in prep.) shows that the cascade rate (\(\omega_{r}\)) has the following dependence on \(\chi\):
\[\omega_{r}\approx\frac{\omega_{nl}^{2}}{\omega_{\mathrm{wave}}(1+\chi)} \tag{3}\]
In particular, for extreme cases of \(\chi\):
\[\omega_{r}\sim\frac{\omega_{nl}^{2}}{\omega_{\mathrm{wave}}}\quad(\chi\ll 1),\qquad\omega_{r}\sim\omega_{nl}\quad(\chi\gg 1) \tag{4}\]
where the first expression is the Iroshnikov-Kraichnan cascade rate [3, 43] and the latter is commonly adopted for strong turbulence cascade [18]. The constancy of the energy cascade rate \(\delta v_{k}^{2}\omega_{r}\) allows for a quick quantification of the relation between \(\omega_{nl}\) and spectral power \(E_{k}\) as functions of \(k\). Writing:
\[\mathrm{const}\approx\begin{cases}\frac{\delta v_{k}^{2}\omega_{nl}^{2}}{ \omega_{\mathrm{wave}}}&(\chi\ll 1)\\ \delta v_{k}^{2}\omega_{nl}&(\chi\gg 1)\end{cases} \tag{5}\]
using \(E_{k}\sim\delta v_{k}^{2}/k\) gives:
\[E_{k}\approx\begin{cases}\omega_{nl}^{-2}&(\chi\ll 1)\\ k^{-1}\omega_{nl}^{-1}&(\chi\gg 1)\end{cases} \tag{6}\]
In the incompressible Alfvén and high-\(\beta\) slow-mode limit, \(E_{k}\propto k_{\perp}^{-5/3}\), which implies \(\omega_{nl}\propto k_{\perp}^{2/3}\); this is observed in Fig. 4 for both Alfvén and slow modes for all choices of \(\beta\). For fast modes, the Iroshnikov-Kraichnan spectrum [3, 43, 44] (\(E_{k}\propto k^{-3/2}\)) suggests \(\omega_{nl}\propto k^{3/4}\), which is only observed in the low-\(\beta\) case. For the measured scaling \(\omega_{nl}\propto k^{1}\) in the left and middle panels of Fig. 4, Eq. 6 gives \(E_{k}\propto k^{-2}\), which is commonly proposed as the alternative scaling of fast modes [29].
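These scaling statements follow mechanically from Eq. 6 and can be verified symbolically; the short check below is our own, using SymPy, and confirms both limits.

```python
# Symbolic check of the scalings implied by Eq. (6): E_k ~ k^(-5/3) with
# chi >> 1 gives omega_nl ~ k^(2/3); E_k ~ k^(-3/2) with chi << 1 gives k^(3/4).
import sympy as sp

k = sp.symbols("k", positive=True)
w_nl_strong = 1 / (k * k**sp.Rational(-5, 3))            # omega_nl = 1/(k E_k)
w_nl_weak = (k**sp.Rational(-3, 2))**sp.Rational(-1, 2)  # omega_nl = E_k^(-1/2)
assert sp.simplify(w_nl_strong - k**sp.Rational(2, 3)) == 0
assert sp.simplify(w_nl_weak - k**sp.Rational(3, 4)) == 0
```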
## Discussion
The Lorentzian broadening effect is intrinsically caused by the nonlinear effects in turbulence. However, the results are obtained in the global frame with respect to the mean background magnetic field direction. In the case of incompressible Alfvenic turbulence, many MHD turbulence studies [18, 42, 45] have emphasized that the nonlinear fluctuations are generated perpendicular to the local magnetic fields, characterized by the local wavevector \((k_{\parallel,L},k_{\perp,L})\). These fluctuations, when viewed in a global frame, could lead to a _broadening_ effect in \(\omega\). This can be understood using the following picture: the spectral power of a particular global wavevector \((k_{\parallel},k_{\perp})\) measured in the global frame could collect many eddies of different sizes in the local frame [19]. These eddies will contribute different spectral weights, with particularly higher weights for the local eddies that have dimensions \((k_{\parallel,L}^{-1},k_{\perp,L}^{-1})\) close to \((k_{\parallel}^{-1},k_{\perp}^{-1})\). Each of these local wavevectors is projected onto the global \(k_{\parallel}\) axis, which gives a measured \(\omega\) as \(\sim k_{\parallel,L}v_{A}\), along with its contribution to the \(E(\omega)\) spectrum. The variation of \(k_{\parallel}\) due to the projections of the local wavevectors is then translated to the \(\omega\) space, resulting in a broadening of the \(E(\omega)\) spectrum.
The broadening width can be approximated as follows: The wavevector difference between the local and global frames in the case of Alfvenic turbulence is strongly correlated to the Alfvenic Mach number \(M_{A}\)[41], i.e., \(\delta k_{\parallel}/k_{\parallel}\sim M_{A}\sim\delta v/v_{A}\). As a result, the average dispersion of \(\omega\) is \(\delta\omega\sim\delta k_{\parallel}v_{A}\sim k_{\parallel}\delta v\), centered at the global frame \(\omega\sim k_{\parallel}v_{A}\). In other words, the frame transformation naturally generates a broadening in \(\omega\) as long as \(M_{A}\) is non-zero. In comparison, the dispersion relation of fast modes has only weak dependence on the wave vector direction. The projection effect for fast modes is therefore marginal, accounting partially for their weaker nonlinear behavior.
As a remark, the concept of critical balance [18] is also closely connected to the nature of non-zero low-frequency fluctuations. It has been argued that the nonlinear timescale of the incompressible Alfvén mode is approximately equal to its wave propagation timescale, i.e., \(\chi=1\), forming the basis of modern MHD turbulence theory [18, 36], although dissenting views remain [25, 46, 47]. Refinements of the critical balance theory have been proposed in the incompressible limit [48]. Understanding the low-frequency fluctuations provides valuable insights into the nonlinear timescales, as there is a designated anisotropy scaling (\(k_{\parallel}\propto k_{\perp}^{2/3}\) in the case of Alfvén modes) related to the critical balance. However, whether such a balance exists for the compressible modes is still an unresolved question [44].
Figure 3: (a) The definition of low and high frequency fluctuations with respect to the wave peak given a particular half-width. In this example the half-width is \(\omega_{nl}\). (b) Relative energy fraction of Alfvén modes classified by frequencies in simulation A1. (c) Relative energy fraction for integrated temporal power, where modes with global \(k_{\parallel}=0\) are also considered. (d) A comparison between the measured \(\frac{E(\mathbf{k},\omega\to 0)}{E(\mathbf{k},\omega=v_{A}k_{\parallel})}\) and the theoretically predicted value. The red line denotes the equality in Eq. 2.
Figure 4: \(\omega_{nl}\) (in units of \(\tau_{A}^{-1}\)) vs. \(k_{\perp}\) (with \(k_{\parallel}=3\)) or \(|\mathbf{k}|\) for three different modes (blue: Alfvén, red: slow and black: fast mode). The red dash line in each panel corresponds to \(\omega_{nl}\propto k_{\perp}^{2/3}\), while the black line corresponds to \(\omega_{nl}\propto k^{1}\) for the left two panels and \(k^{3/4}\) for the right panel. The calculations are based on runs A0, A3 and A1, respectively.
## Data Availability
The data used in this work are listed in Tab.1. Data and the input file will be available upon request.
## Code Availability
### Numerical simulations
The numerical simulations are performed with Athena++ [49]. We summarize the simulations in Tab. 1. Our data are time series of three-dimensional, triply periodic, isothermal MHD simulations with continuous force driving via _direct spectral injection_ (see Footnote 2) unless specified. We run our simulations for at least 10 sound crossing times (\(\tau_{s}=L_{box}/c_{s}\)) and take snapshots at \(\Delta\tau=\tau_{s}/100\) to ensure that the time-axis sampling satisfies the condition:

\[\Delta\tau_{required}<\frac{L_{box}}{v_{fastest}} \tag{7}\]

Footnote 2: We did not employ the Ornstein–Uhlenbeck forcing because we would like to have more control over the injected values of \(\mathbf{k}\) and the injection frequency \(\omega_{inj}\).
where \(L_{box}\) is the size of the simulation domain and \(v_{fastest}\) is the fastest speed in the numerical simulations. The typical parameters of our simulations are listed in Tab. 1. The injection is performed so that we only have eddies at scales \(L_{inj}/L_{box}\geq 1/2\), which corresponds to \(|\mathbf{k}|\leq 2\). All simulations are driven solenoidally. All numerical simulations are truncated in Fourier space to \(128^{3}\) regardless of their original size to save computational resources. This does not change the statistics of the spatio-temporal spectrum for \(|\mathbf{k}|<128\).
### Analysis
Analyses are performed in _Julia_; the packages are available upon request.
|
2304.01156 | Quasiparticle Generation-Recombination Noise in the Limit of Low
Detector Volume | We have measured the quasiparticle generation-recombination (GR) noise in
aluminium lumped element kinetic inductors with a wide range of detector
volumes at various temperatures. The basic detector consists of meandering
inductor and interdigitated capacitor fingers. The inductor volume is varied
from 2 to 153 {\mu}m^{3} by changing the inductor width and length to maintain
a constant inductance. We started with measuring the power spectrum density
(PSD) of the detectors frequency noise which is a function of GR noise and we
clearly observed the spectrum roll off at 10 kHz which corresponds to the
quasiparticle lifetime. Using data from a temperature sweep of the resonator
frequency we convert the frequency fluctuation to quasiparticle fluctuation and
observe its strong dependence on detector volume: detectors with smaller volume
display less quasiparticle noise amplitude. Meanwhile we observe a saturated
quasiparticle density at low temperature from all detectors as the
quasiparticle life time {\tau}qp approaches a constant value at low
temperature. | J. Li, P. S. Barry, T. Cecil, C. L. Chang, K. Dibert, R. Gualtieri, M. Lisovenko, Z. Pan, V. Yefremenko, G. Wang, J. Zhang | 2023-04-03T17:23:09Z | http://arxiv.org/abs/2304.01156v1 | # Quasiparticle Generation-Recombination Noise in the Limit of Low Detector Volume
###### Abstract
We have measured the quasiparticle generation-recombination (GR) noise in aluminium lumped element kinetic inductors with a wide range of detector volumes at various temperatures. The basic detector consists of meandering inductor and interdigitated capacitor fingers. The inductor volume is varied from 2 to 153 \(\mathrm{\SIUnitSymbolMicro m}^{3}\) by changing the inductor width and length to maintain a constant inductance. We started with measuring the power spectrum density (PSD) of the detectors frequency noise which is a function of GR noise and we clearly observed the spectrum roll off at 10 kHz which corresponds to the quasiparticle lifetime. Using data from a temperature sweep of the resonator frequency we convert the frequency fluctuation to quasiparticle fluctuation and observe its strong dependence on detector volume: detectors with smaller volume display less quasiparticle noise amplitude. Meanwhile we observe a saturated quasiparticle density at low temperature from all detectors as the quasiparticle life time \(\tau_{qp}\) approaches a constant value at low temperature.
mKIDs, quasiparticle GR noise, detector volume, residual quasiparticle density
## I Introduction
With over a decade of development effort, arrays of microwave kinetic inductance detectors (mKIDs) are found in a broad range of applications that require high-fidelity measurement of low-energy signals [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. The fundamental limit to an mKID's sensitivity is the quasiparticle generation-recombination (GR) noise, which originates from the stochastic fluctuations in quasiparticle density, as shown in the cartoon in Fig. 1 (a). The measured GR noise level depends on the detector volume, and it has been observed that any residual quasiparticle density imposes a practical lower limit on the achievable detector sensitivity; this is particularly important for detectors operating under ultra-low levels of optical loading. With an array of resonators designed to probe these effects directly, we investigate the GR noise level by varying the detector volume over a wide range. Our results suggest strategies to control and mitigate GR noise in future large-format arrays of mKIDs.
We start by measuring the power spectral density of the frequency noise of an array of lumped LC superconducting resonators with stepped inductor volume. A resonator layout is shown in Fig. 2. The resonator is coupled to the readout transmission line by means of a coupling interdigital capacitor, which determines the resonator coupling quality factor \(Q_{c}\). It is designed with a low value so that the resonator ring-down time \(\tau_{r}\) is much shorter than the lifetime \(\tau_{qp}\) of the quasiparticles in the resonator. Resonator frequencies are stepped by changing the resonator capacitance. Fig. 1(b) shows the measured amplitude and phase of one resonator and their change in response to a change in quasiparticle number in the resonator. Noise from the quasiparticle fluctuations is measured with a standard homodyne detection setup and shown as red dots on the resonance circle in Fig. 1(c).
The power spectral density of quasiparticle fluctuations has a Lorentzian spectrum that is given by
\[S_{N}(\omega)=\frac{4N_{qp}\tau_{qp}}{1+(\omega\tau_{qp})^{2}} \tag{1}\]
where \(N_{qp}\) is the quasiparticle number in the detector and \(\tau_{qp}\) is the quasiparticle lifetime. To determine \(S_{N}(\omega)\) we measure the resonator frequency fluctuation \(S_{f}(\omega)\) with a standard homodyne measurement setup as described in reference [13]. \(S_{f}(\omega)\) is related to \(S_{N}(\omega)\) by
\[S_{f}(\omega)=S_{N}(\omega)\frac{(df/dN_{qp})^{2}}{1+(\omega\tau_{r})^{2}} \tag{2}\]
where \(df/dN_{qp}=df/(Vdn_{qp})\), \(V\) is the detector volume and \(n_{qp}\) is the quasiparticle density. \(\tau_{r}\) is the resonator ring-down time given by \(\tau_{r}=\frac{Q}{\pi f_{o}}\). The test resonator array resonances were measured at multiple temperatures (Fig. 3) and \(n_{qp}\) is calculated for each temperature according to
\[n_{qp}=2N_{o}\sqrt{2\pi k_{B}T\Delta}\text{exp}(-\Delta/k_{B}T) \tag{3}\]
which is valid for \(k_{B}T<\Delta\) [14]. \(N_{o}\) is the single-spin density of states at the Fermi level (\(1.72\times 10^{10}\,\mathrm{\mu m^{-3}\,eV^{-1}}\)), \(k_{B}\) is Boltzmann's constant, \(T\) is the sample temperature and \(\Delta\) is the energy gap of the superconductor. Plotting the percentage frequency shift \(\Delta f/f_{o}\) as a function of \(n_{qp}\) shows a linear relationship, and \(df/dn_{qp}\) is calculated from the slope of the fitted linear regression.
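A short sketch of this conversion step is given below: Eq. 3 maps temperature to \(n_{qp}\), and a linear fit of \(\Delta f/f_{o}\) versus \(n_{qp}\) yields \(df/dn_{qp}\). The gap value and slope are illustrative assumptions for thin-film Al, not our measured numbers.

```python
# Sketch of the responsivity extraction: n_qp(T) from Eq. (3), then a linear
# fit of df/f0 vs n_qp (synthetic shifts; Delta and slope are assumed values).
import numpy as np

k_B = 8.617e-5        # eV/K
N_0 = 1.72e10         # um^-3 eV^-1, single-spin DOS at the Fermi level
Delta = 1.8e-4        # eV, approximate gap for thin-film Al (assumption)

T = np.linspace(0.15, 0.27, 10)  # K
n_qp = 2 * N_0 * np.sqrt(2 * np.pi * k_B * T * Delta) * np.exp(-Delta / (k_B * T))

rng = np.random.default_rng(1)
df_over_f0 = -2.0e-7 * n_qp + 1e-9 * rng.standard_normal(T.size)  # toy data

slope, intercept = np.polyfit(n_qp, df_over_f0, 1)
print(f"df/dn_qp (per unit f0): {slope:.3e} um^3")
```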
## II Detector array design
In our design the overall size of the chip is 2.5 cm \(\times\) 1 cm (Fig. 2) and we have five pixels, with each pixel having an 'X-pol' resonator and a 'Y-pol' resonator. Within each pixel the 'X-pol' and 'Y-pol' resonators have the same inductance, while the 'Y-pol' capacitor has a slightly larger capacitance than the one for 'X-pol'. The inductor line width for each pixel is varied from \(0.2\) to \(1.8\,\mathrm{\mu m}\) in steps of \(0.4\,\mathrm{\mu m}\) across the five pixels. The average capacitances of the two resonators are stepped, and the pixel with an inductor line width of \(1.8\,\mathrm{\mu m}\) has the largest capacitance. With our designed configuration we expect five groups of resonances corresponding to the five pixels. Within each group we expect two resonances corresponding to 'X-pol' and 'Y-pol', with the 'Y-pol' on the lower frequency side. The interdigital capacitor arrangement plays an important role in the resonator placement, as it is the major parameter we use to set the resonators in the frequency domain. Details of the design parameter values can be found in Table I.
## III Fabrication
The detector arrays are fabricated on a \(2\)\({}^{\prime\prime}\) high intrinsic resistivity silicon wafer. To achieve the required \(400\,\mathrm{nm}\) line width resolution, the resonators are patterned using e-beam lithography with a resolution of \(10\,\mathrm{nm}\).
The fabrication process begins with vacuum baking the bare wafer at \(120\,\mathrm{\SIUnitSymbolMicro C}\) for three minutes to remove moisture. E-beam resist (PMMA 950 A4) is spun on the wafer at a speed of \(4000\,\mathrm{r}\mathrm{p}\mathrm{m}\) for 40 seconds and then baked at \(180\,\mathrm{\SIUnitSymbolMicro C}\) for three minutes afterward. The entire detector array is patterned in a single lithography layer with a dose of \(720\,\mathrm{\SIUnitSymbolMicro C}/\mathrm{cm}^{2}\). A ratio of 5:95 between optimal contrast and uniform clearing is applied in the proximity effect correction to maintain a uniform exposure across the array pattern. After patterning the wafer is developed with MIBK and IPA with ratio of 1:3 for 40 seconds, rinsed in IPA for 10 seconds and then rinsed in DI water. Following development a 20 seconds oxygen descum process is applied to remove leftover e-beam resist. A \(30\,\mathrm{nm}\)
Fig. 1: **a**, schematic of a superconductor in thermal equilibrium, which is a balance of generation (red arrow) and recombination (blue arrow) of quasiparticles. **b**, exaggerated amplitude (blue) and phase noise (green) induced by the resonant frequency shift from the quasiparticle number change. These correspond to the noise in the radial and tangential directions of the resonance circle in (c). Solid lines stand for the state before the quasiparticle noise and the dashed lines stand for the change as a result of the noise. **c**, example of a noise measurement with red dots showing 10,000 samples of the resonance frequency at the highest response point of the resonance circle. The oval shape of the noise envelope indicates that the phase noise is larger than the amplitude noise [12].
Fig. 3: Example of the resonance dip change for temperatures between \(14\,\mathrm{mK}\) (green) and \(270\,\mathrm{mK}\) (orange). The resonance dip shifts to lower frequencies at higher temperatures. The inset shows \(\Delta f/f_{o}=(f_{T}-f_{o})/f_{o}\) as a function of quasiparticle number (blue) along with a linear fit (orange).
Fig. 2: Images of the resonator array chip and components. **(a)** resonator array which has five pixels, each with two resonators named ‘X-pol’ and ‘Y-pol’ because of the orthogonal direction of the two inductors. **(b)** Zoom-in of one of the pixels. The left interdigital capacitor \(C_{x}\) is connected to the horizontal (X) S-shaped inductor and the right interdigital capacitor \(C_{y}\) is connected to the vertical (Y) S-shaped inductor. **(c)** Zoom-in of an interdigital capacitor \(C_{y}\). **(d)** Zoom-in of the S-shaped inductors. **(e)** the coupling capacitor that couples the detector to the readout line.
thick Al film is then deposited at a rate of \(0.17\,\mathrm{nm/second}\) via DC magnetron sputtering in Ar at a process pressure of \(3\,\mathrm{mTorr}\). The sputtering system has a base pressure of \(1.7\times 10^{-8}\,\mathrm{Torr}\) and the applied cathode voltage is \(300\,\mathrm{V}\). The final processing step is an overnight soak in 1165 Remover for lift-off.
## IV Test setup
Noise measurements of the detector array are carried out with a homodyne measurement setup, as illustrated in Fig. 4. Each resonator is excited with a microwave tone generated by a synthesizer near the resonance frequency. The output signal is amplified with a cryogenic high electron mobility transistor (HEMT) amplifier mounted on the \(2.7\,\mathrm{K}\) stage, followed by a room-temperature amplifier. The signal is then compared to the original signal using an IQ mixer. The output voltages I and Q of the IQ mixer carry the in-phase and quadrature amplitudes of the transmitted signal, respectively. When the carrier frequency is swept around the resonant frequency of the resonators, the output I and Q trace out a resonance circle as shown in Fig. 1**(c)**. For noise measurements the signal tone is fixed at a frequency just below the resonance frequency, where the noise response is highest, and the fluctuations \(\delta\)I(t) and \(\delta\)Q(t) are digitized over a \(0.5\,\mathrm{s}\) interval using a sample rate of \(F_{s}\) = \(200\,\mathrm{kHz}\) for high frequency noise and over a \(10\,\mathrm{s}\) interval using a sampling rate of \(F_{s}\) = \(2\,\mathrm{kHz}\) for low frequency noise.
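As a concrete illustration of the last step, the sketch below, assuming SciPy, converts digitized quadrature fluctuations into a power spectral density with Welch's method; the synthetic white-noise traces merely stand in for measured \(\delta\)I(t) and \(\delta\)Q(t), and converting these spectra into \(S_{f}(\omega)\) additionally requires the resonance-circle calibration of reference [13].

```python
import numpy as np
from scipy.signal import welch

F_s = 200_000                          # sample rate [Hz] for high frequency noise
n = int(0.5 * F_s)                     # 0.5 s record
rng = np.random.default_rng(0)
dI = 1e-4 * rng.standard_normal(n)     # stand-in for measured delta-I(t)
dQ = 1e-4 * rng.standard_normal(n)     # stand-in for measured delta-Q(t)

# Welch averaging over segments reduces the variance of the PSD estimate
f, S_I = welch(dI, fs=F_s, nperseg=4096)
f, S_Q = welch(dQ, fs=F_s, nperseg=4096)
```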
## V Test results
Fig. 5 shows the measured resonator distribution from the second version of the resonator design. The first challenge in this project was matching the resonators to their line widths. Results from the first version of the design deviated significantly from the simulation results and we could not identify the resonators with sufficient confidence. However, these results provided enough feedback to adjust the resonator capacitances and obtain the correct distribution in the second version. Parameters for the detector array are calculated from the measurement data and listed in Table I: Width is the inductor line width for each pixel, Tin stands for the interdigital capacitor finger length, Vol is the inductor volume, \(\tau_{r}\) is the resonator ring-down time, \(Q_{x}\) (\(Q_{y}\)) is the resonator quality factor for the ‘X-pol’ (‘Y-pol’) detector, \(Q_{i,x}\) (\(Q_{i,y}\)) is the intrinsic quality factor for the ‘X-pol’ (‘Y-pol’) detector, \(Q_{c,x}\) (\(Q_{c,y}\)) is the coupling or external quality factor, \(f_{o}\) is the resonant frequency of the resonator, and \(\frac{\partial f}{\partial N_{qp}}\) is the frequency shift with respect to quasiparticle number.
The measured quasiparticle number noise \(S_{N}\) for the two polarizations, 'X-pol' and 'Y-pol', is shown in Fig. 6. At low temperature, two-level system (TLS) noise causes fluctuations of the resonator frequency, which appears as a long-range slope in the PSD. At low frequency there is 1/f noise from the measurement electronics and temperature fluctuations. All the PSD curves show the roll-off (the knee) at \(10\,\mathrm{kHz}\), which is the quasiparticle generation-recombination frequency. Above the roll-off the dominant noise source is the HEMT amplifier, and the noise level should be the same for all the resonators. However, our noise measurement only goes up to \(100\,\mathrm{kHz}\), a limit set by our digitizer speed. The noise floors will eventually line up once the sampling frequency is high enough. The relative plateau level of the PSD on the vertical axis indicates the relative GR noise amplitude within each detector group, and both the X-pol and Y-pol groups show a clear dependence of GR noise amplitude on detector volume. As expected from equation (2), a smaller detector volume gives a lower noise amplitude, and this matches our measured PSD distribution. Sweep data for the detector with \(1.4\,\mathrm{\SIUnitSymbolMicro m}\) line width in the 'Y-pol' group had a significant glitch that caused
Fig. 4: A diagram of the homodyne readout system used for the noise measurement. There is 60 dB of total attenuation in the input line to reduce thermal noise coming from the higher temperature stages. Superconducting RF coaxial cable is used on the output line up to the HEMT amplifier stage to minimize resistive losses. The numbers in red above the DUT mark the temperature of each stage.
Fig. 5: Amplitude of a probe tone frequency sweep showing the frequency distribution of the reduced-volume detector array. The resonator distribution matches the designed distribution set by the respective capacitor capacitances. The missing 1.8 X resonator is likely due to a fabrication defect. The 1.8 Y resonator is not used in the data analysis to keep the symmetry of the result between ‘X-pol’ and ‘Y-pol’. The identity of each resonator is indicated next to its resonance dip, with the number indicating the inductor line width and the letter the polarization. Numbers in the legend stand for the VNA output power during the measurement.
an error in the calculated PSD, and we have removed it from the final result. Additionally, we do not see a clear roll-off for all the 'Y-pol' detectors. One possible reason is that the 'Y-pol' detector inductors are two pieces connected with a longer and wider arc; otherwise a bridge or crossover structure would be required for the 'Y-pol' inductor to pass over the 'X-pol' inductor. This extra arc might have influenced the quasiparticle distribution and hence the PSD. At \(278\,\mathrm{mK}\) the 'X-pol' \(S_{N}\) curves have uneven spacing across detector volumes, which is absent from the 'Y-pol' detectors at the same temperature.
We applied a complete fit of the PSD curves to equation (4), which includes the 1/f noise and the TLS noise, to retrieve the quasiparticle noise amplitude and lifetime \(\tau_{qp}\).
\[S_{xx}(f)=\left(\frac{A+Bf^{-n}}{1+(2\pi f\tau_{qp})^{2}}+C\right) \tag{4}\]
The term \(A\) stands for the quasiparticle noise amplitude, while \(Bf^{-n}\) accounts for the sharp slope from the 1/f noise and the long slope from the TLS noise. The fit is plotted as a dashed line on top of the PSD curves in Fig. 6. The amplitude of \(S_{N}\) is positively proportional to detector volume for both groups. There is a sign of saturation of the noise amplitude at very low detector volume. Measurements at additional low detector volumes are required to confirm this.
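A minimal sketch of this fit, assuming SciPy and with synthetic data standing in for a measured PSD, is given below; the parameter values are illustrative, chosen so the roll-off sits near the observed \(10\,\mathrm{kHz}\) knee.

```python
import numpy as np
from scipy.optimize import curve_fit

def psd_model(f, A, B, n, tau_qp, C):
    """Equation (4): GR plateau A, 1/f and TLS slopes B*f^-n, amplifier floor C."""
    return (A + B * f ** (-n)) / (1 + (2 * np.pi * f * tau_qp) ** 2) + C

f = np.logspace(0, 5, 200)                          # 1 Hz to 100 kHz
rng = np.random.default_rng(1)
S_true = psd_model(f, 1e3, 5e3, 0.7, 1.6e-5, 5.0)   # knee near 10 kHz
S_meas = S_true * (1 + 0.05 * rng.standard_normal(f.size))

popt, _ = curve_fit(psd_model, f, S_meas, p0=[1e3, 1e3, 0.5, 1e-5, 1.0], maxfev=20000)
print(f"GR amplitude A = {popt[0]:.3g}, tau_qp = {popt[3]:.3g} s")
```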
Assuming a thermal distribution of quasiparticles and photons at low temperature, the average quasiparticle lifetime is given by [15]
\[\tau_{qp} = \frac{\tau_{o}}{\sqrt{\pi}}\left(\frac{k_{B}T_{c}}{2\Delta} \right)^{5/2}\sqrt{\frac{T_{c}}{T}}e^{\Delta/k_{B}T} \tag{5}\] \[= \frac{\tau_{o}}{n_{qp}}\frac{N_{o}(k_{B}T_{c})^{3}}{2\Delta^{2}} \tag{6}\]
where \(T_{c}\) is the critical temperature of the superconductor and \(\tau_{o}\) stands for the characteristic electron-phonon interaction time, which is material dependent. Equation (6) predicts that \(\tau_{qp}\) increases exponentially as the temperature decreases, a result of the reduced quasiparticle density at lower temperature. Fig. 7 plots the calculated quasiparticle lifetime \(\tau_{qp}\) for each detector as a function of temperature. Each detector is read out at multiple powers, which correspond to multiple curves with the same marker. At higher temperatures all the curves show the exponential dependence of \(\tau_{qp}\) on temperature and agree with the theoretical prediction. At low temperatures \(\tau_{qp}\) shows signs of saturation, which suggests a residual quasiparticle density \(n_{qp}\) in the detectors from nonequilibrium quasiparticle excitation [16, 17]. This holds for all the detectors at all the readout powers applied in the measurement and agrees with the results in reference [14]. We also observed that for a given temperature \(\tau_{qp}\) increases with larger detector volume. This is most likely related to the readout power difference across
Fig. 6: Quasiparticle number noise power spectral density \(S_{N}\) at a temperature of \(15\,\mathrm{mK}\) for the ‘X-pol’ detectors (**a**) and the ‘Y-pol’ detectors (**b**), and at \(278\,\mathrm{mK}\) for the ‘X-pol’ detectors (**c**) and the ‘Y-pol’ detectors (**d**). The long slope trend at \(15\,\mathrm{mK}\) indicates the TLS noise, which diminishes at temperatures above \(278\,\mathrm{mK}\). The sharp slope in the low frequency range at T = \(278\,\mathrm{mK}\) indicates the 1/f noise, which is a combination of the low frequency noise in our measurement electronics and the temperature fluctuation as the cooling power of our dilution fridge fought against the heating power applied at the mixing chamber stage to keep the stage at \(260\,\mathrm{mK}\). **(e)**, Amplitude of the fitted \(S_{N}\) as a function of detector volume for both ‘X-pol’ and ‘Y-pol’ detectors at temperatures of \(15\,\mathrm{mK}\) and \(278\,\mathrm{mK}\).
the detectors. Detectors tend to have higher responsivity at higher readout power, and we used readout powers below and close to the resonator bifurcation point. Detectors with a very narrow inductor line width have a much lower bifurcation power, so the readout power spans a broad range across the detectors.
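For reference, the theoretical curve of equation (5) can be evaluated directly; the sketch below assumes NumPy, with \(\tau_{o}\) and \(T_{c}\) set to illustrative values for thin-film Al rather than fitted values from this work.

```python
import numpy as np

k_B = 8.617e-5                 # Boltzmann constant [eV/K]
tau_0 = 4.4e-7                 # characteristic electron-phonon time [s], illustrative
T_c = 1.2                      # critical temperature [K], illustrative
Delta = 1.764 * k_B * T_c      # BCS gap energy

def tau_qp(T):
    """Thermal quasiparticle lifetime, equation (5)."""
    return (tau_0 / np.sqrt(np.pi)) * (k_B * T_c / (2 * Delta)) ** 2.5 \
        * np.sqrt(T_c / T) * np.exp(Delta / (k_B * T))

print(tau_qp(np.array([0.15, 0.20, 0.25])))   # lifetime grows as T drops
```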
## VI Conclusion
Our test results have validated the theory that quasiparticle GR noise strongly depends on detector volume and that GR noise can be reduced by using a smaller detector volume. For very low detector volumes the contribution from other noise sources becomes significant, and additional measurement methods will be needed to distinguish the individual noise sources. We have demonstrated that higher readout power can induce an excess rise in quasiparticle density [18], so a low-noise first-stage amplifier is necessary to reduce the required readout power. We have also demonstrated saturation of the quasiparticle lifetime due to the residual quasiparticle density within each detector.
|
2310.09495 | Learning In-between Imagery Dynamics via Physical Latent Spaces | Jihun Han, Yoonsang Lee, Anne Gelb | 2023-10-14T05:14:51Z | http://arxiv.org/abs/2310.09495v1 |

# Learning In-between Imagery Dynamics via Physical Latent Spaces
###### Abstract
We present a framework designed to learn the underlying dynamics between two images observed at consecutive time steps. The complex nature of image data and the lack of temporal information pose significant challenges in capturing the unique evolving patterns. Our proposed method focuses on estimating the intermediary stages of image evolution, allowing for interpretability through latent dynamics while preserving spatial correlations with the image. By incorporating a latent variable that follows a physical model expressed in partial differential equations (PDEs), our approach ensures the interpretability of the learned model and provides insight into corresponding image dynamics. We demonstrate the robustness and effectiveness of our learning framework through a series of numerical tests using geoscientific imagery data.
**MSC codes.** 37M05, 62F99, 68T45
## 1 Introduction
Understanding image dynamics from a set of complex measurement data is important in many applications, from the diagnosis or monitoring of a disease by analyzing a series of medical (e.g., MRI or ultrasound) images [28], to the interpretation of a sequence of satellite images used to study climate change, natural disasters, or environmental conditions [2]. Here an "image" refers to a high-dimensional data frame that contains complex and condensed information within each pixel, where these pixels are also spatially correlated. To understand the underlying dynamics between sequential images, therefore, it is essential to simultaneously decipher the intertwined relationship among their spatial and temporal features.
A common approach for understanding such spatio-temporal dynamics involves the employment of physical models such as differential equations (DEs). By using the observed data to estimate the parameters in these corresponding DEs, it is possible to gain physical insights regarding their evolution [12, 20]. However, directly applying such techniques to image dynamics is of limited use due to the intricate description that would be required by a suitable prior model, the highly nonlinear relationship among pixels, and the computational complexities arising from the high dimensionality of the images.
Deep learning methods explore the data representation that distills meaningful information, often referred to as features or latent variables, in order to efficiently and effectively address supervised tasks. Data embedding into the latent spaces (i.e., the spaces of latent variables) is acquired through the training of neural networks, the design of which is tailored to the specific data types or objectives. Methods employing convolutional neural networks (CNNs) have been shown to be capable of extracting informative spatial features from images, while recurrent neural networks (RNNs) such as LSTM (Long Short-Term Memory) [14] or GRU [4] are widely used for identifying temporal patterns. One well-known method that combines these ideas is the Convolutional LSTM [23]. New techniques have improved upon this model by prescribing different combinations [26]. Some alternative approaches have also been introduced to address the uncertainty of future frames by including stochastic models [3, 7] or generative adversarial network (GAN) models [17, 21].
While these aforementioned methods incorporate standard feature extraction akin to that used in standard RNNs or GANs, it is possible to explore more organized feature or latent spaces that take into account the fundamental elements of image dynamics. This approach is commonly referred to as the disentanglement of visual representation. It includes the factorization of each image into a stationary and temporally varying component [8], the elements for extrapolation that seamlessly propagate within frames and elements for generation that account for occlusion effects [10], and the separation of motion and contents, each independently capturing the essential spatial layout of an image and salient objects, along with their corresponding temporal dynamics [25]. A linear disentanglement with physical dynamics and residual components approach in which a learnable differential equation governs the physical dynamics was developed in [11].
Although significant progress has been made in learning image dynamics when abundant temporal information is available for training, it is much more difficult to learn these same dynamics when the observable data are more limited. In practice, however, it can be hard to acquire a smooth time-lapse image sequence, particularly when dealing with satellite imagery of polar regions. These images often possess high spatial resolution but are temporally sparse due to the disparity in the time scale between sea ice deformation and the satellite's measurement cycle. Consequently, a compelling problem to address is the estimation of the intermediate dynamics between consecutive images, commonly known as temporal up-sampling, which serves to motivate the current investigation. Some other applications include video interpolation for frame rate conversion, action recognition and motion tracking for surveillance and gesture recognition, image morphing in visual effects, and various face manipulation applications [29].
In this study we present a novel approach to estimate the intermediate stage of scientific imagery. Our proposed method focuses on spatially-gridded measurement data, which heavily relies on various environmental state variables, including temperature, wind speed, and depth of snow, among others. We note that our framework is not limited to specific types of images and is generalizable to other applications. In particular we propose a new machine learning framework to estimate the intermediate evolution stages of consecutive images. Motivated by the complex nature of images as measurements of environmental states, our goal is to uncover their hidden physical evolutions by transforming the observable images into relevant latent space variables. Despite limited temporal information from the data, we are able to utilize partial differential equation (PDE) models to learn latent dynamics that match both the initial and terminal states. Specifically, rather than directly attempting to learn image dynamics as in [6], we anatomize the data to obtain simpler, more tractable, and explainable representations of the image dynamics. This approach is analogous to the method of characteristics for solving PDEs, which reduces them to simple ODE problems
[9]. It is also closely related to Koopman operator theory, which employs linear approximations of strongly nonlinear dynamic systems [16]. The effectiveness of incorporating PDE models in machine learning has been validated in sequence-to-sequence problems [10, 6]. In [6], the dynamics of SST (sea surface temperature) follow the advection-diffusion equation, and the advection vector fields for forward prediction are learned from historical sequences. In [10], a recurrent model is designed to disentangle physical dynamics as driven by a learnable PDE model, thus effectively learning temporal dynamics for extrapolation.
Our new approach, which we will refer to as _latent space dynamics_, designs the latent space to preserve spatial correlations between images and latent variables. The resulting latent dynamics are driven by the appropriately chosen PDEs and can be utilized to understand the original image dynamics. Our method efficiently scans spatial information from the images through continuously-sliding patches and feeds this information into the neural network components of our algorithms, effectively extracting spatial features and temporal information to drive the PDE models. Our model takes into account not only local observations but also global features through the common neural network architecture. Moreover, it takes advantage of adapting PDE models, which offer flexibility in incorporating prior knowledge of dynamics through effective regularization terms.
The remainder of the paper is organized as follows. Section 2 provides some preliminary information and motivation regarding the problem setup for in-between image dynamics. Section 3 introduces the learning framework utilizing physical latent spaces, providing a detailed description of the training strategy. To validate the effectiveness of the proposed method, Section 4 presents numerical experiments conducted with geoscientific imagery data. Finally, in Section 5 we conclude with discussion about the limitations of the current study and outline potential directions for future research.
## 2 Learning in-between image dynamics
Because it can provide insights into temporal dynamics and motion patterns of visual content, learning image dynamics from sequential imaging data is becoming increasingly important in a variety of scientific disciplines. For example, analyzing diverse and representative time-sequential data such as satellite measurements or synthetic aperture radar (SAR) imagery can reveal the complex dynamics of natural phenomena like weather patterns, ocean currents, or land use changes, and further allow us to make robust and reliable predictions that are adaptable to different scenarios.
Collecting or measuring image sequences with sufficiently dense time steps is not always feasible, however, particularly in fields like geoscience where data acquisition is often constrained by satellite orbiting schedules or data transmission limitations. These limitations hinder our ability to understand the full temporal dynamics of the visual content. Furthermore, even with more frequently observed imaging sequences, it can still be difficult to interpret the captured movement within the data, making it all the more problematic to understand the underlying implications of the visual transformations. To address these issues, this investigation develops a technique designed to capture the temporal changes occurring between two images at different times while also prioritizing interpretability. To achieve this goal, we harness the power of deep learning techniques enriched with physical models. This synergistic approach allows us to attain a deeper understanding of image dynamics and extract valuable insights.
### Problem setup
The problem of interest is defined as one in which we seek to accurately predict the intermediary evolutionary stages between two given images. More specifically, let \(\mathbf{X}_{t_{0}},\mathbf{X}_{t_{1}}\in\mathbb{R}^{H\times W\times C}\) be two images measured at successive time \(t_{0}\) and \(t_{1}\), where \(H,W\) and \(C\) respectively denote the height, width, and the number of channels. For example, \(\mathbf{X}_{t}\) represents a SAR image at a specific geographic location obtained from a moving satellite within a \(14\sim 16\) day period, with the channel representing the number of measurement polarizations such as \(HH\) or \(HV\). Our objective is to learn the dynamical map of the image \(\Phi_{t}:\mathbb{R}^{H\times W\times C}\rightarrow\mathbb{R}^{H\times W\times C}\), \(t\in[0,t_{1}-t_{0}]\) that satisfies the initial and terminal conditions
\[\Phi_{0}(\mathbf{X}_{t_{0}})=\mathbf{X}_{t_{0}},\quad\Phi_{t_{1}-t_{0}}(\mathbf{X}_{t_{0}}) =\mathbf{X}_{t_{1}}. \tag{1}\]
In essence, the map \(\Phi_{t}\), \(t\in(0,t_{1}-t_{0})\) captures the in-between dynamics of \(\mathbf{X}_{t}\) during an unobserved time period. Consequently, \(\{\Phi_{t}(\mathbf{X}_{t_{0}}):t\in(0,t_{1}-t_{0})\}\) represents the evolution of image \(\mathbf{X}_{t}\) over that time period. We note that this problem differs from the standard frame prediction, where a model learns the map \(\Phi:\mathbb{R}^{N\times H\times W\times C}\rightarrow\mathbb{R}^{H\times W \times C}\), using historical sequences of length \(N\), with \(N>1\), and sufficient temporal data. In our case, we assume limited temporal information is available, but we have ample spatial information in the given data.
It is crucial to acknowledge the intrinsic limitations of the problems associated with the uniqueness of the dynamics. The complexity of image dynamics may prevent a clear and deterministic mapping of one frame to another as variations can arise due to factors such as object movement or inherent randomness in physical phenomena. Accordingly, we aim to propose a learning framework with the capability to quantitatively interpret the resulting dynamics, along with the flexibility to incorporate both prior knowledge as well constraints regarding the dynamics of interest.
### Previous efforts using optimal transport theory
One approach utilized for learning a dynamical map given a temporal sequence of image data involves the use of optimal transport (OT) theory, and is essentially focused on finding the most efficient way to transport one probability distribution to another. This efficiency is quantified by the Wasserstein distance, a measure of the transport cost in terms of mass and distance. Formally, when dealing with 1D discrete probability distributions \(\mu_{1}\) and \(\mu_{2}\) of dimension \(N\), the Wasserstein distance is defined through the optimization problem
\[W(\mu_{1},\mu_{2})=\min_{\pi\in\Pi(\mu_{1},\mu_{2})}\sum_{i,j=1}^{N}c_{ij}\pi _{ij}. \tag{2}\]
Here, \(c_{ij}\) is the ground cost from location \(i\) to \(j\) (typically the squared Euclidean distance in practice) and \(\Pi(\mu_{1},\mu_{2})\) is the set of all positive \(N\times N\) matrices with marginal distributions (i.e., column and row sums) equal to \(\mu_{1}\) and \(\mu_{2}\), respectively. The formulation in (2) is readily extended to multiple dimensions, and its direct calculation for 2D images has computational cost \(\mathcal{O}(N^{3})\), where \(N\) represents the total number of pixels in the image. Recent developments, such as entropy-regularized formulations, have yielded more efficient numerical methods for approximating (2) [5], rendering it computationally tractable. These advancements have led to widespread applications in various fields, among them fluid dynamics [1, 22] and materials science [27]. OT is particularly
well-suited for scenarios where the total mass or pixel intensity remains conserved throughout the evolution, as it accounts for transformations within probability distributions.
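To make the entropy-regularized approximation concrete, the following is a compact Sinkhorn iteration for (2), assuming NumPy; the regularization strength `eps`, the iteration count, and the toy histograms are illustrative choices, not part of the formulation above.

```python
import numpy as np

def sinkhorn(mu1, mu2, C, eps=1e-2, n_iter=500):
    """Entropy-regularized approximation of the OT problem (2)."""
    K = np.exp(-C / eps)              # Gibbs kernel of the ground cost
    u = np.ones_like(mu1)
    for _ in range(n_iter):           # alternating projections onto the marginals
        v = mu2 / (K.T @ u)
        u = mu1 / (K @ v)
    pi = u[:, None] * K * v[None, :]  # transport plan in Pi(mu1, mu2)
    return np.sum(pi * C), pi

# toy 1D example: two normalized histograms on a grid of N points
N = 50
x = np.linspace(0, 1, N)
mu1 = np.exp(-((x - 0.3) ** 2) / 0.01); mu1 /= mu1.sum()
mu2 = np.exp(-((x - 0.7) ** 2) / 0.01); mu2 /= mu2.sum()
C = (x[:, None] - x[None, :]) ** 2    # squared Euclidean ground cost
cost, plan = sinkhorn(mu1, mu2, C)
```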
Although OT provides a guiding principle to help characterize smooth evolutionary dynamics in a variety of applications, it also presents certain limitations in specific scenarios. Notably, the standard OT framework formulated in Euclidean space is not well suited to capture rotational movement. Moreover, when the total mass undergoes changes due to the generation or disappearance of mass within the image, or when the acquired data are noisy, the normalization pre-processing step required for the method may introduce artifacts or inaccuracies into the resulting dynamics. Finally, we point out that OT primarily addresses the redistribution of mass or pixel density within the image framework, and this may not adequately describe scenarios involving objects moving in or out of the domain through the image boundaries.
In contrast to the applications using OT, which are constrained by presuming fundamental principles regarding the governing image dynamics, our method adopts a more flexible approach. Specifically, we impose a marginal assumption regarding the evolution of the _image features_, as opposed to the images themselves. Moreover, rather than pre-define these features, we enable our framework to autonomously extract and explore various features and their corresponding dynamics, utilizing the image statistics as part of our machine learning approach. This results in diverse dynamic patterns in image representation, and is closely aligned with real-world scenarios. Such flexibility enables us to overcome the limitations that are potentially imposed by a pre-defined governing principle.
## 3 Learning through physical latent spaces
Unlike continuous physical models that are regularly used to describe evolutionary dynamics, images capture a snapshot of multiple physical quantities simultaneously and represent them with pixel values. In other words, an image can be interpreted as a collection of spatially-gridded measurement data that incorporates several environmental state variables, such as temperature and wind speed, and in the case of SAR images on arctic regions, the depth of snow. Deciphering how each of the underlying physical quantities evolve and interact solely based on pixel value image data is non-trivial, as it requires a deep understanding of how variables within the image are related and represented. Inspired by this intuitive understanding between sequential image data and the underpinning physical quantities that are present in each image, we propose a learning framework aimed to uncover the evolution of images by considering the dynamics of the underlying variables, i.e., the relevant latent space variables, that follow established physical models. In so doing, we can better interpret how images change over time.
### Model flow framework
We design the model for the objective map \(\Phi_{t}\) with three components as
\[\Phi_{t}\;:\;\mathcal{I}\;\rightarrow\;(\mathcal{P}\times \mathcal{Q})\;\rightarrow\;\mathcal{P}\;\rightarrow\;\mathcal{I},\;\;\text{ as}\;\;\Phi_{t}=\psi\circ P_{t}\circ(\phi,\eta),\;t\in[0,t_{1}-t_{0}], \tag{3a}\] \[\mathcal{I}:\text{the image space for}\;\mathbf{X}_{t},t\in[t_{0},t_{1}],\] (3b) \[\mathcal{P}:\text{the latent space for}\;\mathbf{Z}_{t},t\in[t_{0},t_{1}],\] (3c) \[\mathcal{Q}:\text{the space for physical model parameters}\;\mathbf{W}_{t},t\in[0,t_{1}-t_{0}]. \tag{3d}\]
Here \(\phi\) and \(\psi\) are the identification maps, also respectively referred to as the encoding and decoding maps between the image and latent space. The map \(P_{t}\) represents the evolution of \(\mathbf{Z}_{t}\) in the latent space and is driven by a predefined physical model as \(\mathbf{Z}_{t_{0}+t}=P_{t}(\mathbf{Z}_{t_{0}})\) or \(\mathbf{Z}_{t_{0}+t}=P_{t}(\mathbf{Z}_{t_{0}},\{\mathbf{W}_{s}\}_{0\leq s\leq t_{1}-t_{0}})\) along with the physical parameters \(\mathbf{W}_{t}\) from the map \(\eta\). The map \(\phi\) plays a role in extracting the features \(\mathbf{Z}_{t_{0}}\) from the initial image \(\mathbf{X}_{t_{0}}\), which evolve to the latent state \(\mathbf{Z}_{t_{1}}\) corresponding to the destination image \(\mathbf{X}_{t_{1}}\). The intermediary stages of image \(\mathbf{X}_{t}\), \(t\in(0,t_{1}-t_{0})\) are generated by pulling back from the corresponding latent variable \(\mathbf{Z}_{t}\) through the decoding map \(\psi\) as \(\mathbf{X}_{t}=\psi(\mathbf{Z}_{t})\).
The main objective of the model is to search for an appropriate transformation from the image space to a feature space, such that the original problem in the image space can be effectively addressed within the framework of the predefined physical model. To achieve this, we learn the transformation by employing neural networks \(\phi=\phi(\cdot;\mathbf{\theta}_{1})\) and \(\psi=\psi(\cdot;\mathbf{\theta}_{2})\) along with \(\eta=\eta(\cdot;\mathbf{\theta}_{3})\) under the supervision with given image data \(\mathbf{X}_{t_{0}}\) and \(\mathbf{X}_{t_{1}}\) as
\[\Phi_{t_{1}-t_{0}}(\mathbf{X}_{t_{0}};\{\mathbf{\theta}_{1},\mathbf{\theta}_{2},\mathbf{ \theta}_{3}\})=\psi(P_{t_{1}-t_{0}}(\phi(\mathbf{X}_{t_{0}};\mathbf{\theta}_{1}),\eta( \mathbf{X}_{t_{0}};\mathbf{\theta}_{3}));\mathbf{\theta}_{2})=\mathbf{X}_{t_{1}}. \tag{4}\]
To interpret the resulting dynamics of the image through latent correspondence, we make sure to design the transformation \(\phi\) so that, at least to some extent, the spatial correlation between the image and latent variables is preserved. This can be achieved by designing the latent space \(\mathcal{P}\) with the same or comparable spatial dimensionality as in the original image. Further, the neural network is constructed using only convolution operators, that is, without dense layers, as in fully convolutional neural networks. Details regarding the network architectures and training are discussed in Section 3.3.
It is important to point out that our approach differs from one that incorporates PDEs in sequence-to-sequence models. Specifically, we avoid having to model constraints that arise when attempting to directly model a PDE in the image space [6], which can become particularly arduous when considering intricate measurements for various physical attributes. In particular, autonomously learning the form of PDEs on abstract lower-dimensional latent spaces [11] may introduce further limitations as the learned PDE may be too complex to interpret. Also, the relationship between the latent and image dynamics are unclear.
### Physical model in latent spaces
We utilize partial differential equations (PDEs) to model latent dynamics. In particular, we assume that the dynamics in the latent space can effectively explain changes in image space, and that we have the flexibility to select an appropriate PDE model that aligns with our specific interest in image dynamics. For example, diffusion equations accurately represent temperature changes in a system, while advection equations capture wind patterns. Occasionally, a combination of advection and diffusion equations may be necessary to encompass the intricacies of the phenomenon.
In this work we focus on the advection equation as the primary physical model and aim to better understand the dynamics associated with geoscientific images. The advection vector fields provide detailed descriptions of the flow or motion of physical variables. Consequently these fields offer valuable information about sea ice movement as well as its contributions to possible crack formation. By additionally incorporating information regarding the advection of buoys placed within the region of the SAR imagery, we can also increase our understanding of the ocean currents.
For ease of presentation, we consider the latent space \(\mathcal{P}\) with the same spatial dimensionality as the image space \(\mathcal{I}\), i.e. \(\mathbf{Z}_{t}\in\mathcal{P}\subset\mathbb{R}^{H\times W\times 1}\) in (3), where \(H\) and \(W\) are the height and width
dimensions (first described in Section 2.1). The temporal discretization of \([t_{0},t_{1}]\) is given by
\[T_{0:N}=\{t_{0}+j\Delta t\}_{j=0}^{N}\]
where \(\Delta t=\frac{t_{1}-t_{0}}{N}\), while the map \(\eta\), as defined in (3a), provides the \(N\) vector fields \(\mathbf{W}_{t}\in\mathcal{Q}\subset\mathbb{R}^{H\times W\times 2}\) to drive the advection dynamics of \(\mathbf{Z}_{t}\) over the time domain. For numerical computation we consider the advection equation on a rectangular domain \(\Omega\subset\mathbb{R}^{2}\) given by
\[z:\Omega\times[t_{0},t_{1}]\to\mathbb{R},\ \ \ \ \frac{\partial z}{ \partial t}+\mathbf{w}(\mathbf{x},t)\cdot\nabla z=0, \tag{5}\]
where \(\mathbf{w}:\Omega\times[t_{0},t_{1}]\to\mathbb{R}^{2}\) are the advection vector fields. Since \(z\) is conserved along characteristics, its evolution over the small time period \(\Delta t\) is approximated by
\[z(\mathbf{x},t+\Delta t)=z(\mathbf{x}-\mathbf{w}(\mathbf{x},t)\Delta t,t). \tag{6}\]
The evolution of \(\mathbf{Z}_{t}\) driven by \(\mathbf{W}_{t}\) is considered to be the spatially-discretized dynamics of the continuous quantity \(z\) with respect to \(\mathbf{w}\). To evaluate the evolution, we use interpolation on a discretized domain, which can be described as follows (a brief code sketch is given after the list):
* Define \(\Omega_{D}\) to be the uniform discretization of \(\Omega\) with the same dimension \(H\times W\) as the latent variable \(\mathbf{Z}_{t}\) and \(\Omega_{D}(\mathbf{w})=\{\mathbf{x}-\mathbf{w}\Delta t:\mathbf{x}\in\Omega_{D}\}\) to be the shifted grid points by the vector field \(\mathbf{w}\).
* Compute \(\mathbf{Z}_{t+\Delta t}\) by interpolating \(\mathbf{Z}_{t}\) on \(\Omega_{D}(\mathbf{w}(x,t))=\Omega(\mathbf{W}_{t})\) at the uniform grid \(\Omega_{D}\), which we simply write as \[\mathbf{Z}_{t+\Delta t}=P_{\Delta t}(\mathbf{Z}_{t};\mathbf{W}_{t})=\texttt{Interpolation} \left(\Omega_{D};(\mathbf{Z}_{t},\Omega_{D}(\mathbf{W}_{t}))\right).\]
* The overall advection of \(\mathbf{Z}_{t}\) over the time period \([t_{0},t_{1}]\) then consists of multiple stepping given by \[\mathbf{Z}_{t_{1}}=P_{t_{1}-t_{0}}(\mathbf{Z}_{t_{0}};\{\mathbf{W}_{s}:s\in T_{0:N-1}\}) =\left(\mathop{\bigcirc}\limits_{j=0}^{N-1}P_{\Delta t}(\cdot;\mathbf{W}_{t+j \Delta t})\right)(\mathbf{Z}_{t_{0}}).\]
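The sketch below, assuming NumPy/SciPy, implements one back-tracing step \(P_{\Delta t}\) and its composition over \(N\) steps; the helper names are ours, and the conversion between domain units on \([0,1]^2\) and pixel units assumes the uniform grid \(\Omega_{D}\) described above.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def advect_step(Z, W, dt):
    """One step Z_{t+dt}(x) = Z_t(x - w(x,t)*dt) via bilinear interpolation.

    Z : (H, W) latent field on a uniform grid over [0, 1]^2
    W : (H, W, 2) advection vectors (row, col components) in domain units
    """
    H, Wd = Z.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(Wd), indexing="ij")
    src_r = rows - W[..., 0] * dt * (H - 1)    # back-traced points, pixel units
    src_c = cols - W[..., 1] * dt * (Wd - 1)
    # order=1 is bilinear; mode="nearest" extrapolates constant values at the boundary
    return map_coordinates(Z, [src_r, src_c], order=1, mode="nearest")

def advect(Z0, W_seq, dt):
    """Compose N steps: P_{t1-t0} = P_dt o ... o P_dt."""
    Z = Z0
    for W in W_seq:
        Z = advect_step(Z, W, dt)
    return Z
```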
Table 1 and Figure 1 summarize the evolutions of the original model \(\Phi_{t}=\psi\circ P_{t}\circ(\phi,\eta)\) along with the corresponding latent space advection model.
### Training the model
#### 3.3.1 Learning patch-to-patch dynamics
Simultaneously learning the diverse features and evolution of an image can be computationally challenging and inefficient, especially for large images. To address this issue, we allow the neural
networks to locally learn the image through smaller windows (patches), which are then combined to obtain the overall image dynamics. Specifically, we consider patches of size \(H_{p}\times W_{p}\), \(H_{p}<H\), \(W_{p}<W\), and continuously scan the entire images \(\mathbf{X}_{t_{0}}\) and \(\mathbf{X}_{t_{1}}\) to generate a training dataset consisting of initial and final patches \(\left\{\left(\mathbf{X}_{t_{0}}^{(i)},\mathbf{X}_{t_{1}}^{(i)}\right):\mathbf{X}_{t_{0}}^{(i)}\in\mathbb{R}^{H_{p}\times W_{p}\times C},\mathbf{X}_{t_{1}}^{(i)}\in\mathbb{R}^{H_{p}\times W_{p}\times C}\right\}\), where \(i=1,\ldots,N\) denotes the patch index. The objective of our model is to learn the dynamics between patches across all training data, and then combine these patch-to-patch dynamics to capture the overall dynamics of the entire image. It is important to note that while each patch focuses on a local region, the neural network implicitly considers global features of the entire image due to the continuous scanning of the patches, thus ensuring that information from various regions in the image is incorporated into the learning process.
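A small helper sketching this overlapping-patch scan is given below; it assumes NumPy, and the function name and stride arguments are ours, mirroring the \(S_{H}\times S_{W}\) notation used later in Section 4.

```python
import numpy as np

def scan_patches(X0, X1, Hp, Wp, sH, sW):
    """Collect paired patches from two H x W x C images with strides sH, sW."""
    H, W = X0.shape[:2]
    pairs = []
    for r in range(0, H - Hp + 1, sH):
        for c in range(0, W - Wp + 1, sW):
            pairs.append((X0[r:r + Hp, c:c + Wp], X1[r:r + Hp, c:c + Wp]))
    return pairs
```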
#### 3.3.2 Neural networks
We learn the physical latent space (the encoding \(\phi\) and the decoding \(\psi\)) and the advection field (the map \(\eta\)) through neural networks, with the aim of designing the architecture for the encoding map \(\phi(\cdot;\mathbf{\theta}_{1})\) so as to preserve the spatial correlation between the image patch and the corresponding latent variable. To achieve this goal, we stack convolution layers as illustrated in Figure 2(a). Each pixel in the latent variable is determined solely by the local receptive field, i.e., the neighborhood around the corresponding location in the image, which is defined by the kernel size of the convolution.1 Note that this approach contrasts with the inclusion of pooling layers or dense layers, which incorporate information from the entire image and result in non-transitive local correlations between the image
Figure 1: Schematic diagram of the model to learn the dynamics of in-between images \(\mathbf{X}_{t_{0}}\) and \(\mathbf{X}_{t_{1}}\) (red borders). The model learns the latent space (through encoding \(\phi\) and decoding \(\psi\)) where the latent variable \(\mathbf{Z}_{t}\) (yellow borders) physically evolves during the time period \([t_{0},t_{1}]\) and reaches to the state \(\mathbf{Z}_{t_{1}}\) corresponding to the image \(\mathbf{X}_{t_{1}}\). The intermediary images (blue borders) are recovered by decoding latent correspondence.
and the variable, albeit with more condensed information from the overall image. For the decoding map \(\psi(\cdot;\mathbf{\theta}_{2})\) and advection field extraction \(\eta(\cdot;\mathbf{\theta}_{3})\), we employ the U-net [19] architecture shown in Figure 2(b). This U-net structure comprises a downsampling stage (also known as a feature extractor or encoder) and an upsampling stage (also known as a decoder), which are interconnected by skip connections that transfer information from the downsampled features to the upsampled correspondence. The original U-net, initially designed for biomedical image segmentation, employs deconvolutional upsampling followed by concatenation with the corresponding cropped features from the downsampling. In our experiments, however, we observed the presence of checkerboard artifacts in the decoded images, as also documented in [18], as well as a contaminated advection field. To mitigate these artifacts during the upsampling process, we replace the deconvolution operators with resize-convolution [18]. Additionally, to prevent the loss of information from the downsampled features, we replace the cropping operation with resizing, which allows us to concatenate the upsampled features while preserving the available information.
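A minimal sketch of one such upsampling block, assuming PyTorch, is shown below; the channel counts and the module name are illustrative, and the resize-concatenation of skip features follows the modification described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResizeConvUp(nn.Module):
    """Bilinear resize followed by convolution, replacing deconvolution to
    avoid checkerboard artifacts; skip features are resized (not cropped)
    before concatenation."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x, skip):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        skip = F.interpolate(skip, size=x.shape[-2:], mode="bilinear", align_corners=False)
        x = torch.cat([x, skip], dim=1)   # in_ch counts the concatenated channels
        return F.leaky_relu(self.conv(x), 0.2)
```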
#### 3.3.3 Loss functions
We design the loss functions for the model to learn the evolution from \(\mathbf{X}_{t_{0}}\) to \(\mathbf{X}_{t_{1}}\) while also providing the intermediary stages. To this end we first construct the loss, \(\mathcal{L}_{\text{dynamics}}\), to measure the discrepancy of the model output at final time \(t_{1}\) from the given destination image \(\mathbf{X}_{t_{1}}\) as
\[\mathcal{L}_{\text{dynamics}}(\mathbf{\theta}_{1},\mathbf{\theta}_{2},\mathbf{\theta}_{3} )=\sum_{i}\left|\psi\left(P_{t_{1}-t_{0}}\left(\phi\left(\mathbf{X}_{t_{0}}^{(i)}; \mathbf{\theta}_{1}\right);\eta\left(\mathbf{X}_{t_{0}}^{(i)};\mathbf{\theta}_{3}\right) \right);\mathbf{\theta}_{2}\right)-\mathbf{X}_{t_{1}}^{(i)}\right|^{2}, \tag{7}\]
where the latent states at the initial and terminal times correspond respectively to
\[\mathbf{Z}_{t_{0}}^{(i)}=\phi\left(\mathbf{X}_{t_{0}}^{(i)};\mathbf{\theta}_{1}\right), \qquad\mathbf{Z}_{t_{1}}^{(i)}=P_{t_{1}-t_{0}}\left(\mathbf{Z}_{t_{0}}^{(i)};\eta \left(\mathbf{X}_{t_{0}}^{(i)};\mathbf{\theta}_{3}\right)\right).\]
We note that (7) is not sufficient to drive the latent dynamics of \(\mathbf{Z}_{t}\) through a non-trivial vector field as it focuses on the reconstruction of the destination image \(\mathbf{X}_{t_{1}}\) from the latent variable \(\mathbf{Z}_{t_{1}}\).
Figure 2: Architectures of the neural network ingredients in the proposed model. (a) the encoding map \(\phi(\cdot;\mathbf{\theta}_{1})\): stacks of convolutional layers only; (b) the decoding map \(\psi(\cdot;\mathbf{\theta}_{2})\) and the advection field extraction map \(\eta(\cdot;\mathbf{\theta}_{3})\): U-net structures, in particular with resize-convolutions and skip-connections by resizing-concatenation.
To distinguish the latent variables \(\mathbf{Z}_{t_{0}}\) and \(\mathbf{Z}_{t_{1}}\) corresponding to the different images \(\mathbf{X}_{t_{0}}\) and \(\mathbf{X}_{t_{1}}\), the maps \(\phi(\cdot;\mathbf{\theta}_{1})\) and \(\psi(\cdot;\mathbf{\theta}_{2})\) should play the role of an auto-encoder on the image space. This is possible under the reasonable assumption that the two images share the same latent space. We define the loss function for the auto-encoder as
\[\mathcal{L}_{\text{AE}}(\mathbf{\theta}_{1},\mathbf{\theta}_{2})=\sum_{i}\left|\mathbf{X}_{t_{0}}^{(i)}-\psi\left(\mathbf{Z}_{t_{0}}^{(i)};\mathbf{\theta}_{2}\right)\right|^{2}+\left|\mathbf{Z}_{t_{1}}^{(i)}-\phi\left(\mathbf{X}_{t_{1}}^{(i)};\mathbf{\theta}_{1}\right)\right|^{2}. \tag{8}\]
We can also impose conditions on the advection fields \(\mathbf{W}_{t}\) to stabilize the learning process or incorporate prior knowledge through regularization loss terms. Here we use \(\ell_{2}\)-regularization, which stabilizes the learning process and smooths the results spatially and temporally, respectively, as
\[\mathcal{L}_{\text{magnitude}}(\mathbf{\theta}_{3})=\sum_{i}\sum_{t\in T_{0:N}} \left|\mathbf{W}_{t}^{(i)}\right|^{2},\qquad\mathcal{L}_{\text{smooth}}(\mathbf{ \theta}_{3})=\sum_{i}\sum_{t\in T_{1:N}}\left|\mathbf{W}_{t}^{(i)}-\mathbf{W}_{t- \Delta t}^{(i)}\right|^{2}.\]
In sum, we train our model using a weighted combination of loss functions given by
\[\mathcal{L}(\mathbf{\theta}_{1},\mathbf{\theta}_{2},\mathbf{\theta}_{3})=\mathcal{L}_{ \text{dynamics}}+\lambda_{\text{AE}}\mathcal{L}_{\text{AE}}+\lambda_{\text{ magnitude}}\mathcal{L}_{\text{magnitude}}+\lambda_{\text{smooth}}\mathcal{L}_{\text{ smooth}}, \tag{9}\]
which is optimized by a gradient descent method
\[\mathbf{\theta}^{(n)}=\mathbf{\theta}^{(n-1)}-\alpha\nabla_{\mathbf{\theta}}\mathcal{L} \left(\mathbf{\theta}^{(n-1)}\right),\ \mathbf{\theta}=\{\mathbf{\theta}_{1},\mathbf{\theta}_{2},\mathbf{\theta}_{3}\},\ \ \alpha>0, \tag{10}\]
where \(\alpha\) is a learning rate.
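The sketch below assembles these pieces into a single training step, assuming PyTorch; `phi`, `psi`, and `eta` stand for the three networks (with `eta` assumed to return the list of \(N\) vector fields), means are used in place of the sums in (7)–(9), and the differentiable advection uses `grid_sample` as a drop-in for the interpolation of Section 3.2.

```python
import torch
import torch.nn.functional as F

def advect_latent(Z, W_seq, dt):
    """Differentiable multi-step advection of Z (B,1,H,W) by fields (B,2,H,W)."""
    B, _, H, Wd = Z.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, Wd),
                            indexing="ij")
    base = torch.stack([xs, ys], dim=-1).to(Z).unsqueeze(0)   # (1,H,W,2)
    for Wt in W_seq:
        # back-trace x - w*dt; factor 2 maps [0,1] domain units to [-1,1] grid coords
        disp = torch.stack([Wt[:, 1], Wt[:, 0]], dim=-1) * (2.0 * dt)
        Z = F.grid_sample(Z, base - disp, mode="bilinear",
                          padding_mode="border", align_corners=True)
    return Z

def train_step(X0, X1, phi, psi, eta, opt, dt, lam_ae, lam_mag, lam_sm):
    Z0 = phi(X0)
    W_seq = eta(X0)                                   # N vector fields
    Z1 = advect_latent(Z0, W_seq, dt)
    loss = F.mse_loss(psi(Z1), X1)                                    # L_dynamics
    loss = loss + lam_ae * (F.mse_loss(psi(Z0), X0)
                            + F.mse_loss(Z1, phi(X1)))                # L_AE
    loss = loss + lam_mag * sum((Wt ** 2).mean() for Wt in W_seq)     # L_magnitude
    loss = loss + lam_sm * sum(((W_seq[j] - W_seq[j - 1]) ** 2).mean()
                               for j in range(1, len(W_seq)))         # L_smooth
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```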
## 4 Numerical results
We now validate the robustness and efficacy of our new latent space dynamics learning framework that is designed to capture intricate inter-imagery dynamics from an initial and destination image using the model framework described by (3). Here we consider a suite of polar region SAR images from which we are specifically interested in investigating sea ice behavior. The environments we analyze involve macro-scaled sea ice deformations, and encompass combinations of translation, rotation, and fracture. Our primary goal is to quantify these deformations using vector fields derived from a physical model, while also offering qualitative insights regarding the underlying scenes.
For our data collection we access SAR image data from the Copernicus Open Access Hub. Each of these images includes multiple polarization values and is georeferenced in longitude-latitude coordinates. As a preprocessing step we convert the \(HH\)-polarized data into a rectangular coordinate system using projection techniques. We then rescale the data, ensuring that each pixel corresponds to a substantial geographical distance of \(0.5\sim 1\) kilometers.
In all of our experiments we fix the spatial dimensions of the patch at \(256\times 256\). The image is systematically scanned using consecutive overlapping patches with stride parameters denoted by \(S_{H}\) (height) and \(S_{W}\) (width),2 which we will specify later, to constitute our training dataset. To help facilitate input to the neural networks, we normalize each patch to a range of
\([-1,1]\). When exploring latent patch-to-patch dynamics, we define the latent domain \(\Omega\) as \([0,1]^{2}\), and align its discretization to be consistent with the patch dimensions (\(256\times 256\)). Finally, we introduce \(N_{\text{evolution}}\) stages for the dynamics, each evolving over a time step \(\Delta t=0.1\).
Our neural network architecture, as detailed in Section 3.3.2, is structured as follows:
* The encoding map (\(\phi\)) comprises six hidden convolutional layers, each using a \(3\times 3\) kernel and applying a leaky ReLU activation function with a parameter of \(0.2\).
* Decoding (\(\psi\)) and field extraction (\(\eta\)) maps adopt U-net structures with three downsampling and upsampling stages. Each stage incorporates two hidden convolutional layers, followed by a leaky ReLU activation function with a parameter of \(0.2\). We use \(3\times 3\) convolutional kernels for decoding and \(5\times 5\) for field extraction. Max-pooling is applied during each downsampling step.
We use the stochastic gradient descent (SGD) method along with the Adam optimizer to perform optimization for the neural networks [15]. The learning parameters \(\beta_{1}\) and \(\beta_{2}\) are set to \(0.9\) and \(0.999\), respectively, for efficient convergence. We adopt an exponential decay learning rate, initially defined as \(\alpha\), with a decay rate of \(\gamma\) applied every \(10000\) training iterations. The specific hyperparameters pertaining to our learning framework will be detailed for each individual test problem.
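A short sketch of this optimizer and schedule, assuming PyTorch, is given below; the modules are stand-ins for the \(\phi,\psi,\eta\) networks and the rate values are illustrative, set per experiment as in the hyperparameter tables.

```python
import torch
import torch.nn as nn

nets = nn.ModuleList([nn.Conv2d(1, 8, 3, padding=1) for _ in range(3)])  # stand-ins
alpha, gamma = 1e-3, 0.8      # initial rate and decay factor; illustrative values
opt = torch.optim.Adam(nets.parameters(), lr=alpha, betas=(0.9, 0.999))
# decay the learning rate by gamma every 10000 iterations: call sched.step()
# once per SGD iteration
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10000, gamma=gamma)
```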
Finally, to ensure a comprehensive evaluation of our framework's performance, we compare the dynamics generated by our approach with the results obtained through two alternative methods: (1) an optimal transport approach [30]; and (2) a direct application of the PDE model in the image space without passing through latent spaces (i.e., \(\mathcal{P}=\mathcal{I}\) with the identity encoding and decoding \(\phi(\mathbf{X})=\mathbf{X}\) and \(\psi(\mathbf{Z})=\mathbf{Z}\) in our method).3
Footnote 3: Animated visualizations of the estimated dynamics can also be found at [https://sites.google.com/view/jihun-han/home/research-gallery/learning-dynamical-systems](https://sites.google.com/view/jihun-han/home/research-gallery/learning-dynamical-systems).
### Transition dynamics
As our first test case, we apply our new method to learn transition dynamics (subsequently referred to as **Example 1**). Figure 3 shows image data captured at a \(10\) day interval. These images reveal the progression of sea ice conditions, and in particular we observe changes along the shoreline in the lower right quadrant adjacent to the unchanging land visible in the upper left quadrant.
We model the intermediate stages of evolution between these images by approximating change on a daily basis, that is, using \(N_{\text{evolution}}=10\). We scan the images measuring \(256\times 1024\) pixels
Figure 3: Initial (left) and terminal (right) images for Example 1 captured at a \(10\)-day interval. The colored bounding boxes indicate the regions of interest.
using patches with stride \(1\times 1\), resulting in 768 pairs of overlapping patches, which are then used to train our model with the hyperparameters provided in Table 2. The trained model is subsequently employed to infer dynamics from patch to patch at the intermediate stages for prescribed non-overlapping areas, yielding comprehensive coverage of the entire images.4
Footnote 4: Indeed, the desired dynamics information can be prescribed at pixel locations using any number of patch to patch combinations. Here for simplicity we use the non-overlapping patches to cover the domain.
The estimated dynamics are illustrated in the first (left-most) column of Figure 4 (depicted as images with blue dashed borders) and the selected regions enclosed by the colored boxes are detailed in the first row of Figure 5. The model effectively distinguishes shifting sea ice from stationary land. We observe in particular that the land boundaries maintain constant values even as the ice begins to deform. The sea ice transitions diagonally across each image, as highlighted by yellow boxes, preserving intricate details like ice cracks (see Figure 5(a)). Interestingly, the sea ice smoothly recedes from the image frame, particularly toward the left boundary, as depicted in the region enclosed by each white bounding box. Additionally, novel features emerge along the lower horizontal boundary, seamlessly integrating into the image (see Figure 5(b)). This integration is a result of extrapolated latent features from the prior time step near the boundary.
Our proposed latent space dynamics approach does not solely consider interior image features, but also accounts for external factors, thereby enabling an exploration of potential transformation scenarios leading to terminal states. As illustrated in the second (middle) column of Figure 4 and the second row of Figure 5, this characteristic distinguishes our method from those employing optimal transport (OT), for which the goal is to reallocate pixel density within images while conserving mass over time. In this example, OT is limited in simulating rigid transitions both within the image's interior and along its boundaries. In particular, the portion highlighted within the yellow boxes in each image demonstrates the smooth generation and disappearance of two deep cracks, as opposed to a rigid transition of the initial-state crack (see Figure 5(a)). Similarly, the region near the boundary corresponding to the white boxes does not exhibit a natural transition beyond the boundary, but rather smoothly interpolates between the initial and terminal states (see Figure 5(b)), which is seemingly unphysical.
| | encoder \(\phi(\cdot;\boldsymbol{\theta}_{1})\) | decoder \(\psi(\cdot;\boldsymbol{\theta}_{2})\) | field extraction \(\eta(\cdot;\boldsymbol{\theta}_{3})\) |
| --- | --- | --- | --- |
| hidden layers | 32, 64, 128, 64, 32, 16, 8 | – | – |
| input layers | – | 16 | 16 |
| downsampling | – | 16(2), 32(2), 64(2) | 16(2), 32(2), 64(2) |
| bottleneck | – | 128 | 128 |
| upsampling | – | 64(2), 32(2), 16(2) | 64(2), 32(2), 16(2) |
| output layers | – | 16 | 16 |

| \(N_{\text{evolution}}\) | \(S_{H}\times S_{W}\) | \(\lambda_{\text{AE}}\) | \(\lambda_{\text{magnitude}}\) | \(\lambda_{\text{smooth}}\) | \(\alpha\) | \(\gamma\) |
| --- | --- | --- | --- | --- | --- | --- |
| 10 | \(1\times 1\) | 1. | 0.01 | 0.001 | | 0.8 |

Table 2: Hyperparameters for training the proposed model in Example 1. (Top) the numbers of convolutional filters in the neural networks; (Bottom) number of evolution steps, stride for scanning the patch over the whole image, regularization parameters, and learning parameters.
Figure 4: The estimation for intermediate stages (blue dashed borders) of transition-dominant dynamics given initial and terminal states (red borders) for Example 1. Each row marks a different time \(t=j\Delta t\), \(j=0,\ldots,10\). (Left) our new latent space dynamics approach; (middle) OT [30]; and (right) direct application of PDE model in the image space.
We also conduct an experiment involving a direct search for advection fields on the images themselves, similar to what is done in [6]. As demonstrated in the third column of Figure 4 and the third row of Figure 5, while adapting an advection field to image dynamics captures the overall transition trend, it is not as effective in capturing fine details. Moreover, new unphysical features are generated near the boundary (see Figure 5(b)). In contrast to the proposed method, which is able to capture fine details of the advection field, this outcome could potentially be attributed
Figure 5: The dynamics of the selected regions enclosed by the (a) yellow and (b) white bounding boxes in Figure 4. Each column marks a different time \(t=j\Delta t\), \(j=0,\dots,10\). (Top) our new latent space dynamics approach; (middle) OT [30]; and (bottom) direct application of PDE model in the image space.
to the intricate nature of vector fields in the image space or the inadequacy of the PDE model.
We quantify image evolution by leveraging dynamic vector fields in the corresponding latent space, as depicted in Figure 6. Spatial correlations between latent variables and images allow us to indirectly estimate image dynamics through these vector fields. The green arrows illustrate temporal
Figure 6: The latent dynamics of transition-dominant image dynamics (Example 1) corresponding to the first column of Figure 4 along with vector fields (green) and streamlines (pink).
feature evolution, with each denoting the direction of a feature transition for the subsequent time step. Corresponding streamlines are represented by pink lines. The vector field effectively shows the nuanced dynamics of sea ice, including its movement in the diagonal direction and its interaction with the lower horizontal boundary and the left vertical boundary (see the direction of the vector fields near the boundaries, inward and outward, respectively), while static portions of sea ice and land remain unchanged. This level of detail closely aligns with the dynamics observed in the image space.
### Rotational dynamics
Our next scenario (Example 2) concerns rotational dynamics. The initial and terminal images are shown in Figure 7. Observe the fracture of sea ice resulting in the formation of new cracks (red and green reference bounding boxes) as well as the translation of sea ice causing cracks to merge (yellow bounding box). The data are moreover subject to measurement errors yielding low image quality, as visualized both by the vertical streaks of bright artifacts and the minor coregistration misalignments. We analyze the dynamics through eight intermediate stages with the goal to explore how sea ice movement influences the evolution and generation of cracks.
Each image comprises \(1024\times 1024\) pixels. The images undergo a similar patch scanning as was done for the transition dynamics problem (Example 1), with a stride of \(10\times 10\), yielding 5776 pairs of patches. Table 3 provides the hyperparameter information used for training.
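For concreteness, the sliding-window bookkeeping behind these patch counts can be sketched as follows. This is an illustration only: NumPy is assumed, and the patch size is a placeholder since it is not restated in this section; the quoted \(5776=76\times 76\) pairs correspond to 76 scan positions per axis under the paper's own patch size.

```python
# Minimal sketch of aligned patch-pair extraction from the initial and
# terminal images; `patch = 256` is a placeholder, not a value from the paper.
import numpy as np

def extract_patch_pairs(img0, img1, patch, stride):
    """Aligned (initial, terminal) patches scanned over both images."""
    H, W = img0.shape
    return [(img0[i:i + patch, j:j + patch], img1[i:i + patch, j:j + patch])
            for i in range(0, H - patch + 1, stride)
            for j in range(0, W - patch + 1, stride)]

img0, img1 = np.zeros((1024, 1024)), np.zeros((1024, 1024))
pairs = extract_patch_pairs(img0, img1, patch=256, stride=10)
# count = ((H - patch)//stride + 1)**2; with patch=256 this gives 77**2 = 5929.
# For inference, patches are instead tiled with stride = patch, so that the
# non-overlapping predictions cover the whole image.
print(len(pairs))
```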
As was done for Example 1, the model's inference is then employed in non-overlapping areas to cover the entire image, leading to the estimation of the intermediate stage dynamics. Figure 8 is split into upper (\([0,4\Delta t]\)) and lower (\([5\Delta t,8\Delta t]\)) temporal sequences of images. The first row in both sequences showcases the sequential generation and disappearance of cracks at the intermediate stages, demonstrating that the changes are not simultaneous. Of note, we observe that during the initial time interval \([0,3\Delta t]\), the crack within the yellow bounding box gradually merges and vanishes. Subsequently, from \([4\Delta t,6\Delta t]\), cracks emerge in the green bounding boxes, followed by the appearance of a crack within the red bounding box during \([6\Delta t,8\Delta t]\). The top row of Figure 9(a) provides additional sequential dynamics details.
Figure 7: Initial (left) and terminal (right) images for the rotational dynamics in Example 2. The colored bounding boxes indicate the regions of interest.
These results stand in contrast to those produced using OT depicted in the second row of each sequence in Figure 8, where all cracks simultaneously and gradually evolve, either forming or disappearing over the entire time span \([0,8\Delta t]\), which is also observed in the second row of Figure 9(a). Moreover, the dynamics resulting from OT do not demonstrate rigid movement or rotation. Instead, as can be seen in the portion enclosed in each white box spanning the time interval \([2\Delta t,6\Delta t]\), they appear to involve local interpolation between the initial and the terminal states, as evidenced by the simultaneous presence of both states during the evolution. This contrast is highlighted in the top and middle rows of Figure 9(b).
As also was done for Example 1, the third rows of each portion of Figure 8 and Figure 9 illustrate an attempt to learn the advection vector field directly within the image space. Similar to what was observed in the third column of Figure 4 for the transition dynamics problem, once again we see that direct application of PDEs in the image space proves ineffective in driving the image from its initial state to the terminal state. This result further substantiates the advantage of our approach in using the latent space, as it aligns well with the physical model.
To further comprehend details of the image dynamics, we refer to the quantification of vector fields in the latent space, as depicted in Figure 10. Fitting an advection PDE model to the extracted latent states yields a field that illustrates an overarching counterclockwise rotation trend throughout the evolution period. In particular, we observe that (i) a downward shift originating from the yellow bounding box compresses cracks within the box, leading to their merging; (ii) the impact then propagates through the green bounding box, fracturing the sea ice and giving rise to cracks; and subsequently (iii) a combination of upward and downward impacts around the red bounding box generates the final crack.
Finally, we note that despite the presence of artifacts within the images, the learning of vector fields remains uninterrupted, even in proximity to latent space locations corresponding to the artifacts. We also emphasize that as demonstrated in the middle row of Figure 8 (both sections), such rotational dynamics are challenging for OT methods.
\begin{table}
\begin{tabular}{c||c|c|c} & encoder \(\phi(\cdot;\boldsymbol{\theta}_{1})\) & decoder \(\psi(\cdot;\boldsymbol{\theta}_{2})\) & field extraction \(\eta(\cdot;\boldsymbol{\theta}_{3})\) \\ \hline \hline hidden layers & \(16,32,64,32,16,8\) & \(\cdot\) & \(\cdot\) \\ \hline input layers & \(\cdot\) & \(32\) & \(32\) \\ downsampling & \(\cdot\) & \(32(2),64(2),128(2)\) & \(32(2),64(2),128(2)\) \\ bottleneck & \(\cdot\) & \(256\) & \(256\) \\ upsampling & \(\cdot\) & \(128(2),64(2),32(2)\) & \(128(2),64(2),32(2)\) \\ output layers & \(\cdot\) & \(32\) & \(32\) \\ \end{tabular}
\begin{tabular}{c|c|c|c|c|c|c} \(N_{\text{evolution}}\) & \(S_{H}\times S_{W}\) & \(\lambda_{\text{AE}}\) & \(\lambda_{\text{magnitude}}\) & \(\lambda_{\text{smooth}}\) & \(\alpha\) & \(\gamma\) \\ \hline \hline
8 & \(10\times 10\) & \(1\). & \(0.001\) & \(0.06\) & \(0.0001\) & \(0.9\) \\ \end{tabular}
\end{table}
Table 3: Hyperparameters for training the proposed model in Example 2. (Top) the numbers of convolutional filters in neural networks; (bottom) number of evolution steps, stride for scanning the patch over the whole image, regularization parameters, and learning parameters.
Figure 8: Intermediate stages (blue dashed borders) of rotational dynamics given initial and terminal states in Figure 7 (red borders). (Upper) temporal sequence of images in \([0,4\Delta t]\); (lower) temporal sequence of images in \([5\Delta t,8\Delta t]\). In each portion: (top) our new latent space dynamics method; (middle) OT [30]; and (bottom) direct application of PDE model in the image space.
Figure 9: The dynamics of the (a) crack regions and (b) rotation enclosed by the colored bounding boxes in Figure 8. (top) our latent space dynamic model; (middle) OT [30]; and (bottom) direct application of PDE model in the image space.
### Complex dynamics with new feature generation
Figure 11: Initial (left) and terminal (right) images for Example 3.
Figure 10: The latent dynamics of rotational image dynamics corresponding to the first column of Figure 8 along with vector fields (green) and streamlines (pink).
Figure 11 provides a final prototype (Example 3) to test our latent space dynamics approach. The figure depicts a physical structure resembling a valley in the right part of the image. The sea ice distribution initially appears to be smooth with cracks subsequently emerging, notably at the valley entrance as well as across the valley. In contrast to the previous two examples, where existing features were essentially relocated, this scenario involves the generation of entirely new features at the terminal states. Our objective is then to understand how the model adapts latent space extraction and corresponding vector fields to generate these new features.
Our model is trained with the hyperparameters detailed in Table 4, and employs pairs of patches extracted from images measuring \(256\times 512\) pixels. The first and third columns of Figure 12 showcase the evolution of crack generation at intermediate times \(t\in[\Delta t,10\Delta t]\), where the non-overlapping inference is derived from the trained latent space dynamics model. We observe that over this time period, the left portion in each of the images, that is, the region outside the valley, gradually undergoes crack formation. These cracks exhibit a tendency to move rightward toward the valley. Additionally, starting at \(t=4\Delta t\), cracks begin to appear and propagate into the valley. As was done in our previous examples, we compare these results to those obtained using OT, which are shown in the second and fourth columns of Figure 12. In this case we observe that all cracks across the image, spanning from the valley's entrance to the valley itself, gradually and concurrently become more distinct, rather than sequentially through dynamic movements. Additionally, due to disparities in total pixel density between the initial and terminal images, the normalized images exhibit variations in stationary valley regions (see bottom right corner of each frame).
Figure 13 provides additional insight into the evolution in latent space, offering detailed vector fields. These fields reveal a dynamic shift roughly counterclockwise over the span of \([0,6\Delta t]\). The remaining period, \([6\Delta t,10\Delta t]\), displays discernible streamlines penetrating into the valley.
## 5 Concluding remarks
This work introduces a new latent space dynamics neural network approach for estimating in-between imagery dynamics. Given the absence of temporal information in the provided data and the inherent complexity of images, addressing the unique evolution remains a significant challenge. To address this, our new method employs a PDE model to indirectly drive image evolution through
\begin{table}
\begin{tabular}{c||c|c|c} & encoder \(\phi(\cdot;\boldsymbol{\theta}_{1})\) & decoder \(\psi(\cdot;\boldsymbol{\theta}_{2})\) & field extraction \(\eta(\cdot;\boldsymbol{\theta}_{3})\) \\ \hline \hline hidden layers & \(16,32,64,32,16,8\) & \(\cdot\) & \(\cdot\) \\ \hline input layers & \(\cdot\) & \(32\) & \(32\) \\ downsampling & \(\cdot\) & \(32(2),64(2),128(2)\) & \(32(2),64(2),128(2)\) \\ bottleneck & \(\cdot\) & \(256\) & \(256\) \\ upsampling & \(\cdot\) & \(128(2),64(2),32(2)\) & \(128(2),64(2),32(2)\) \\ output layers & \(\cdot\) & \(32\) & \(32\) \\ \end{tabular}
\begin{tabular}{c|c|c|c|c|c|c} \(N_{\text{evolution}}\) & \(S_{H}\times S_{W}\) & \(\lambda_{\text{AE}}\) & \(\lambda_{\text{magnitude}}\) & \(\lambda_{\text{smooth}}\) & \(\alpha\) & \(\gamma\) \\ \hline \hline
10 & \(10\times 10\) & 1. & 0.01 & 0.001 & & 0.8 \\ \end{tabular}
\end{table}
Table 4: Hyperparameters for training the proposed model in Example 3. (Top) the numbers of convolutional filters in neural networks; (bottom) number of evolution steps, stride for scanning the patch over the whole image, regularization parameters, and learning parameters.
a physically informed latent space. This approach is beneficial since latent dynamics can be more comprehensible when framed within PDEs. Specifically, our framework employs the advection equation to model latent evolution while maintaining spatial correlation between image and latent space. This correlation enables us to interpret image dynamics both quantitatively and qualitatively through learned advection vector fields.
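As an illustration of what driving the latent evolution with an advection PDE entails in practice, the sketch below performs one explicit upwind step of \(\partial_{t}z+\mathbf{v}\cdot\nabla z=0\) on a latent grid. The first-order upwind discretization and periodic boundaries are our assumptions for the sketch, not necessarily the discretization used in the paper.

```python
# One explicit upwind Euler step of the advection equation on a 2-D latent
# grid z, given a (learned) vector field (vx, vy). Illustrative only.
import numpy as np

def advect_step(z, vx, vy, dt, dx=1.0):
    """One upwind step of dz/dt + v . grad z = 0 (periodic boundaries)."""
    dzdx_m = (z - np.roll(z, 1, axis=1)) / dx   # backward difference in x
    dzdx_p = (np.roll(z, -1, axis=1) - z) / dx  # forward difference in x
    dzdy_m = (z - np.roll(z, 1, axis=0)) / dx
    dzdy_p = (np.roll(z, -1, axis=0) - z) / dx
    # upwinding: difference against the local flow direction for stability
    dzdx = np.where(vx > 0, dzdx_m, dzdx_p)
    dzdy = np.where(vy > 0, dzdy_m, dzdy_p)
    return z - dt * (vx * dzdx + vy * dzdy)

# usage: evolve a latent field over N_evolution intermediate steps
z = np.random.rand(64, 64)
vx, vy = np.full((64, 64), 0.5), np.zeros((64, 64))
for _ in range(10):
    z = advect_step(z, vx, vy, dt=0.1)  # CFL: dt*|v|/dx < 1
```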
We substantiate the effectiveness of our proposed method on SAR imagery from sea ice investigations. Our method compares favorably to more commonly used OT approaches. In particular, it transcends limitations tied to total pixel conservation, allowing for interpolation beyond image boundaries to generate inflow and outflow features. Moreover, it effectively simulates new feature emergence even when the initial and terminal states exhibit discrepancies in total pixel density. Finally, our approach effectively captures rotational dynamics, a well-known challenge for OT methods [24].
Our work presents opportunities for approximating in-between imagery dynamics in other application domains. Specifically, we have demonstrated the power of learning patch-to-patch dynamics through continuous image scanning, implicitly accounting for global-scale features through neural networks. One possible extension might be to directly consider different scales with various patch
Figure 12: The estimation for intermediate stages (blue dashed borders) of the dynamics given initial and terminal states in Figure 11 (red borders). (First and third columns) our new latent space dynamics method; (second and fourth columns) OT approach [30].
sizes, thereby effectively capturing the multiscale nature of image features. This is akin to the principles underlying multigrid methods. A recent study [13] shows that hierarchical training based on the different scale components of the multiscale features can expedite the training process while improving the prediction skills. Thus, it is natural to investigate whether such hierarchical learning can be applied in learning multiscale dynamics.
There are also interesting opportunities in exploring the use of other physical operators (beyond advection) within latent dynamics and in assessing their contributions to image dynamics. Different types of PDEs may exhibit varying levels of trainability. For example, evolution driven by diffusion couples neighboring values and is therefore distinct from the advection case.
Finally, incorporating any additional available information about the intermediate time periods may significantly reduce scenario uncertainty. Strategies including regularization and/or data assimilation during the encoding and decoding process could prove valuable in this context. Our proposed method provides a strong foundation for any of these future research directions.
Figure 13: The latent dynamics of complex image dynamics corresponding to the first column of Fig. 12 along with vector fields (green) and streamlines (pink).
## Acknowledgments
All authors are supported by the DoD MURI grant ONR # N00014-20-1-2595. AG is also supported by the AFOSR grant # FA9550-22-1-0411 and the NSF grant DMS # 1912685.
|
2306.03138 | Effective field theory for radiative corrections to charged-current
processes I: Vector coupling | We study radiative corrections to low-energy charged-current processes
involving nucleons, such as neutron beta decay and (anti)neutrino-nucleon
scattering within a top-down effective-field-theory approach. We first match
the Standard Model to the low-energy effective theory valid below the weak
scale and, using renormalization group equations with anomalous dimensions of
$\mathcal{O}(\alpha, \alpha \alpha_s, \alpha^2)$, evolve the resulting
effective coupling down to the hadronic scale. Here, we first match to
heavy-baryon chiral perturbation theory and subsequently, below the pion-mass
scale, to a pionless effective theory, evolving the effective vector coupling
with anomalous dimensions of $\mathcal{O}(\alpha, \alpha^2)$ all the way down
to the scale of the electron mass, relevant for beta decays. We thus provide a
new evaluation of the ``inner" radiative corrections to the vector coupling
constant and to the neutron decay rate, discussing differences with the
previous literature. Using our new result for the radiative corrections, we
update the extraction of the Cabibbo-Kobayashi-Maskawa matrix element $V_{ud}$
from the neutron decay. | Vincenzo Cirigliano, Wouter Dekens, Emanuele Mereghetti, Oleksandr Tomalak | 2023-06-05T18:00:05Z | http://arxiv.org/abs/2306.03138v2 | # Effective field theory for radiative corrections
###### Abstract
We study radiative corrections to low-energy charged-current processes involving nucleons, such as neutron beta decay and (anti)neutrino-nucleon scattering within a top-down effective-field-theory approach. We first match the Standard Model to the low-energy effective theory valid below the weak scale and, using renormalization group equations with anomalous dimensions of \({\cal O}(\alpha,\alpha\alpha_{s},\alpha^{2})\), evolve the resulting effective coupling down to the hadronic scale. Here, we first match to heavy-baryon chiral perturbation theory and subsequently, below the pion-mass scale, to a pionless effective theory, evolving the effective vector coupling with anomalous dimensions of \({\cal O}(\alpha,\alpha^{2})\) all the way down to the scale of the electron mass, relevant for beta decays. We thus provide a new evaluation of the "inner" radiative corrections to the vector coupling constant and to the neutron decay rate, discussing differences with the previous literature. Using our new result for the radiative corrections, we update the extraction of the Cabibbo-Kobayashi-Maskawa matrix element \(V_{ud}\) from the neutron decay.
LA-UR-22-21034, INT-PUB-23-015
###### Contents
* 1 Introduction
* 2 Statement of the problem and results
* 3 Step I: matching the Standard Model to LEFT
* 3.1 Wilson coefficient and RGE
* 3.2 External sources and spurions
* 4 Step II: matching LEFT to HBChPT
* 4.1 The Chiral Lagrangian
* 4.2 Electromagnetic coupling constant
* 4.3 Electroweak coupling constants
* 5 Corrections to \(g_{V}\)
* 5.1 Matching at the baryon-mass scale
* 5.2 Evaluation of the non-perturbative input
* 5.3 RG evolution of \(g_{V}\) below the baryon scale
* 5.4 Numerical results and uncertainty estimates
* 6 Corrections to neutron decay and impact on \(V_{ud}\)
* 6.1 "Long distance" electromagnetic corrections and differential decay rate
* 6.2 Total decay rate and extraction of \(V_{ud}\)
* 6.3 Comments on radiative corrections to nuclear decays
* 7 Conclusions and Outlook
* A Electromagnetic fine-structure constant in LEFT and \(\chi\)PT
* A.1 Charge renormalization in LEFT
* A.2 Charge renormalization in \(\chi\)PT
* B Factorization of the decay rate in the nonrelativistic limit
* C Details on the two-loop anomalous dimensions
## 1 Introduction
Low-energy processes mediated by the charged-current (CC) weak interaction provide promising ways to test the Standard Model (SM) and indirectly search for new physics, provided sufficiently high experimental and theoretical precision can be achieved. In recent years, there has been a resurgence of interest in beta decays and CC neutrino scattering on nuclei. On one hand, the study of beta decays at the sub-permille level provides a unique window into possible new physics at the multi-TeV scale. Recent analyses [1, 2, 3, 4, 5, 6, 7, 8] have uncovered a \(3\sigma\) tension with the Standard Model interpretation of these processes in terms of the unitary Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix [7, 9]. Moreover, global analyses of beta decay observables [10, 11], including decay correlations, offer unique ways to probe nonstandard CC interactions with Lorentz structures different from the SM "\(V-A\)". On the other hand, the interest in CC neutrino scattering process stems primarily from neutrino oscillation experiments [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], as precise theoretical predictions are needed to calibrate the neutrino fluxes and reconstruct the neutrino energy [25, 26, 27, 28, 29, 30]. In what follows, we will focus on beta decays (\(n\to pe\bar{\nu}_{e}\)) but our results, based on a
low-energy effective theory, apply to neutrino scattering processes such as \(\bar{\nu}_{e}p\to e^{+}n\) and \(\nu_{e}n\to ep\) at low energy as well.
One of the key ingredients to achieve high theoretical precision in beta decays (sub-permille, allowing one to probe physics up to 20 TeV) is the calculation of electromagnetic radiative corrections, controlled by an expansion in \(\alpha/\pi\), where \(\alpha\approx 1/137.036\) is the fine-structure constant. The analysis of radiative corrections to beta decays has a long history, predating the formulation of the Standard Model of electroweak and strong interactions. In the early work from the 1950s [31, 32], the nucleon was treated as point-like and the weak interaction was described in terms of the \((V-A)\times(V-A)\) current-current contact operator. In the framework of the local \((V-A)\times(V-A)\) theory, two developments from the 1960s have influenced all the subsequent literature. In Ref. [33], Sirlin identified a set of ultraviolet (UV)-finite and gauge-invariant corrections to the beta spectrum and decay rate that are independent of the details of the strong interaction, the so-called universal "outer" corrections. Ref. [33] also identified a set of "inner" corrections that essentially shift the strength of the vector (Fermi) and axial-vector (Gamow-Teller) couplings at the single-nucleon level, pointing out that in principle these "inner" corrections depend on the strong-interaction dynamics. Shortly afterwards, using current-algebra techniques, the authors of Ref. [34] showed that to \(\mathcal{O}(\alpha)\) the contribution of the weak vector current \(V\) to the Fermi transition "inner" correction is calculable without knowledge of details about the strong interactions, leading to a universal UV-divergent correction. Ref. [34] also showed that the contribution to the Fermi transition due to the weak axial current \(A\) does depend on the strong-interaction details. This class of "inner" corrections was parameterized in terms of correlation functions of weak and electromagnetic hadronic currents in the nucleon state, and crudely estimated with available models of strong interactions.
The current-algebra formulation of radiative corrections was later embedded in the Standard Model by Sirlin [35, 36], who also computed the leading logarithmic corrections to \(\mathcal{O}(\alpha)\) and \(\mathcal{O}(\alpha\alpha_{s})\). Since then, the calculation of the terms that depend on the strong interactions has been performed in this framework with more sophisticated hadronic models, culminating in the 2006 prediction [37] for the "inner" correction to the vector amplitude. Large logarithms originating from both the UV (from \(M_{Z}\) to the hadronic scale of the order of nucleon mass \(m_{N}\)) and IR (from \(m_{N}\) to \(m_{e}\)) have been resummed in the leading logarithmic approximation (see Ref. [38] and references therein). Ref. [38] also includes next-to-leading logarithms in \(\alpha\) that are enhanced by the number of fermion species.
The next important development in the field has been the calculation of the non-perturbative input for the "inner" corrections using dispersive methods, pioneered by Seng et al. [1, 2]. This has led to a reduced uncertainty and an increase in the central value of the "inner" correction to the Fermi coupling, later reproduced by Refs. [3, 4, 5]. In this framework, lattice QCD has been used to supplement non-perturbative input in the meson sector [39, 40, 41, 42] and efforts to do the same in the nucleon sector are underway [43, 40].
All the results described above are rooted in the current-algebra framework developed by Sirlin [35]. While this method is rigorous, it does not take full advantage of modern effective field theory (EFT) techniques, neither at the level of short-distance physics (the evolution of the interactions from the electroweak scale to the hadronic scale), nor at the level of strong interactions (chiral EFT for mesons, nucleons, and eventually nuclei). The use of EFT techniques is not a mere reformulation of the problem. EFT provides a rigorous way to connect scales and estimate uncertainties. Moreover, EFT methods can bring new insights to the problem. In fact, by providing a simple framework to analyze hadronic correlation functions, the study of neutron decay to \(\mathcal{O}(G_{F}\alpha)\) in Heavy Baryon Chiral Perturbation Theory (HBChPT) [44] has uncovered a new %-level "inner" correction to the ratio \(g_{A}/g_{V}\) of axial to vector nucleon couplings, missed in previous analyses based on current algebra [5, 45].
In the HBChPT framework for single-nucleon weak CC processes, developed in Refs. [44, 46], the active degrees of freedom are the light leptons, photons, pions, and nucleons. The effect of both electroweak and other hadronic-scale physics is encoded in a number of low-energy constants (LECs). The goal of this paper is to develop a matching procedure to express the relevant LECs in terms of perturbatively calculable Wilson coefficients and hadronic correlation functions that can be then estimated with non-perturbative methods, such as dispersive methods or lattice QCD. Since there are multiple thresholds,
the electroweak scale \(\sim M_{W,Z}\), the chiral symmetry breaking scale \(\Lambda_{\chi}\sim m_{N}\sim\) GeV, with \(m_{N}\) the mass of nucleon, and the pion mass, we adopt a multi-step matching strategy. The first step connects the full Standard Model to the so-called low-energy effective theory (LEFT) below the weak scale, which coincides with the \(V-A\) theory of weak interactions augmented by QED and QCD. This is a perturbative matching step. The second step connects the LEFT to HBChPT and involves non-perturbative physics. These first two steps are similar in spirit to the analysis of Ref. [47] for the meson sector. The third step consists of integrating out the pions, by matching HBChPT onto a pionless EFT (\(\not{\pi}\)EFT) as detailed in Ref. [44]. The main novel aspects of our work are:
* We evaluate the relevant LEFT Wilson coefficient to next-to-leading logarithm accuracy in \(\alpha\): we implement the matching condition at \(\mu_{SM}\sim M_{Z}\) at one-loop and the running via the two-loop anomalous dimension of \(\mathcal{O}(\alpha^{2})\), for which we provide for the first time the full expression. We also use the known two-loop anomalous dimension of \(\mathcal{O}(\alpha\alpha_{s})\) and present solutions of the Renormalization Group Equations (RGEs) summing leading and next-to-leading logarithms of the ratio \(M_{Z}/m_{N}\).
* We set up the general formalism and provide explicit expressions for the HBChPT LECs that shift the vector coupling \(g_{V}\). The relevant non-perturbative input can be obtained either from the existing dispersive analyses [2] or lattice QCD in the future.
* We solve the RGEs for the vector coupling \(g_{V}(\mu_{\chi})\), using one- and two-loop anomalous dimensions in \(\not{\pi}\)EFT. This allows us to sum the leading and next-to-leading logarithms involving the ratio \(m_{N}/E_{0}\), where \(E_{0}\simeq 2.530\)\(m_{e}\) is the electron energy endpoint, representing the infrared (IR) scale of the problem. The RGE evolution thus allows us to identify all terms in the amplitude proportional to \(\alpha^{2}\ln(m_{N}/E_{0})\). Our treatment of these next-to-leading large logarithms differs from the one found in the literature, as discussed in Section 2.
* Throughout, we use dimensional regularization with modified minimal subtraction (\(\overline{\rm MS}\)[48]) in the LEFT, and the chiral version of it (\(\overline{\rm MS}_{\chi}\)[49]), specifying at every step the \(\gamma_{5}\) and evanescent operator scheme. In this framework, the renormalization group (RG) equations have a very simple form, and the standard results on leading logarithm and next-to-leading logarithm resummation can be applied. The residual sensitivity to the renormalization scale order by order in RG-improved perturbation theory gives us a rigorous way to estimate the perturbative uncertainties. More generally, our results provide a new framework to analyze low-energy CC processes to \(\mathcal{O}(G_{F}\alpha)\), largely independent from the current algebra formalism [35].
* As a first application of the new framework, using the dispersive input from Refs. [1, 2, 3, 4, 5, 6] as compiled in Ref. [8], we evaluate the combination of LECs that determine the "inner" corrections to the Fermi transition effective coupling \(g_{V}\). We combine this with the known \(\mathcal{O}(\alpha)\) radiative corrections to the matrix element in HBChPT [44, 46]. We further resum the Coulomb-enhanced terms scaling as \((\pi\alpha/\beta)^{n}\) (\(\beta\equiv p_{e}/E_{e}\)) as well as subleading \(\alpha/\pi(\pi\alpha/\beta)^{n}\) terms, in the nonrelativistic Fermi function, which is the natural quantity appearing in \(\not{\pi}\)EFT. In practice, this amounts to replacing the relativistic Fermi function (which contains large logarithms \(\sim\alpha^{2}\ln(R_{p}p_{e})\), with the proton radius \(R_{p}\sim 1/\Lambda_{\chi}\)) with its nonrelativistic counterpart. Finally, we study the impact on the extraction of \(V_{ud}\) from neutron decay. For the total corrections to the neutron decay rate, we find a result that is one \(\sigma\) above the previous results, pointing to a correspondingly smaller value for \(V_{ud}\).
The paper is organized as follows. In Section 2, we provide a high-level summary of the results worked out in the rest of the paper, highlighting the connections to and differences from the previous literature. Following a top-down approach, we perform a multi-step matching to connect electroweak physics with neutron and nuclear decays. The first step, connecting the full Standard Model to the LEFT, is presented in Section 3. The second step, connecting the LEFT to HBChPT, is presented in Section 4. The resulting effective vector coupling \(g_{V}(\mu_{\chi}\sim m_{N})\) at the matching scale \(\mu_{\chi}\sim m_{N}\) and its evolution to the scale
of the decay, \(\mu_{\chi}\sim E_{0}\), is discussed in Section 5. In Section 6, we discuss the implications for neutron decay and the determination of \(V_{ud}\) and comment on the relation to superallowed \(0^{+}\to 0^{+}\) transitions. Conclusions and outlook are presented in Section 7. Appendix A contains details about electric charge renormalization and running in the LEFT and Chiral Perturbation Theory. Appendix B discusses the factorization of the nonrelativistic Fermi function in nonrelativistic QED, while Appendix C contains details on the extraction of the \({\cal O}(\alpha^{2})\) anomalous dimension in LEFT and HBChPT/\(\pi\)EFT.
## 2 Statement of the problem and results
Neutron decay is a low-energy process characterized by the energy scales of the neutron-proton mass difference, \(m_{n}-m_{p}\approx 1.3\) MeV, and the electron mass \(m_{e}\approx 511\) keV. These scales, which we denote by \(q_{\rm ext}\), are much smaller than the pion mass, \(m_{\pi}\approx 137\) MeV, the nucleon mass, \(m_{N}\approx 939\) MeV, and the \(W\) boson mass, \(M_{W}\approx 80\) GeV. The existence of widely separated mass scales makes the process amenable to a description based on EFTs. In this work, we systematically implement EFT methods to study low-energy charged-current processes such as neutron decay. We first integrate out the heavy particles (\(W\), \(Z\), \(h\), \(t\)) and match the full Standard Model onto the so-called LEFT. Subsequently, we integrate out the scale of the nucleon mass, by matching the LEFT onto HBChPT [50]. We finally integrate out physics at the scale of the pion mass, following [44], by matching HBChPT onto \(\not\!\pi\)EFT. The neutron decay rate is thus organized in an expansion in several small parameters (besides \(G_{F}q_{\rm ext}^{2}\), which sets the overall scale): the electromagnetic coupling constant \(\alpha\); \(\epsilon_{\rm recoil}=q_{\rm ext}/m_{N}\), which describes small kinematic corrections; \(\epsilon_{\not\pi}=q_{\rm ext}/m_{\pi}\), which controls the \(\not\!\pi\)EFT expansion; and \(\epsilon_{\chi}=m_{\pi}/\Lambda_{\chi}\), which controls the chiral expansion. The Wilson coefficients and low-energy constants generated in these matching steps combine into an effective vector coupling at the baryon scale, \(g_{V}(\mu_{\chi}\sim m_{N})\); renormalization group evolution in \(\not\!\pi\)EFT then brings it down
to \(g_{V}(\mu_{\chi}\sim m_{e})\), resumming large next-to-leading logarithms of order \(\alpha^{2}\ln\left(m_{N}/m_{e}\right)\). The resulting \(g_{V}(\mu_{\chi}\sim m_{e})\) is directly relevant to the calculation of neutron decay and can be used as input for the one-body contribution to nuclear decays.
In this work, we have focused on the application to neutron decay. With \(g_{V}(\mu_{\chi}\sim m_{e})\) at hand, we combined both virtual and real photon corrections to the decay rate [33, 44, 46] to obtain the effective phase-space correction \(\Delta_{f}\) and the radiative correction \(\Delta_{R}\) to the neutron lifetime, see Section 6, and the relation
\[|V_{ud}|^{2}\,\tau_{n}\,\left(1+3\lambda^{2}\right)\left(1+\Delta_{f}\right) \left(1+\Delta_{R}\right)=5283.321(5)\ \mathrm{s}, \tag{4}\]
with \(\Delta_{f}\) and \(\Delta_{R}\) given in Eqs. (109) and (110), respectively. Our definitions for \(\Delta_{f}\) and \(\Delta_{R}\) differ from the traditional approach both conceptually and numerically. Technically, the bulk of this difference is in shifting all short-distance contributions from \(\Delta_{f}\) to \(\Delta_{R}\). \(\Delta_{f}\) describes Coulomb-enhanced long-distance contributions and recoil corrections, while \(\Delta_{R}\) includes all electroweak and HBChPT short-distance contributions along with the non-Coulomb radiative corrections in \(\not\!\pi\)EFT, as specified in Eqs. (78), (89), and (113). Numerically, we find
\[\Delta_{f}=3.573(5)\times 10^{-2}, \tag{5}\] \[\Delta_{R}=4.044(24)_{\mathrm{Had}}(8)_{\alpha\alpha_{s}^{2}}(7) _{\alpha\epsilon_{\chi}^{2}}(5)_{\mu_{\chi}}[27]_{\mathrm{total}}\times 10^{-2}. \tag{6}\]
The uncertainty in \(\Delta_{f}\) stems from an estimate of mixed recoil times Coulomb corrections. The dominant sources of uncertainty in \(\Delta_{R}\) are: the non-perturbative hadronic contributions, associated with the "\(\gamma W\) box" diagram in the standard approach [1, 2, 3, 4, 5, 6]; contributions of \(\mathcal{O}(\alpha\alpha_{s}^{2})\) not included in our renormalization group analysis in the LEFT; chiral corrections of \(\mathcal{O}(\alpha\epsilon_{\chi}^{2})\); and the residual dependence on the \(\not\!\pi\)EFT renormalization scale, varied between \(m_{e}/\sqrt{2}\) and \(\sqrt{2}m_{e}\), which is an indicator of the \(\mathcal{O}(\alpha^{2})\) corrections. A detailed discussion of uncertainties is presented in Sections 5.4 (for \(g_{V}\)) and 6.2 (for the remaining contributions to \(\Delta_{R}\)).
Our result for \(\Delta_{f}\) in Eq. (5) differs from the one found in the literature \(\Delta_{f}=3.608\times 10^{-2}\)[38] by \(-0.035\%\). This is because in the phase space integration we use the nonrelativistic Fermi function, for the reasons discussed in Section 6.1, and neglect corrections induced by modeling the proton as a uniformly charged sphere of radius \(R_{p}\simeq 1\) fm [53] (this effect is at the level of \(0.005\%\)).
Our result for \(\Delta_{R}\) in Eq. (6) exceeds the current value \(\Delta_{R}=3.983(27)\times 10^{-2}\), compiled in Ref. [8] by combining the results of [1, 2, 3, 4, 5, 6], by about twice the estimated uncertainties. The \(+0.061\%\) shift in the central value is almost entirely due to the different treatment of the next-to-leading logarithmic terms at the hadronic level, i.e., the terms that scale as \(\alpha^{2}\ln\left(m_{N}/m_{e}\right)\). In both approaches, there is a contribution of this type coming from the cross term between the one-loop RGE correction to \(g_{V}\), scaling as \(\frac{\alpha}{\pi}\ln\left(m_{N}/m_{e}\right)\), and \(\mathcal{O}\left(\frac{\alpha\pi}{\beta}\right)\) terms in the Fermi function. In our approach, additional \(\alpha^{2}\ln\left(m_{N}/m_{e}\right)\) large logarithmic corrections arise entirely from the two-loop anomalous dimension contribution to the RGE (88) for the effective coupling \(g_{V}(\mu_{\chi})\) and produce a positive shift in \(\Delta_{R}\) of \(0.010\%\). In the EFT approach, there are no other sources of large logarithms of the ratio \((m_{N}/m_{e})\) in the matrix element of the four-fermion operator (1) to \(\mathcal{O}(\alpha^{2})\). In the literature, this class of effects is not associated with the running of \(g_{V}\), but arises through the _negative_ correction \(\alpha/(2\pi)\times\delta=-0.043\%\), introduced in Ref. [38] by adapting the results of Refs. [54, 55].1 The mismatch of the two approaches produces a \(+0.053\%\) shift in our results. The remaining difference is due to a combination of the following, individually smaller, effects: (i) we re-evaluate the "elastic" hadronic contribution, as discussed in Section 5.2, which leads to a \(-0.006\%\) shift to \(\Delta_{R}\); (ii) for the next-to-leading logarithmic corrections of \(\mathcal{O}(\alpha^{2}\ln(M_{W}/m_{c}))\), our result differs from the one in Ref. [38], producing a negative shift of approximately \(-0.011\%\); (iii) we do
not include \(\mathcal{O}(\alpha\alpha_{s}^{2})\) terms in the running of our Wilson coefficient (corresponding to the "deep inelastic scattering" region of the \(\gamma W\) box in the literature) that amounts to a net \(+0.007\%\) in \(\Delta_{R}\); (iv) finally, different choices in the factorization between electroweak and \(m_{N}/m_{e}\) logarithms compared to Refs. [8, 38] account for the remaining mismatch.
Using \(\Delta_{f,R}\) from Eqs. (5) and (6), respectively, in the master formula (4), we can extract \(V_{ud}\). This requires experimental input for the neutron lifetime \(\tau_{n}\) and the ratio \(\lambda\) of axial to vector couplings. Using the PDG [56, 57] averages for the experimental input, we obtain
\[V_{ud}^{\text{n, PDG}}=0.97430(2)_{\Delta_{f}}(13)_{\Delta_{R}}(82)_{\lambda}( 28)_{\tau_{n}}[88]_{\text{total}}. \tag{7}\]
Both \(\tau_{n}\) and \(\lambda\) carry an inflated error due to scale factors. Following Ref. [8], if we instead use the most precise neutron lifetime measurement \(\tau_{n}=877.75(36)\) s from UCN\(\tau\)@LANL [58] and the determination of \(\lambda\) from the most precise measurement of the beta asymmetry in polarized neutron decay by PERKEO-III [59, 60], we obtain a very competitive extraction of \(V_{ud}\) from neutron decay:
\[V_{ud}^{\text{n, best}}=0.97402(2)_{\Delta_{f}}(13)_{\Delta_{R}}(35)_{\lambda} (20)_{\tau_{n}}[42]_{\text{total}}, \tag{8}\]
with an uncertainty approaching the currently quoted error \(\delta V_{ud}=31\times 10^{-5}\) from \(0^{+}\to 0^{+}\) nuclear beta decays [7]. Compared to the baseline correction of Refs. [1, 2, 3, 4, 5, 6, 8], the positive shift of \(+0.061\%\) in \(\Delta_{R}\) and the negative shift of \(-0.035\%\) in \(\Delta_{f}\) partially compensate, producing a smaller positive shift of \(+0.026\%\) in the correction to the rate. This one, in turn, provides a negative shift in \(V_{ud}\), \(\delta V_{ud}\simeq-13\times 10^{-5}\), compared to the results quoted in Ref. [8].
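For orientation, the arithmetic behind Eq. (8) can be reproduced in a few lines. This is a sketch, not the paper's full error propagation: the value \(\lambda=-1.27641\) is the PERKEO-III number inserted here as an assumption (it is not restated above), and all other inputs are the numbers quoted in this section.

```python
# Minimal numeric check of the master formula (4) with the "best" inputs.
tau_n   = 877.75        # s, UCNtau@LANL lifetime quoted above
lam     = -1.27641      # g_A/g_V, assumed PERKEO-III value
Delta_f = 3.573e-2      # Eq. (5)
Delta_R = 4.044e-2      # Eq. (6)

Vud2 = 5283.321 / (tau_n * (1 + 3 * lam**2) * (1 + Delta_f) * (1 + Delta_R))
print(f"V_ud = {Vud2 ** 0.5:.5f}")
# -> ~0.9740, consistent with Eq. (8) up to the rounding of the inputs.
```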
In the remainder of this paper, we provide details on the derivation of the results presented above.
## 3 Step I: matching the Standard Model to LEFT
In this Section, we perform the matching of the Standard Model to the LEFT and present the RGE that control the effective couplings in the LEFT between the electroweak and QCD scales. We then introduce spurions and external sources in the LEFT to describe the electromagnetic and weak interactions of quarks [47, 61], which is particularly useful in the matching of LEFT to chiral perturbation theory, to be described in subsequent sections. Throughout, we regulate the UV divergences in dimensional regularization, working in \(d=4-2\epsilon\) spacetime dimensions.
### Wilson coefficient and RGE
The part of the LEFT Lagrangian relevant for muon and \(\beta\) decays just below the weak scale reads
\[\mathcal{L}_{\text{LEFT}}=-2\sqrt{2}G_{F}\ \bar{e}_{L}\gamma_{\rho}\mu_{L}\, \bar{\nu}_{\mu L}\gamma^{\rho}\nu_{eL}-2\sqrt{2}G_{F}V_{ud}\ C_{\beta}^{r}(a, \mu)\ \bar{e}_{L}\gamma_{\rho}\nu_{eL}\,\bar{u}_{L}\gamma^{\rho}d_{L}+\ \text{h.c.}+.... \tag{9}\]
Here \(\mu\) denotes the \(\overline{\text{MS}}\) renormalization scale and
\[G_{F}=\frac{\pi\alpha\left(\mu\right)g\left(\mu\right)}{\sqrt{2}M_{W}^{2}\left( \mu\right)s_{W}^{2}\left(\mu\right)}, \tag{10}\]
is the scale-independent Fermi constant, that is extracted from precise measurements of the muon lifetime [62, 63, 64, 65], expressed in terms of \(\overline{\text{MS}}\) Standard Model parameters (with \(s_{W}^{2}=1-M_{W}^{2}/M_{Z}^{2}\)). The function \(g\left(\mu\right)\) can be found in Ref. [66] and reduces to \(g\left(\mu\right)=1\) at tree level. The effective coupling multiplying the semileptonic operator that mediates \(\beta\) decays involves the _same_\(G_{F}\) as the pure-leptonic term in Eq. (9), the CKM matrix element \(V_{ud}\), and the \(\overline{\text{MS}}\)-subtracted Wilson coefficient \(C_{\beta}^{r}\left(a,\mu\right)\), which reads [66, 67, 36]
\[C_{\beta}^{r}\left(a,\mu\right) =1+\frac{\alpha}{\pi}\,\ln\frac{M_{Z}}{\mu}+\frac{\alpha}{\pi}B \left(a\right)-\frac{\alpha\alpha_{s}}{4\pi^{2}}\ln\frac{M_{W}}{\mu}+\mathcal{O }(\alpha\alpha_{s})+\mathcal{O}(\alpha^{2}), \tag{11}\] \[B\left(a\right) =\frac{a}{6}-\frac{3}{4}. \tag{12}\]
The finite \(\mathcal{O}(\alpha)\) matching coefficient depends on the scheme through \(B(a)\). We have used the Naive Dimensional Regularization (NDR) scheme for \(\gamma_{5}\) and kept track of the additional evanescent operator scheme dependence via the parameter \(a\), defined by [68, 69, 70]
\[\gamma^{\alpha}\gamma^{\rho}\gamma^{\beta}\mathrm{P}_{\mathrm{L}}\otimes\gamma_ {\beta}\gamma_{\rho}\gamma_{\alpha}\mathrm{P}_{\mathrm{L}}=4\left[1+a\left(4- d\right)\right]\gamma^{\rho}\mathrm{P}_{\mathrm{L}}\otimes\gamma_{\rho} \mathrm{P}_{\mathrm{L}}+\mathrm{E}\left(a\right), \tag{13}\]
with an evanescent operator \(\mathrm{E}(a)\) that has a vanishing matrix element in \(d=4\). Current conservation protects \(C_{\beta}\) from \(\mathcal{O}(\alpha_{s})\) corrections. Concerning the terms of \(\mathcal{O}(\alpha\alpha_{s})\), we only keep logarithmic contributions, as the finite matching coefficients and the corresponding three-loop anomalous dimensions are not known.
The renormalized Wilson coefficient \(C_{\beta}^{r}\left(a,\mu\right)\) obeys the following RGE:
\[\mu\frac{\mathrm{d}C_{\beta}^{r}\left(a,\mu\right)}{\mathrm{d}\mu}=\gamma(\alpha,\alpha_{s})\,C_{\beta}^{r}\left(a,\mu\right), \tag{14a}\]
\[\gamma(\alpha,\alpha_{s})=\gamma_{0}\,\frac{\alpha}{\pi}+\gamma_{1}\left(\frac{\alpha}{\pi}\right)^{2}+\gamma_{se}\,\frac{\alpha}{\pi}\,\frac{\alpha_{s}}{4\pi}+\cdots, \tag{14b}\]
\[\gamma_{0}=-1\quad\text{[36]}, \tag{14c}\]
\[\gamma_{1}^{NDR}(a)=\frac{\tilde{n}}{18}\left(2a+1\right),\qquad\qquad\tilde{n}=\sum_{f}n_{f}Q_{f}^{2}, \tag{14d}\]
\[\gamma_{se}=+1\quad\text{[36, 66, 71]}, \tag{14e}\]
where \(\tilde{n}\) is the scale-dependent effective number of fermions, \(\alpha\left(\mu\right)\) and \(\alpha_{s}\left(\mu\right)\) are the electromagnetic and strong running coupling constants. We have obtained \(\gamma_{1}^{NDR}(a)\) by adapting the QCD calculation in [68]. As far as we know, this is the first time the full two-loop anomalous dimension is worked out.2 With appropriate rescalings of the QCD diagrams of Ref. [68], we also reproduce \(\gamma_{se}=1\). \(\gamma_{0}\) and \(\gamma_{se}\) are scheme-independent. The scheme independence of \(\gamma_{se}\) follows from the general argument given in Ref. [72], combined with the fact that there is no finite matching term nor anomalous dimension to \(\mathcal{O}(\alpha_{s})\) for the operator under study here. On the other hand, \(\gamma_{1}\) depends on both the treatment of \(\gamma_{5}\) in \(d\) spacetime dimensions and on the scheme used for evanescent operators.
Footnote 2: Ref. [38] quotes the \(\tilde{n}\)-enhanced component of \(\gamma_{1}\). Taking into account the different normalization, Ref. [38] obtains \(\gamma_{1}^{NDR}(a=-1)=-(1/16)\times(44/9)\tilde{n}+\mathcal{O}(\tilde{n}^{0})\), while we find \(\gamma_{1}^{NDR}(a=-1)=-(1/16)\times(8/9)\tilde{n}\) for the total.
In our final result, we will use the numerical solution for \(C_{\beta}^{r}(a,\mu)\). However, it is quite instructive to provide an approximate analytic solution, based on the perturbative treatment of the next-to-leading logarithm (NLL) terms associated with the scheme-dependent two-loop anomalous dimension \(\gamma_{1}(a)=\mathcal{O}(\alpha^{2})\) and the finite one-loop matching condition \(B(a)\). First, setting \(\gamma_{1}(a)\to 0\) and consistently taking as an initial condition \(C_{\beta}^{r}(a,\mu_{SM})=1\), generalizing the result of Ref. [71] given at the \(\tau\)-mass scale, we obtain the solution
\[\tilde{C}_{\beta}^{r}(\mu)=\left(\frac{\alpha\left(m_{c}\right)}{ \alpha\left(\mu\right)}\right)^{\frac{3}{8}}\left(\frac{\alpha\left(m_{\tau} \right)}{\alpha\left(m_{c}\right)}\right)^{\frac{9}{32}}\left(\frac{\alpha \left(m_{b}\right)}{\alpha\left(m_{\tau}\right)}\right)^{\frac{9}{38}}\left( \frac{\alpha\left(\mu_{SM}\right)}{\alpha\left(m_{b}\right)}\right)^{\frac{9} {40}}\] \[\times\left(\frac{\alpha_{s}\left(m_{c}\right)}{\alpha_{s}\left( \mu\right)}\right)^{\frac{1}{18}\frac{\alpha\left(\mu\right)}{\pi}}\left(\frac{ \alpha_{s}\left(m_{b}\right)}{\alpha_{s}\left(m_{c}\right)}\right)^{\frac{3}{ 50}\frac{\alpha\left(m_{c}\right)}{\pi}}\left(\frac{\alpha_{s}\left(\mu_{SM} \right)}{\alpha_{s}\left(m_{b}\right)}\right)^{\frac{3}{46}\frac{\alpha\left(m _{b}\right)}{\pi}}, \tag{15}\]
where we have subsequently integrated out the \(b\) quark, \(\tau\) lepton, and \(c\) quark, and the strong and electromagnetic running couplings are obtained by solving the one-loop RGEs. This solution resums all the terms of \(\mathcal{O}(\alpha^{n}\ln^{n}(\mu_{SM}/\mu))\) and \(\mathcal{O}(\alpha\alpha_{s}^{n}\ln^{n}(\mu_{SM}/\mu))\). We can then perturbatively include the effects of \(\mathcal{O}(\alpha^{2}\ln(\mu_{SM}/\mu))\) due to \(\gamma_{1}(a)\) and \(B(a)\), arriving at
\[C_{\beta}^{r}(a,\mu)=\left(1+\frac{\alpha(\mu)}{\pi}B(a)\right)\times\tilde{C} _{\beta}^{r}(\mu)\times\delta_{NLL}(\mu), \tag{16}\]
where
\[\delta_{NLL}(\mu) = 1-\kappa\Bigg{(}\tilde{n}(m_{b})\,\left(\frac{\alpha(m_{b})}{\pi} \right)^{2}\,\ln\frac{\mu_{SM}}{m_{b}}\ +\ \tilde{n}(m_{\tau})\,\left(\frac{\alpha(m_{\tau})}{\pi}\right)^{2}\,\ln\frac{m_{ b}}{m_{\tau}} \tag{17}\] \[+\ \tilde{n}(m_{c})\,\left(\frac{\alpha(m_{c})}{\pi}\right)^{2}\, \ln\frac{m_{\tau}}{m_{c}}\ +\ \tilde{n}(\mu)\,\left(\frac{\alpha(\mu)}{\pi}\right)^{2}\,\ln\frac{m_{c}}{\mu}\Bigg{)}\] \[\approx\ 1-\kappa\,\tilde{n}(m_{b})\left(\frac{\alpha(\mu)}{\pi} \right)^{2}\,\ln\frac{\mu_{SM}}{\mu},\]
and the scheme-independent combination \(\kappa\) is given by3
Footnote 3: Ref. [38] finds \(\kappa=2/9\), more than a factor of two smaller compared to our result.
\[\kappa=\frac{1}{\tilde{n}}\left(\gamma_{1}(a)+\frac{1}{2}\beta_{0}B(a)\right)= \frac{5}{9}. \tag{18}\]
In the equation above, \(\beta_{0}=-(4/3)\tilde{n}\) controls the one-loop \(\beta\) function for \(\alpha\) via \(\mu d\alpha/d\mu=-(\beta_{0}/(2\pi))\alpha^{2}\). The scale-dependent effective number of fermions takes the values \(\tilde{n}(\mu<m_{c})=4\), \(\tilde{n}(m_{c})=16/3\), \(\tilde{n}(m_{\tau})=19/3\), and \(\tilde{n}(m_{b})=20/3\). Note that the scheme dependence of \(C_{\beta}^{r}(a,\mu)\) in the solution (16) appears only through the initial factor involving \(B(a)\). As we will show below, this term explicitly cancels when one includes the \(\mathcal{O}(\alpha)\) corrections to the matrix element of the semileptonic operator \(\bar{u}_{L}\gamma_{a}d_{L}\bar{e}_{L}\gamma^{a}\nu_{eL}\).
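Two quick consistency checks, using only the conventions just stated, are instructive. First, below the charm threshold (\(\tilde{n}=4\)), combining the one-loop term of Eq. (14) with \(\mu\,\mathrm{d}\alpha/\mathrm{d}\mu=-(\beta_{0}/(2\pi))\,\alpha^{2}\) gives
\[\frac{\mathrm{d}C_{\beta}^{r}}{C_{\beta}^{r}}=\gamma_{0}\,\frac{\alpha}{\pi}\,\mathrm{d}\ln\mu=-\frac{2\gamma_{0}}{\beta_{0}}\,\frac{\mathrm{d}\alpha}{\alpha}\quad\Longrightarrow\quad C_{\beta}^{r}(\mu)=C_{\beta}^{r}(m_{c})\left(\frac{\alpha(m_{c})}{\alpha(\mu)}\right)^{\frac{2\gamma_{0}}{\beta_{0}}}=C_{\beta}^{r}(m_{c})\left(\frac{\alpha(m_{c})}{\alpha(\mu)}\right)^{\frac{3}{8}},\]
reproducing the first factor in Eq. (15); the exponents \(9/32\), \(9/38\), and \(9/40\) follow in the same way from \(\tilde{n}=16/3\), \(19/3\), and \(20/3\). Second, the cancellation of the scheme parameter \(a\) in Eq. (18) can be made explicit:
\[\kappa=\frac{1}{\tilde{n}}\left[\frac{\tilde{n}}{18}\left(2a+1\right)-\frac{2\tilde{n}}{3}\left(\frac{a}{6}-\frac{3}{4}\right)\right]=\frac{2a+1}{18}-\frac{a}{9}+\frac{1}{2}=\frac{1}{18}+\frac{1}{2}=\frac{5}{9}.\]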
We also provide an analytic solution to the RGE (14) in terms of the evolution operator \(U(\mu,\mu_{SM})\) to NLL accuracy, formally written as \(C_{\beta}^{r}\left(a,\mu\right)=U(\mu,\mu_{SM})C_{\beta}^{r}(a,\mu_{SM})\), with the initial condition \(C_{\beta}^{r}(a,\mu_{SM})=1+(\alpha/\pi)(\ln(M_{Z}/\mu_{SM})+B\left(a\right))\). Using the two-loop running coupling \(\alpha(\mu)\) and the one-loop running \(\alpha_{s}(\mu)\), we resum the series of leading logarithms (\(n\geq 1\)) \(\mathcal{O}(\alpha^{n}\ln^{n}(\mu_{SM}/\mu))\), and sub-leading logarithms \(\mathcal{O}(\alpha\alpha_{s}^{n}\ln^{n}(\mu_{SM}/\mu))\) and \(\mathcal{O}(\alpha^{n+1}\ln^{n}(\mu_{SM}/\mu))\). The NLL solution for the evolution operator \(U(\mu_{1},\mu_{2})\), valid between two mass thresholds \(\mu_{1}\) and \(\mu_{2}\) takes the form [73, 74, 75]
\[U(\mu_{1},\mu_{2})=\left(\frac{\alpha(\mu_{1})}{\alpha(\mu_{2})}\right)^{-\frac{2\gamma_{0}}{\beta_{0}}}\left(\frac{\alpha_{s}(\mu_{1})}{\alpha_{s}(\mu_{2})}\right)^{-\frac{2\gamma_{se}}{\beta_{0,s}}\frac{\alpha(\mu_{1})}{4\pi}}\left[1-\frac{2\gamma_{1}}{\beta_{0}}\frac{\alpha(\mu_{1})-\alpha(\mu_{2})}{\pi}\right]\,, \tag{19}\]
where we expanded \(\alpha\) with respect to its two-loop beta function, \(\beta_{1}\), after which the \(\beta_{1}\) dependence cancels in Eq. (19). Therefore, both \(\alpha\) and \(\alpha_{s}\) in Eq. (19) are evaluated using the one-loop RGEs, and the QCD beta function \(\beta_{0,s}\) is expressed in terms of the number of active quarks \(n_{f}\) as \(\beta_{0,s}=\left(11N_{c}-2n_{f}\right)/3\). Neglecting two-loop matching conditions, the evolution operator between the electroweak scale, \(\mu_{SM}\), and the low-energy scale, \(\mu\), can then be obtained by using Eq. (19) between each particle threshold \(U(\mu,\mu_{SM})=U(\mu,m_{c})U(m_{c},m_{\tau})U(m_{\tau},m_{b})U(m_{b},\mu_{SM})\).
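To make the threshold bookkeeping concrete, a minimal numerical sketch of this composition is given below. It is an illustration of Eqs. (14)-(19), not the full numerical solution used in the paper: the boundary values \(\alpha(M_{Z})\simeq 1/127.9\) and \(\alpha_{s}(M_{Z})\simeq 0.118\) and the threshold masses are standard reference numbers inserted only so the sketch runs, the initial condition follows Eq. (11) at \(\mu_{SM}=M_{Z}\), and \(\gamma_{1}\) is evaluated in NDR with \(a=-1\) (so \(B(-1)=-11/12\)).

```python
# Compose the NLL evolution operator of Eq. (19) across the b, tau, and c
# thresholds to run C_beta^r from M_Z down to 1 GeV. Illustrative sketch.
import math

segments = [  # (mu_high, mu_low, n_tilde, n_f) between successive thresholds
    (91.19, 4.18, 20/3, 5),   # M_Z   -> m_b
    (4.18, 1.777, 19/3, 4),   # m_b   -> m_tau
    (1.777, 1.27, 16/3, 4),   # m_tau -> m_c
    (1.27, 1.0,   4.0,  3),   # m_c   -> 1 GeV
]

def run_one_loop(g, mu_from, mu_to, beta0):
    """One-loop solution of mu dg/dmu = -(beta0 / 2 pi) g^2."""
    return g / (1.0 + beta0 / (2.0 * math.pi) * g * math.log(mu_to / mu_from))

alpha, alpha_s = 1.0 / 127.9, 0.118                    # values at mu_SM = M_Z
C = 1.0 + alpha / math.pi * (-1.0/6.0 - 3.0/4.0)       # Eq. (11), B(a=-1)
for mu_hi, mu_lo, n_t, n_f in segments:
    beta0, beta0s = -4.0/3.0 * n_t, (33.0 - 2.0 * n_f) / 3.0
    gamma0, gamma_se = -1.0, 1.0
    gamma1 = n_t / 18.0 * (2.0 * (-1.0) + 1.0)         # gamma_1^NDR(a=-1)
    a_lo = run_one_loop(alpha, mu_hi, mu_lo, beta0)
    as_lo = run_one_loop(alpha_s, mu_hi, mu_lo, beta0s)
    C *= ((a_lo / alpha) ** (-2.0 * gamma0 / beta0)    # Eq. (19)
          * (as_lo / alpha_s) ** (-2.0 * gamma_se / beta0s * a_lo / (4.0 * math.pi))
          * (1.0 - 2.0 * gamma1 / beta0 * (a_lo - alpha) / math.pi))
    alpha, alpha_s = a_lo, as_lo
print(f"C_beta^r(a=-1, mu = 1 GeV) ~= {C:.5f}")
```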
### External sources and spurions
The matching of LEFT to HBChPT is conveniently performed by introducing classical source fields \(\bar{l}^{\mu}(x)\) and \(\bar{r}^{\mu}(x)\) for the left- and right-handed chiral currents of quarks as well as electromagnetic left \(\mathbf{q}_{L}\) and right \(\mathbf{q}_{R}\) spurions, and the weak spurion \(\mathbf{q}_{W}\)[76, 61, 77, 47]. These allow one to handle the explicit chiral symmetry breaking introduced by the electromagnetic and weak interactions at the quark level in a compact way. With this motivation in mind, we write the source term for currents plus the QED and weak CC interactions of the light quarks \(q^{T}=(u,d)\) as
\[\mathcal{L}_{\rm LEFT}=\bar{q}_{L}\not{\bar{l}}q_{L}+\bar{q}_{R}\not{\bar{r}}q_{R}-e\left(\bar{q}_{L}\mathbf{q}_{L}\not{A}q_{L}+\bar{q}_{R}\mathbf{q}_{R}\not{A}q_{R}\right)+(\bar{e}_{L}\gamma_{\rho}\nu_{eL}\,\bar{q}_{L}\mathbf{q}_{W}\gamma^{\rho}q_{L}+{\rm h.c.})+\ldots, \tag{20}\]
where \(A^{\mu}\) denotes the photon field. The Lagrangian in (20) is invariant under local \(G=SU(2)_{L}\times SU(2)_{R}\times U(1)_{V}\) transformations
\[q_{L}\to L(x)e^{i\alpha_{V}(x)}q_{L},\quad q_{R}\to R(x)e^{i\alpha_{V}(x)}q_{R}, \tag{21}\]
with \(L,R\in SU(2)_{L,R}\), provided \({\bf q}_{L,R}\) and \({\bf q}_{W}\) transform as "spurions" under the chiral group, namely \({\bf q}_{L,W}\to L{\bf q}_{L,W}L^{\dagger}\) and \({\bf q}_{R}\to R{\bf q}_{R}R^{\dagger}\), and that \(\bar{l}_{\mu}\) and \(\bar{r}_{\mu}\) transform as gauge fields under \(G\). At the physical point,
\[{\bf q}_{L}={\bf q}_{R}={\rm diag}(Q_{u},Q_{d}),\qquad\qquad{\bf q}_{W}=-2\sqrt{2}G_{F}V_{ud}C^{r}_{\beta}\,\tau^{+}, \tag{22}\]
with \(\tau^{+}=(\tau^{1}+i\tau^{2})/2\) in terms of the Pauli matrices \(\tau^{a}\). Note that we include the LEFT Wilson coefficient \(C^{r}_{\beta}\) in the definition of the spurion \({\bf q}_{W}\). With this identification, Eq. (20) reproduces the semileptonic piece of Eq. (9).
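This is a one-line check, spelled out here for convenience: since \(\bar{q}_{L}\tau^{+}\gamma^{\rho}q_{L}=\bar{u}_{L}\gamma^{\rho}d_{L}\), the \(\mathbf{q}_{W}\) term in Eq. (20) becomes
\[\bar{e}_{L}\gamma_{\rho}\nu_{eL}\,\bar{q}_{L}\mathbf{q}_{W}\gamma^{\rho}q_{L}+{\rm h.c.}=-2\sqrt{2}G_{F}V_{ud}\,C_{\beta}^{r}\;\bar{e}_{L}\gamma_{\rho}\nu_{eL}\,\bar{u}_{L}\gamma^{\rho}d_{L}+{\rm h.c.},\]
matching the second term of Eq. (9).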
The \({\cal O}(e^{2})\) counterterms in the LEFT Lagrangian can be written in terms of spurions as [47]
\[{\cal L}_{\rm LEFT}^{\rm CTT} =-2e^{2}Q_{e}^{2}g_{00}\,\bar{e}\left(i\partial\!\!\!/-eQ_{e}A\! \!\!/-m_{e}\right)e-ig_{23}e^{2}\left(\bar{q}_{L}\left[{\bf q}_{L},D^{\rho}{ \bf q}_{L}\right]\gamma_{\rho}q_{L}+\bar{q}_{R}\left[{\bf q}_{R},D^{\rho}{\bf q }_{R}\right]\gamma_{\rho}q_{R}\right)\] \[+e^{2}Q_{e}\Big{(}\bar{e}_{L}\gamma_{\rho}\nu_{L}\left(g_{02}\, \bar{q}_{L}{\bf q}_{W}{\bf q}_{L}\gamma^{\rho}q_{L}+g_{03}\,\bar{q}_{L}{\bf q} _{L}{\bf q}_{W}\gamma^{\rho}q_{L}\right)+{\rm h.c.}\Big{)}, \tag{23}\]
where \(g_{00}\) is the counterterm related to the electron wavefunction renormalization, \(g_{02}\) and \(g_{03}\) come from the counterterm of \(C_{\beta}\), while \(g_{23}\) includes contributions from both the counterterm of \(C_{\beta}\) as well as divergences related to the quark wavefunction renormalization. Furthermore,
\[D^{\rho}{\bf q}_{L} \equiv\partial^{\rho}{\bf q}_{L}-i\left[l^{\rho},{\bf q}_{L} \right], \tag{24}\] \[D^{\rho}{\bf q}_{R} \equiv\partial^{\rho}{\bf q}_{R}-i\left[r^{\rho},{\bf q}_{R} \right], \tag{25}\]
are chiral covariant derivatives, expressed in terms of the fields \(l^{\mu}(x)\) and \(r^{\mu}(x)\) that combine the classical sources and the electroweak fields:
\[l_{\mu} =\bar{l}_{\mu}-e{\bf q}_{L}A_{\mu}+{\bf q}_{W}\,\bar{e}_{L}\gamma_ {\mu}\nu_{eL}+{\bf q}_{W}^{\dagger}\bar{\nu}_{eL}\gamma_{\mu}e_{L}, \tag{26}\] \[r_{\mu} =\bar{r}_{\mu}-e{\bf q}_{R}A_{\mu}. \tag{27}\]
In the \(\overline{\rm MS}\) scheme, the \(g_{ij}\) couplings appearing in Eq. (23) are determined by the \(1/\varepsilon\) divergences and can be written as
\[g_{ij}=\frac{h_{ij}}{\left(4\pi\right)^{2}}\left(\frac{1}{\varepsilon}-\gamma _{E}+\ln\left(4\pi\right)\right), \tag{28}\]
with \(h_{00}=1/2\), \(h_{23}=(1/2)(1-\alpha_{s}/\pi)\), \(h_{02}=-1-\alpha_{s}/\pi\), and \(h_{03}=4-2\alpha_{s}/\pi\).
## 4 Step II: matching LEFT to HBChPT
The goal of this section is to find a representation for the LECs appearing in \(\hat{C}_{V}\), see (3), in terms of the LEFT counterterms \(g_{ij}\) and quark correlation functions, which can then be modeled or computed via non-perturbative techniques such as lattice QCD.
### The Chiral Lagrangian
The chiral representation of Eq. (23) can be built using standard spurion techniques. As in Eq. (23), we will need purely leptonic operators, purely electromagnetic operators, and operators with charged leptons and neutrinos. The corresponding chiral Lagrangians were built in Refs. [44, 61, 78, 79, 80]. Here we extend the bases of [44, 80] in order to avoid assumptions regarding \({\bf q}_{L}\) and \({\bf q}_{R}\), allowing us to keep the spurions \({\bf q}_{L,R}\) completely general. Moreover, we do not use the equations of motion to reduce the operator set in order to avoid hadronic contributions to purely leptonic LECs [47].
As we will see below, to perform the matching between LEFT and HBChPT it is convenient to introduce vector and axial-vector charge spurions and sources, which we define as
\[{\bf q}_{V}={\bf q}_{L}+{\bf q}_{R},\qquad{\bf q}_{A}={\bf q}_{L}-{\bf q}_{R}, \qquad{\rm v}_{\rho}=l_{\rho}+r_{\rho},\qquad{\rm a}_{\rho}=l_{\rho}-r_{\rho}. \tag{29}\]
It is also convenient to decompose the electromagnetic charge spurions in isovector and isoscalar components
\[{\bf q}_{J}^{\rm baryon}={\bf q}_{J}^{0}+{\bf q}_{J}^{a}\tau^{a},\qquad{\bf q}_{ J}^{\rm quark}=\frac{{\bf q}_{J}^{0}}{3}+{\bf q}_{J}^{a}\tau^{a}, \tag{30}\]
with \(J\in\{L,R,V,A\}\). The physical values are \({\bf q}_{L,R}^{0}={\bf q}_{L,R}^{3}=\frac{1}{2}\) for the left and right spurions, \({\bf q}_{V}^{0}={\bf q}_{V}^{3}=1\) for the vector spurion, and \({\bf q}_{A}^{0}={\bf q}_{A}^{3}=0\) for the axial case.4
Footnote 4: In what follows, we will omit the superscripts in the charge spurions: whenever \({\bf q}_{L,R,V,A}\) appears in the HBChPT Lagrangian, it is understood to be \({\bf q}_{L,R,V,A}^{\rm baryon}\).
The chiral Lagrangians are built using the chiral covariant functions of the charges and of the corresponding covariant derivatives in Eqs. (24) and (25)
\[{\cal Q}_{L}^{W}=u{\bf q}_{W}u^{\dagger},\quad{\cal Q}_{L}=u{\bf q}_{L}u^{ \dagger},\quad{\cal Q}_{R}=u^{\dagger}{\bf q}_{R}u,\quad{\cal Q}_{\pm}=\frac{{ \cal Q}_{L}\pm{\cal Q}_{R}}{2},\quad c_{\rho}^{\pm}=\frac{1}{2}\left(u\left(D_ {\rho}{\bf q}_{L}\right)u^{\dagger}\pm u^{\dagger}\left(D_{\rho}{\bf q}_{R} \right)u\right), \tag{31}\]
with \(u^{2}=U=\exp(i\mathbf{\pi}\cdot\mathbf{\tau}/F_{\pi})\) and \(F_{\pi}\approx 92\) MeV.
At lowest order, the HBChPT Lagrangian is given by
\[{\cal L}_{\pi}^{p^{2}}+{\cal L}_{\pi}^{e^{2}}+{\cal L}_{\pi N}^{p}=\frac{F_{ \pi}^{2}}{4}\langle u_{\mu}u^{\mu}+\chi_{+}\rangle+e^{2}Z_{\pi}F_{\pi}^{4} \langle{\cal Q}_{L}{\cal Q}_{R}\rangle+\bar{N}_{v}iv\cdot\nabla N_{v}+g_{A}^{ (0)}\bar{N}_{v}S\cdot uN_{v}, \tag{32}\]
where \(F_{\pi}\) and \(g_{A}^{(0)}\) denote the pion decay constant and the nucleon axial coupling in the chiral limit. \(u_{\mu}\) and \(\chi_{+}\) are given by
\[u_{\mu}=i\left[u^{\dagger}(\partial_{\mu}-ir_{\mu})u-u(\partial_{\mu}-il_{\mu} )u^{\dagger}\right],\qquad\chi_{\pm}=u^{\dagger}\chi u^{\dagger}\pm u\chi^{ \dagger}u,\qquad\chi=B_{0}(m_{q}+\bar{s}+i\bar{p}), \tag{33}\]
with the light quark masses \(m_{q}\) and a LEC \(B_{0}\) with the dimension of mass, which is related to the quark condensate. We further introduced the nucleon chiral covariant derivative
\[\nabla_{\mu}N \equiv \left(\partial_{\mu}+\Gamma_{\mu}\right)N,\qquad\Gamma_{\mu}= \frac{1}{2}\left[u(\partial_{\mu}-il_{\mu}^{\rm baryon})u^{\dagger}+u^{ \dagger}(\partial_{\mu}-ir_{\mu}^{\rm baryon})u\right], \tag{34}\]
where by the superscript _baryon_ we indicate that the photon couples to the nucleon via the charge \({\bf q}_{V}^{\rm baryon}\) in Eq. (30). In addition to the weak and electromagnetic interactions arising from chiral covariant derivatives, Eq. (32) contains electromagnetic effects mediated by high-momentum photons via the coupling \(Z_{\pi}\), which is related to the pion-mass splitting.
The chiral Lagrangian needed at \({\cal O}(G_{F}\alpha)\) is given by
\[{\cal L}={\cal L}_{\rm lept}^{CT}+{\cal L}_{\pi N}^{e^{2}p}+{\cal L}_{\pi N \ell}^{e^{2}p}. \tag{35}\]
\({\cal L}_{\rm lept}^{CT}\) is a purely leptonic counterterm Lagrangian
\[{\cal L}_{\rm lept}^{CT}=e^{2}X_{6}\bar{e}\left(i\partial\!\!\!/+e\not\!\!A \right)e. \tag{36}\]
The coupling \(X_{6}\) is determined by computing the electron propagator in LEFT and chiral perturbation theory, obtaining
\[X_{6}^{r}\left(\mu_{\chi},\mu\right)=\frac{\xi}{(4\pi)^{2}}\left(1-\ln\frac{ \mu_{\chi}^{2}}{\mu^{2}}\right), \tag{37}\]
in arbitrary \(R_{\xi}\) gauge, where \(\mu\) and \(\mu_{\chi}\) are the LEFT and HBChPT renormalization scales, respectively. \(X_{6}^{r}(\mu_{\chi},\mu)\) denotes the renormalized coupling, after subtraction of the \(1/\varepsilon\) pole in the \(\overline{\rm MS}_{\chi}\) scheme. Note that, following standard practice [49], in the \(\overline{\rm MS}_{\chi}\) scheme, we subtract
\[\frac{1}{\varepsilon}-\gamma_{E}+\ln\left(4\pi\right)+1, \tag{38}\]
instead of the conventional \(\overline{\rm MS}\) subtraction used in LEFT:
\[\frac{1}{\varepsilon}-\gamma_{E}+\ln\left(4\pi\right). \tag{39}\]
For the electromagnetic Lagrangian \({\cal L}_{\pi N}^{e^{2}p}\), we use the construction of Ref. [80]. Only one operator is required to describe Fermi transitions,
\[{\cal L}_{\pi N}^{e^{2}p}=e^{2}g_{9}\bar{N}_{v}\left(\frac{i}{2}\left[{\cal Q} _{+},v\cdot c^{+}\right]+{\rm h.c.}\right)N_{v}. \tag{40}\]
For the electroweak sector with charged leptons and neutrinos, we provide the most general weak-interaction Lagrangian in the heavy-baryon sector with one charge and one weak spurion, where we only assumed the constraint \(\langle{\bf q}_{W}\rangle=0\)[81]:
\[{\cal L}_{\pi N\ell}^{e^{2}p}=e^{2}\sum_{i=1}^{6}\bar{e}_{L}\gamma_{\rho}\nu_ {eL}\bar{N}_{v}\,\left(V_{i}v^{\rho}-2A_{i}g_{A}^{(0)}S^{\rho}\right){\rm O}_{ i}N_{v}+{\rm h.c.}, \tag{41}\]
where \(g_{A}^{(0)}\) denotes the nucleon axial coupling in the chiral limit and
\[{\rm O}_{1} =[{\cal Q}_{L},{\cal Q}_{L}^{W}], {\rm O}_{2} =[{\cal Q}_{R},{\cal Q}_{L}^{W}],\] \[{\rm O}_{3} =\{{\cal Q}_{L},{\cal Q}_{L}^{W}\}, {\rm O}_{4} =\{{\cal Q}_{R},{\cal Q}_{L}^{W}\},\] \[{\rm O}_{5} =\langle{\cal Q}_{L}{\cal Q}_{L}^{W}\rangle, {\rm O}_{6} =\langle{\cal Q}_{R}{\cal Q}_{L}^{W}\rangle. \tag{42}\]
The dimensionless low-energy coupling constants in Ref. [44] are related to the couplings in Eq. (41) by the relations \(\tilde{X}_{1}/C_{\beta}^{r}=V_{1}+V_{3}+V_{4}-A_{1}-A_{3}-A_{4}\), \(\tilde{X}_{2}/C_{\beta}^{r}=-V_{2}\), \(\tilde{X}_{3}/C_{\beta}^{r}=2A_{2}g_{A}^{(0)}\), \(\tilde{X}_{4}/C_{\beta}^{r}=V_{4}+V_{6}\), \(\tilde{X}_{5}/C_{\beta}^{r}=-2(A_{4}+A_{6})g_{A}^{(0)}\) when the spurions take physical values. In Ref. [44], the authors have used the equations of motion to eliminate some \(S^{\rho}\)-dependent operators. In addition, they reduced operators that are bi-linear in spurions to linear expressions by exploiting the relations \({\bf q}_{L}{\bf q}_{W}=(2/3){\bf q}_{W}\) and \({\bf q}_{W}{\bf q}_{L}=-(1/3){\bf q}_{W}\), valid for physical values of the spurions.
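As a quick illustration of the last reduction, the relations \({\bf q}_{L}{\bf q}_{W}=(2/3){\bf q}_{W}\) and \({\bf q}_{W}{\bf q}_{L}=-(1/3){\bf q}_{W}\) can be checked numerically at physical spurion values. The sketch below is ours, not from the paper; it assumes the quark-level left charge \({\bf q}_{L}={\rm diag}(2/3,-1/3)\) from Eq. (30) and identifies the weak spurion with \(\tau^{+}\).

```python
# Minimal numerical check (not from the paper) of the spurion relations used
# in Ref. [44] to linearize operators that are bi-linear in spurions.
import numpy as np

qL = np.diag([2/3, -1/3])             # physical left-handed quark charges, Eq. (30)
qW = np.array([[0., 1.], [0., 0.]])   # weak spurion taken as tau^+ (assumption)

print(np.allclose(qL @ qW,  (2/3) * qW))   # True: q_L q_W = (2/3) q_W
print(np.allclose(qW @ qL, -(1/3) * qW))   # True: q_W q_L = -(1/3) q_W
```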
As realized in the mesonic sector in Refs. [47, 77], we can interpret amplitudes in LEFT and in HBChPT as functionals of the same charges \(q^{0,a}(x)\), promoted to be spacetime-dependent external fields. The matching between the LEFT and HBChPT can then be obtained by equating functional derivatives of the effective action with respect to \(q^{0,a}(x)\) in both theories. As we will see, this allows us to derive an explicit representation for the LECs and to keep track of unphysical scale and scheme dependence appearing at intermediate steps of the calculation.
### Electromagnetic coupling constant
We start from the electromagnetic coupling \(g_{9}\). Expanding the charge covariant derivative in Eq. (40), we obtain
\[{\cal L}_{\pi N}^{e^{2}p}=e^{2}g_{9}\frac{i}{4}\bar{N}_{v}v^{\rho}\left([{\bf q }_{V},\partial_{\rho}{\bf q}_{V}]-\frac{i}{2}\left[{\bf q}_{V},[v_{\rho},{\bf q }_{V}]\right]-\frac{i}{2}\left[{\bf q}_{V},[{\rm a}_{\rho},{\bf q}_{A}]\right] \right)N_{v}. \tag{43}\]
\(g_{9}\) can then be evaluated by taking functional derivatives with respect to two isovector charges, provided that the charge carries non-zero momentum, or by taking three derivatives, two with respect to the charges and one with respect to a vector or axial source. The first representation is more convenient, since, as we will see, it allows one to automatically obtain cancellations between electromagnetic and weak couplings.
More precisely, we will consider the following object:
\[\Gamma_{VV}=\frac{\varepsilon^{abc}\tau_{ij}^{c}\,\delta^{\sigma^{\prime} \sigma}}{12}\frac{i}{2}v_{\rho}\frac{\partial}{\partial r_{\rho}}\left(\int{ \rm d}^{d}xe^{ir\cdot x}\langle N(k^{\prime},\sigma^{\prime},j)|\frac{\delta ^{2}W\left({\bf q}_{V},{\bf q}_{A}\right)}{\delta{\bf q}_{V}^{b}\left(x\right) \delta{\bf q}_{V}^{a}\left(0\right)}\Bigg{|}_{{\bf q}=0}|N(k,\sigma,i)\rangle \right)\Bigg{|}_{r_{\rho}=0}, \tag{44}\]
where \(k\) and \(k^{\prime}\) are the nucleon momenta, \(\sigma\) and \(\sigma^{\prime}\) denote the nucleon spins, and \(i\), \(j\) the nucleon isospins. We take the nucleon to be at rest, \(k=k^{\prime}=m_{N}v\) and use the nonrelativistic normalization for heavy-particle states \(\langle N\left(k^{\prime},\sigma^{\prime},j\right)|N\left(k,\sigma,i\right) \rangle=(2\pi)^{3}\delta^{(3)}(\mathbf{k}-\mathbf{k}^{\prime})\delta^{ij} \delta^{\sigma\sigma^{\prime}}\). \(W=-i\ln Z\) denotes the generating functional of the connected diagrams.
\(\Gamma_{VV}\) needs to be computed in both HBChPT and LEFT, and, in both theories, it receives tree-level and loop contributions. The contributions to \(\Gamma_{VV}\) in HBChPT are illustrated in Fig. 1. The short-range contributions are determined by LECs in the \(\mathcal{L}_{\pi N}^{e^{2}p}\) Lagrangian. \(g_{9}\) provides the only contribution to \(\Gamma_{VV}\). The loops are determined by couplings in the leading-order (LO) pion and pion-nucleon Lagrangians. In particular, the diagram with pion-mass splitting \(Z_{\pi}\) is symmetric in isospin, and vanishes once contracted with the Levi-Civita tensor, so that the loop corrections are purely determined by the minimal coupling of the photon to the nucleon. In arbitrary \(R_{\xi}\) gauge, we introduce the photon mass \(\lambda_{\gamma}\) as an infrared regulator and obtain
\[\Gamma_{VV}|^{\text{HB}\chi\text{PT}} =e^{2}\left(g_{9}+\int\frac{i\mathrm{d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{\left(q^{2}-\lambda_{\gamma}^{2}\right)^{2}}+\frac{1-\xi}{2}\int\frac{i\mathrm{d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{\left(q^{2}-\lambda_{\gamma}^{2}\right)\left(q^{2}-\xi\lambda_{\gamma}^{2}\right)}\right)\] \[=\frac{e^{2}}{(4\pi)^{2}}\left((4\pi)^{2}g_{9}^{r}(\mu_{\chi},\mu)-\left(1+\frac{1-\xi}{2}\right)\ln\frac{\mu_{\chi}^{2}}{\lambda_{\gamma}^{2}}+1-\frac{\xi}{2}\ln\xi\right). \tag{45}\]
\(g_{9}^{r}(\mu_{\chi},\mu)\) in the second line denotes the renormalized coupling, after subtraction of the \(1/\varepsilon\) pole in the \(\overline{\text{MS}}_{\chi}\) scheme. For \(\xi=1\), the anomalous dimension of \(g_{9}^{r}(\mu_{\chi},\mu)\) agrees with the result of Ref. [80], so that Eq. (45) is independent of the scale \(\mu_{\chi}\).
In the LEFT, the same matrix element is given by
\[\Gamma_{VV}|^{\text{LEFT}}=e^{2}\left(-g_{23}+\int\frac{\text{d}^{d}q}{\left( 2\pi\right)^{d}}\frac{v\cdot q\,g_{\mu\nu}T_{VV}^{\mu\nu}\left(q,v\right)}{ \left(q^{2}-\lambda_{\gamma}^{2}\right)^{2}}+\frac{1-\xi}{2}\int\frac{i\text{ d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{\left(q^{2}-\lambda_{\gamma}^{2} \right)\left(q^{2}-\xi\lambda_{\gamma}^{2}\right)}\right). \tag{46}\]
Eq. (46) contains a tree-level term, proportional to the counterterm \(g_{23}\) that cancels the divergences generated by loop diagrams. The loop contribution contains the hadronic tensor \(T_{VV}^{\mu\nu}\left(q,v\right)\), which can be expressed in terms of the two-point correlation function of quark currents. Here, we use the definition [34]
\[T_{VV(A)}^{\mu\nu}\left(q,v\right)=\frac{\varepsilon^{abc}\tau_{ij}^{c}\delta^ {\sigma^{\prime}\sigma}}{12}\frac{i}{4}\int\text{d}^{d}x\,e^{iq\cdot x}\langle N (k,\sigma^{\prime},j)|T\left[\overline{q}\gamma^{\mu}\tau^{b}q\left(x\right) \overline{q}\gamma^{\nu}\left(\gamma_{5}\right)\tau^{a}q(0)\right]|N(k,\sigma,i)\rangle. \tag{47}\]
The gauge-dependent term in Eq. (46) is obtained using
\[q_{\mu}T_{VV}^{\mu\nu}(q,v)=q_{\mu}T_{VV}^{\nu\mu}(q,v)=iv^{\nu}, \tag{48}\]
which follows from the conservation of the vector current.
Figure 1: Diagrams that contribute to \(\Gamma_{VV}\) in HBChPT are shown. Double, wiggly, and dashed lines denote nucleons, photons, and pions, respectively. Dashed circles denote insertions of the sources \(\mathbf{q}_{V}^{a,b}\). The arrows denote the flow of the momentum \(r\) inserted by the sources. The first three diagrams originate from the leading-order \(\pi\) and \(\pi\)N Lagrangians, \(\mathcal{L}_{\pi}^{p^{2}}\), \(\mathcal{L}_{\pi}^{e^{2}}\), and \(\mathcal{L}_{\pi N}^{p}\)[80, 44, 82], which are presented in Eq. (32). The last diagram denotes contributions from \(\mathcal{L}_{\pi N}^{e^{2}p}\) and is proportional to \(g_{9}\).
To highlight the UV structure of Eq. (47), we add and subtract the high-energy limit of the hadronic tensor provided by the operator product expansion (OPE)
\[g_{\mu\nu}T_{VV}^{\mu\nu}(q,v)\big{|}_{\rm OPE}=\frac{iv\cdot q}{q^{2}-\mu_{0}^{ 2}}\,\left(2-d+2\frac{\alpha_{s}}{\pi}\right), \tag{49}\]
where for the OPE of the relevant currents we use results from Refs. [83, 84], adapted to include the appropriate color factors [35]. Since our calculation is only accurate at leading logarithm in \(\mathcal{O}(\alpha\alpha_{s})\), the \(\mathcal{O}(\alpha_{s})\) correction to the OPE is computed in \(d=4\). Note that in Eq. (49) we have introduced an arbitrary scale \(\mu_{0}\) to regulate infrared divergences that appear when evaluating the convolution integrals with \(T_{\rm OPE}\). Performing the relevant integrations, we obtain
\[\Gamma_{VV}|^{\rm LEFT} =\frac{e^{2}}{(4\pi)^{2}}\left(\frac{1}{2}\left(1-\frac{\alpha_{s }}{\pi}\right)\ln\frac{\mu^{2}}{\mu_{0}^{2}}+\frac{1}{4}-\frac{1-\xi}{2}\left( \ln\frac{\mu^{2}}{\lambda_{\gamma}^{2}}+1\right)-\frac{\xi}{2}\ln\xi\right.\] \[\left.+(4\pi)^{2}\int\frac{{\rm d}^{4}q}{\left(2\pi\right)^{4}} \frac{v\cdot q\,g_{\mu\nu}\overline{T}_{VV}^{\mu\nu}\left(q,v\right)}{\left( q^{2}-\lambda_{\gamma}^{2}\right)^{2}}\right), \tag{50}\]
where \(\overline{T}\) denotes the subtracted hadronic tensor, \(\overline{T}=T-T_{\rm OPE}\). \(\overline{T}\) depends on \(\mu_{0}\) in such a way that the final results are \(\mu_{0}\)-independent. Finally, note that we are dropping terms of \(\mathcal{O}(\alpha\alpha_{s})\) that appear without logarithmic enhancements, because they are beyond the accuracy of our calculation.
Equating Eqs. (45) and (46), we obtain a representation for \(g_{9}\):
\[g_{9}^{r}(\mu_{\chi},\mu) =\int\!\frac{{\rm d}^{4}q}{\left(2\pi\right)^{4}}\frac{v\cdot q\, g_{\mu\nu}\overline{T}_{VV}^{\mu\nu}\left(q,v\right)}{\left(q^{2}-\lambda_{\gamma}^{2} \right)^{2}}\] \[+\frac{1}{\left(4\pi\right)^{2}}\left[\ln\frac{\mu_{\chi}^{2}}{ \lambda_{\gamma}^{2}}+\frac{1}{2}\left(1-\frac{\alpha_{s}}{\pi}\right)\ln \frac{\mu^{2}}{\mu_{0}^{2}}+\frac{1-\xi}{2}\ln\frac{\mu_{\chi}^{2}}{\mu^{2}}- \frac{5}{4}+\frac{\xi}{2}\right]. \tag{51}\]
Alternatively, to control the infrared region and see a cancellation of the infrared divergences, we can introduce the combination \(\tilde{T}=T-T_{\rm IR}\), where \(T_{\rm IR}\) is the leading infrared contribution \(g_{\mu\nu}T_{\rm IR}^{\mu\nu}=i/\left(v\cdot q\right)\) and obtain
\[g_{9}^{r}(\mu_{\chi},\mu)=\int\!\frac{{\rm d}^{d}q}{\left(2\pi \right)^{d}}\frac{v\cdot q\,g_{\mu\nu}\tilde{T}_{VV}^{\mu\nu}\left(q,v\right) }{\left(q^{2}\right)^{2}}+\frac{1}{\left(4\pi\right)^{2}}\left[\left(1+\frac{ 1-\xi}{2}\right)\ln\frac{\mu_{\chi}^{2}}{\mu^{2}}-\frac{3}{2}+\frac{\xi}{2} \right]. \tag{52}\]
### Electroweak coupling constants
Figure 2: Diagrams that contribute to \(\Gamma_{VW}\) in HBChPT are shown. Single lines denote electrons and neutrinos. The remaining notations are the same as in Fig. 1. In this case, the sources inject zero momentum. The first two diagrams originate from the LO \(\pi\)N Lagrangian \(\mathcal{L}_{\pi N}^{p}\), the last diagram denotes contributions from \(\mathcal{L}_{\pi N\ell}^{e^{2}p}\). Diagrams with the sources coupling to pions do not contribute at this order.

We follow the same strategy for the determination of the electroweak coupling constants. In this case, the operators \(V_{1}\) and \(V_{2}\) receive contributions from the isovector component of the electromagnetic charges, while \(V_{3}\) and \(V_{4}\) receive them from the isoscalar component. We thus define two matrix elements
\[\bar{e}\not{v}P_{L}\nu_{e}\,\Gamma^{(1)}_{VW} =\frac{\varepsilon^{abc}\tau^{c}_{ij}\delta^{\sigma^{\prime}\sigma}}{12}\frac{i}{2}\int{\rm d}^{d}x\langle e^{-}\bar{\nu}_{e}N(k,\sigma^{\prime},j)|\frac{\delta^{2}W\left({\bf q}_{V},{\bf q}_{A},{\bf q}_{W}\right)}{\delta{\bf q}^{b}_{V}\left(x\right)\delta{\bf q}^{a}_{W}\left(0\right)}\Bigg{|}_{{\bf q}=0}|N(k,\sigma,i)\rangle, \tag{53}\] \[\bar{e}\not{v}P_{L}\nu_{e}\,\Gamma^{(0)}_{VW} =\frac{\tau^{a}_{ij}\delta^{\sigma^{\prime}\sigma}}{12}\int{\rm d}^{d}x\langle e^{-}\bar{\nu}_{e}N(k,\sigma^{\prime},j)|\frac{\delta^{2}W\left({\bf q}_{V},{\bf q}_{A},{\bf q}_{W}\right)}{\delta{\bf q}^{0}_{V}\left(x\right)\delta{\bf q}^{a}_{W}\left(0\right)}\Bigg{|}_{{\bf q}=0}|N(k,\sigma,i)\rangle. \tag{54}\]
At the order we are working, the electron and neutrinos can be taken to be massless and to carry zero momentum.
The HBChPT diagrams contributing to \(\Gamma^{(0,1)}_{VW}\) are shown in Fig. 2. The loop diagrams cancel for isoscalar electromagnetic couplings, so that we obtain
\[\Gamma^{(1)}_{VW}\Big{|}^{\rm HB\chi PT} =e^{2}\left(V_{1}+V_{2}-\frac{1}{2}\int\frac{i{\rm d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{q^{2}\left(q^{2}-\lambda_{\gamma}^{2}\right)}+\frac{1-\xi}{2}\int\frac{i{\rm d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{\left(q^{2}-\lambda_{\gamma}^{2}\right)\left(q^{2}-\xi\lambda_{\gamma}^{2}\right)}\right), \tag{55}\] \[\Gamma^{(0)}_{VW}\Big{|}^{\rm HB\chi PT} =e^{2}\left(V_{3}+V_{4}\right). \tag{56}\]
In the LEFT, the isovector and isoscalar components are given by
\[\bar{e}\not{v}P_{L}\nu_{e}\,\,\Gamma^{(1)}_{VW}\Big{|}^{\rm LEFT} =e^{2}\bar{e}\not{v}P_{L}\nu_{e}\left(\frac{g_{02}-g_{03}}{4}+\frac{1-\xi}{2}\int\frac{i{\rm d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{\left(q^{2}-\lambda_{\gamma}^{2}\right)\left(q^{2}-\xi\lambda_{\gamma}^{2}\right)}\right)\] \[-\frac{e^{2}}{2}\int\frac{{\rm d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{q^{2}\left(q^{2}-\lambda_{\gamma}^{2}\right)}\bar{e}\gamma_{\mu}\not{q}\gamma_{\nu}P_{L}\nu_{e}\,\left(T^{\mu\nu}_{VV}(q,v)-T^{\mu\nu}_{VA}(q,v)\right), \tag{57}\] \[\bar{e}\not{v}P_{L}\nu_{e}\,\,\Gamma^{(0)}_{VW}\Big{|}^{\rm LEFT} =-e^{2}\bar{e}\not{v}P_{L}\nu_{e}\frac{g_{02}+g_{03}}{12}\] \[+\frac{e^{2}}{2}\int\frac{i{\rm d}^{d}q}{\left(2\pi\right)^{d}}\frac{1}{q^{2}\left(q^{2}-\lambda_{\gamma}^{2}\right)}\bar{e}\gamma_{\mu}\not{q}\gamma_{\nu}P_{L}\nu_{e}\,\left(T^{\mu\nu}_{VV,\,0}(q,v)-T^{\mu\nu}_{VA,\,0}(q,v)\right). \tag{58}\]
The hadronic tensors with two isovector currents are defined in Eq. (47), while we define the hadronic tensor with one isoscalar vector current as
\[T^{\mu\nu}_{VV(A),\,0}\left(q,v\right)=\frac{\tau^{a}_{ij}\delta^{\sigma^{ \prime}\sigma}}{12}\frac{i}{6}\int{\rm d}^{d}x\,e^{iq\cdot x}\langle N(k, \sigma^{\prime},j)|T\left[\bar{q}\gamma^{\mu}q\left(x\right)\bar{q}\gamma^{ \nu}\left(\gamma_{5}\right)\tau^{a}q(0)\right]|N(k,\sigma,i)\rangle. \tag{59}\]
As in Sec. 4.2, the UV divergences in the LEFT are determined by the operator product expansion. In NDR, the leading-order OPEs of \(T^{\mu\nu}_{VV}-T^{\mu\nu}_{VA}\) and \(T^{\mu\nu}_{VV,\,0}-T^{\mu\nu}_{VA,\,0}\) are proportional to the symmetric and antisymmetric combinations of Dirac matrices \((\gamma^{\mu}\not{q}\gamma^{\nu}\pm\gamma^{\nu}\not{q}\gamma^{\mu})P_{L}\), respectively. The symmetric combination does not depend on the scheme, while the antisymmetric piece depends on the definition of the evanescent operators, in such a way as to compensate the dependence of the couplings in the LEFT. Using the OPE, we obtain
\[\bar{e}\gamma_{\mu}\not{q}\gamma_{\nu}P_{L}\nu_{e}\left(T^{\mu\nu}_{VV}-T^{\mu\nu}_{VA}\right)\big{|}_{\rm OPE} =i\left[\frac{3d-2}{d}-\frac{1}{2}\frac{\alpha_{s}}{\pi}\right]\frac{q^{2}}{q^{2}-\mu_{0}^{2}}\,\bar{e}\not{v}P_{L}\nu_{e}, \tag{60}\] \[\bar{e}\gamma_{\mu}\not{q}\gamma_{\nu}P_{L}\nu_{e}\left(T^{\mu\nu}_{VV,0}-T^{\mu\nu}_{VA,0}\right)\Big{|}_{\rm OPE} =\left[\frac{1}{d}\left((4-d)\left(1+\frac{4a}{3}\right)-2\right)+\frac{1}{2}\frac{\alpha_{s}}{\pi}\right]\frac{q^{2}}{q^{2}-\mu_{0}^{2}}\,\bar{e}\not{v}P_{L}\nu_{e}. \tag{61}\]
The integrals of the subtracted hadronic tensors \(\overline{T}\) are convergent, so that we can perform the Dirac algebra on the leptonic leg in \(d=4\) dimensions. Putting everything together, we arrive at the matching
equations
\[2(V_{1}+V_{2})(\mu_{\chi},\mu) =\int\frac{\mathrm{d}^{4}q}{\left(2\pi\right)^{4}}\frac{1}{q^{2}\left(q^{2}-\lambda_{\gamma}^{2}\right)}\,\left(v\cdot q\,g_{\mu\nu}\overline{T}_{VV}^{\mu\nu}(q,v)+i\varepsilon_{\mu\rho\nu\sigma}q^{\rho}v^{\sigma}\overline{T}_{VA}^{\mu\nu}(q,v)\right)\] \[+\frac{1}{(4\pi)^{2}}\left[2\ln\frac{\mu^{2}}{\lambda_{\gamma}^{2}}+\frac{1}{2}\left(1-\frac{\alpha_{s}}{\pi}\right)\ln\frac{\mu^{2}}{\mu_{0}^{2}}-\ln\frac{\mu_{\chi}^{2}}{\lambda_{\gamma}^{2}}+\frac{9}{4}+(1-\xi)\left(\ln\frac{\mu_{\chi}^{2}}{\mu^{2}}-1\right)\right], \tag{62}\] \[2(V_{3}+V_{4})(a,\mu_{\chi},\mu) =-\int\frac{\mathrm{d}^{4}q}{(2\pi)^{4}}\frac{1}{q^{2}\left(q^{2}-\lambda_{\gamma}^{2}\right)}\left(v\cdot q\,g_{\mu\nu}\overline{T}_{VV,\,0}^{\mu\nu}(q,v)+i\varepsilon_{\mu\rho\nu\sigma}q^{\rho}v^{\sigma}\overline{T}_{VA,\,0}^{\mu\nu}(q,v)\right)\] \[+\frac{1}{(4\pi)^{2}}\left[\frac{1}{2}\left(1-\frac{\alpha_{s}}{\pi}\right)\ln\frac{\mu^{2}}{\mu_{0}^{2}}+\frac{3-8a}{12}\right]. \tag{63}\]
To obtain the second line of Eqs. (62) and (63), we used the Ward identities on the subtracted tensors,
\[q_{\mu}\overline{T}_{VV}^{\mu\nu}(q,v)=iv^{\nu}\left(1-\frac{q^{2}}{q^{2}-\mu _{0}^{2}}\right),\qquad q_{\mu}\overline{T}_{VV,\,0}^{\mu\nu}(q,v)=0, \tag{64}\]
the symmetry (antisymmetry) of unpolarized hadronic tensors \(T_{VV(0)}^{\mu\nu}\) (\(T_{VA(0)}^{\mu\nu}\)) under \(\mu\leftrightarrow\nu\), and, in the contractions with the Levi-Civita tensor, we replaced
\[\bar{e}\gamma^{\sigma}P_{L}\nu_{e}\to\bar{e}\not{v}v^{\sigma}P_{L}\nu_{e}+\bar {e}\left(\gamma^{\sigma}-\not{v}v^{\sigma}\right)P_{L}\nu_{e}. \tag{65}\]
The non-perturbative QCD input in the LECs is encoded in the subtracted hadronic tensors \(\overline{T}_{VV}\), \(\overline{T}_{VA}\), \(\overline{T}_{VV,\,0}\), and \(\overline{T}_{VA,\,0}\). Using time reversal and crossing symmetry [2, 33], we can show that the scalar functions in the matching equations (62) and (63) are odd or even under \(q\to-q\); explicitly, we have
\[g_{\mu\nu}\overline{T}_{VV}^{\mu\nu}(q^{2},v\cdot q) =-g_{\mu\nu}\overline{T}_{VV}^{\mu\nu}(q^{2},-v\cdot q),\quad i \varepsilon_{\mu\rho\nu\sigma}q^{\rho}v^{\sigma}\overline{T}_{VA}^{\mu\nu}(q^ {2},v\cdot q)=-i\varepsilon_{\mu\rho\nu\sigma}q^{\rho}v^{\sigma}\overline{T} _{VA}^{\mu\nu}(q^{2},-v\cdot q), \tag{66}\] \[g_{\mu\nu}\overline{T}_{VV,\,0}^{\mu\nu}(q^{2},v\cdot q) =g_{\mu\nu}\overline{T}_{VV,\,0}^{\mu\nu}(q^{2},-v\cdot q),\quad i \varepsilon_{\mu\rho\nu\sigma}q^{\rho}v^{\sigma}\overline{T}_{VA,\,0}^{\mu \nu}(q^{2},v\cdot q)=i\varepsilon_{\mu\rho\nu\sigma}q^{\rho}v^{\sigma} \overline{T}_{VA,\,0}^{\mu\nu}(q^{2},-v\cdot q), \tag{67}\]
where we indicated that the functions depend only on the invariants
\[Q^{2}=-q^{2},\qquad\nu=v\cdot q. \tag{68}\]
As a consequence of Eqs. (66) and (67), \(T_{VA}\) and \(T_{VV,\,0}\) do not contribute to the matching, and the final expressions for the combinations of LECs \(V_{1}+V_{2}\) and \(V_{3}+V_{4}\) are
\[2(V_{1}+V_{2})(\mu_{\chi},\mu) =\int\frac{\mathrm{d}^{4}q}{\left(2\pi\right)^{4}}\frac{1}{q^{2} \left(q^{2}-\lambda_{\gamma}^{2}\right)}\,v\cdot q\,g_{\mu\nu}\overline{T}_{VV }^{\mu\nu}(q,v)\] \[+\frac{1}{(4\pi)^{2}}\left[2\ln\frac{\mu^{2}}{\lambda_{\gamma}^{2 }}+\frac{1}{2}\left(1-\frac{\alpha_{s}}{\pi}\right)\ln\frac{\mu^{2}}{\mu_{0}^{ 2}}-\ln\frac{\mu_{\chi}^{2}}{\lambda_{\gamma}^{2}}+\frac{9}{4}+(1-\xi)\left( \ln\frac{\mu_{\chi}^{2}}{\mu^{2}}-1\right)\right], \tag{69}\] \[2(V_{3}+V_{4})(a,\mu_{\chi},\mu) =-\int\frac{\mathrm{d}^{4}q}{(2\pi)^{4}}\frac{1}{q^{2}\left(q^{2}- \lambda_{\gamma}^{2}\right)}i\varepsilon_{\mu\rho\nu\sigma}q^{\rho}v^{\sigma} \overline{T}_{VA,\,0}^{\mu\nu}(q,v)\] \[+\frac{1}{(4\pi)^{2}}\left[\frac{1}{2}\left(1-\frac{\alpha_{s}}{ \pi}\right)\ln\frac{\mu^{2}}{\mu_{0}^{2}}+\frac{3-8a}{12}\right]. \tag{70}\]
Note that in this framework, the LECs depend not only on the chiral renormalization scale (\(\mu_{\chi}\)), but also on the LEFT renormalization scale (\(\mu\)) and the schemes adopted for \(\gamma_{5}\) and the evanescent operators.
## 5 Corrections to \(g_{V}\)
In this Section, we combine the coupling constants of the heavy-baryon chiral perturbation theory into the counterterm of \(g_{V}\) in \(\not\!\pi\)EFT. We subsequently evaluate the non-perturbative inputs to the vector coupling constant, resum logarithms between the chiral and electron-mass scales, and provide numerical results for \(g_{V}\).
### Matching at the baryon-mass scale
Having determined the electroweak coupling constants \(V_{1}\)-\(V_{4}\) and the electromagnetic coupling constant \(g_{9}\), we can evaluate the \({\cal O}(\alpha)\) contribution to \(g_{V}\) in the low-energy effective theory, cf. Eqs. (2) and (3). These corrections are known in the literature as "inner" radiative corrections.
Before getting to the final result, we can combine the LECs that depend on the \(VV\) hadronic tensor, \(g_{9}\) and \(V_{1}+V_{2}\), and the lepton wavefunction renormalization \(X_{6}\), obtaining
\[\left(-\frac{X_{6}}{2}+2\left(V_{1}+V_{2}\right)-g_{9}\right)( \mu_{\chi},\mu) =\frac{1}{\left(4\pi\right)^{2}}\left[1+\frac{3}{2}\left(1-\ln \frac{\mu_{\chi}^{2}}{\mu^{2}}\right)\right]\] \[-\int\frac{\mathrm{d}^{4}q}{\left(2\pi\right)^{4}}\frac{\lambda_ {\gamma}^{2}}{q^{2}\left(q^{2}-\lambda_{\gamma}^{2}\right)^{2}}\,v\cdot q\,g_ {\mu\nu}\overline{T}_{VV}^{\mu\nu}(q,v), \tag{71}\]
which is independent of the gauge parameter \(\xi\). \(T_{VV}\) enters this combination of LECs multiplied by the IR regulator \(\lambda_{\gamma}^{2}\). The only contribution to the integral can thus come from the infrared limit of \(T_{VV}\), where the hadronic tensor is well approximated by the elastic piece. The integral over the hadronic tensor then only leaves behind a finite piece, yielding
\[\left(-\frac{X_{6}}{2}+2\left(V_{1}+V_{2}\right)-g_{9}\right)(\mu_{\chi},\mu) =\frac{1}{(4\pi)^{2}}\frac{3}{2}\left(1-\ln\frac{\mu_{\chi}^{2}}{\mu^{2}} \right). \tag{72}\]
Thus, the only contributions to \(-\frac{X_{6}}{2}+2\left(V_{1}+V_{2}\right)-g_{9}\) are due to the different renormalization scales, \(\mu\) vs \(\mu_{\chi}\), and the different subtraction scheme commonly used in HBChPT, \(\overline{\mathrm{MS}}_{\chi}\) vs \(\overline{\mathrm{MS}}\).
The other combination of LECs \(V_{3}+V_{4}\) is conveniently expressed in terms of the scalar amplitude \(T_{3}\left(\nu,Q^{2}\right)\) as
\[2(V_{3}+V_{4})(a,\mu_{\chi},\mu)=\frac{1}{(4\pi)^{2}}\left[\frac{1}{2}\left( 1-\frac{\alpha_{s}}{\pi}\right)\ln\frac{\mu^{2}}{\mu_{0}^{2}}+\frac{3-8a}{12} \right]-\int\frac{i\mathrm{d}^{4}q}{(2\pi)^{4}}\frac{\nu^{2}+Q^{2}}{Q^{4}} \frac{\overline{T}_{3}(\nu,Q^{2})}{2m_{N}\nu}, \tag{73}\]
where we defined the amplitude \(T_{3}\) from the tensor decomposition of the hadronic tensor as [85, 86, 87, 88, 89, 90]5
Footnote 5: Note that \(T_{3}\) defined in this paper is equal to \(i\) times the \(T_{3}\) defined in [45], which in turn is twice as large as the \(T_{3}\) defined in [1].
\[T_{VA,0}^{\mu\nu}=i\varepsilon^{\mu\nu\sigma\rho}q_{\rho}v_{\sigma}\frac{T_{3 }}{4m_{N}\nu}+\ \cdots, \tag{74}\]
with the OPE-subtracted expression
\[\overline{T}_{3}(\nu,Q^{2})=T_{3}(\nu,Q^{2})-\frac{4}{3}\frac{m_{N}\nu}{Q^{2} +\mu_{0}^{2}}\,\left(1-\frac{\alpha_{s}}{\pi}\right). \tag{75}\]
In the OPE, we have retained the \({\cal O}(\alpha_{s})\) correction, which is needed to cancel the \(\mu\)-dependent term proportional to \(\alpha\alpha_{s}\ln(M_{W}/\mu)\) in \(C_{\beta}^{r}\). To the order we are working, we can use \(\alpha_{s}(\mu)\) at any \(\mu\) where QCD is perturbative. We will use \(\alpha_{s}(\mu_{0})\) in what follows.
Combining the HBChPT coupling constants into the \(\not{\pi}\)EFT counterterm \(\hat{C}_{V}\) according to Eqs. (2), (3), (72), and (73), we arrive at the matching condition
\[g_{V}\left(\mu_{\chi}\right) =C_{\beta}^{r}\left(a,\mu\right)\Bigg{[}1-\frac{\alpha\left(\mu_{ \chi}\right)}{2\pi}\left(2B(a)+\frac{5}{8}+\frac{3}{4}\ln\frac{\mu_{\chi}^{2}} {\mu_{0}^{2}}+\left(1-\frac{\alpha_{s}}{4\pi}\right)\ln\frac{\mu_{0}^{2}}{\mu^ {2}}\right)\] \[-e^{2}\int\frac{i\mathrm{d}^{4}q}{\left(2\pi\right)^{4}}\frac{ \nu^{2}+Q^{2}}{Q^{4}}\frac{\overline{T}_{3}(\nu,Q^{2})}{2m_{N}\nu}\Bigg{]}, \tag{76}\]
where we resummed logarithms in the Wilson coefficient \(C_{\beta}^{r}\left(a,\mu\right)\), as described in Section 3.1. This expression does not contain electroweak-scale parameters or artificial hadronic scales, apart from the dependence contained in the coupling constant \(C_{\beta}^{r}\left(a,\mu\right)\). The vector coupling \(g_{V}\left(\mu_{\chi}\right)\) does not depend on the scale and scheme used in the LEFT at the one-loop level.
We can further simplify the expression for \(g_{V}(\mu_{\chi})\) and connect it to the previous literature. First, we eliminate the evanescent scheme dependence by defining the scheme-independent NLO Wilson coefficient [68]
\[\overline{C}_{\beta}^{r}(\mu)=\frac{C_{\beta}^{r}(a,\mu)}{1+\frac{\alpha(\mu )}{\pi}B(a)}, \tag{77}\]
which can be immediately read off from Eq. (16). We then have
\[g_{V}\left(\mu_{\chi}\right)=\overline{C}_{\beta}^{r}\left(\mu\right)\left[1 +\overline{\square}_{\mathrm{Had}}^{V}(\mu_{0})-\frac{\alpha\left(\mu_{\chi} \right)}{2\pi}\left(\frac{5}{8}+\frac{3}{4}\ln\frac{\mu_{\chi}^{2}}{\mu_{0}^{ 2}}+\left(1-\frac{\alpha_{s}}{4\pi}\right)\ln\frac{\mu_{0}^{2}}{\mu^{2}} \right)\right], \tag{78}\]
where the non-perturbative input is in the "subtracted" hadronic contribution \(\overline{\square}_{\mathrm{Had}}^{V}(\mu_{0})\), which is closely related to the standard \(\square_{\gamma W}^{V}\) of Refs. [1, 2, 39]
\[\overline{\square}_{\mathrm{Had}}^{V}(\mu_{0}) =-e^{2}\int\frac{i\mathrm{d}^{4}q}{\left(2\pi\right)^{4}}\frac{ \nu^{2}+Q^{2}}{Q^{4}}\left[\frac{T_{3}(\nu,Q^{2})}{2m_{N}\nu}-\frac{2}{3} \frac{1}{Q^{2}+\mu_{0}^{2}}\,\left(1-\frac{\alpha_{s}(\mu_{0}^{2})}{\pi} \right)\right], \tag{79}\] \[\square_{\gamma W}^{V} =-e^{2}\int\frac{i\mathrm{d}^{4}q}{\left(2\pi\right)^{4}}\frac{M_ {W}^{2}}{Q^{2}+M_{W}^{2}}\frac{\nu^{2}+Q^{2}}{Q^{4}}\frac{T_{3}(\nu,Q^{2})}{2 m_{N}\nu}. \tag{80}\]
We will evaluate the non-perturbative input in Eq. (79) in Sec. 5.2.
Eq. (78) encodes the so-called "inner" radiative corrections to the Fermi transitions in the EFT language in the form of a \(\mu_{\chi}\)-dependent coupling \(g_{V}(\mu_{\chi})\), which appears in the effective Lagrangian of Eq. (1). Once all large electroweak logarithms are resummed via the RGE in \(\overline{C}_{\beta}(\mu)\), Eq. (78) does not contain additional large logarithms when the scales \(\mu_{\chi}\), \(\mu\), and \(\mu_{0}\) are similar and of order \(\Lambda_{\chi}\sim 1\) GeV. As shown below, the \(\mu_{\chi}\)-scale dependence in \(g_{V}(\mu_{\chi})\) is canceled in physical amplitudes by the \(\mu_{\chi}\) dependence of the virtual photon corrections computed in the pionless theory. Since the only scale of these loops is \(\mathcal{O}(m_{e})\), we will evolve \(g_{V}(\mu_{\chi})\) down to the scale \(\mu_{\chi}\sim m_{e}\) in order to avoid large logarithms, see Sec. 5.3.
### Evaluation of the non-perturbative input
As shown in Refs. [1, 2], the box function can be represented as a one-dimensional integral over the \(Q^{2}>0\) variable
\[\square_{\gamma W}^{V}=\frac{\alpha}{8\pi}\int_{0}^{\infty}\mathrm{d}Q^{2} \frac{M_{W}^{2}}{M_{W}^{2}+Q^{2}}F(Q^{2}), \tag{81}\]
where \(F(Q^{2})=(12/Q^{2})M_{3}^{(0)}(1,Q^{2})\) and \(M_{3}^{(0)}(1,Q^{2})\) is the first Nachtmann moment of the structure function defined in terms of the imaginary part of \(T_{3}(\nu,Q^{2})\). Following Refs. [1, 2], it is useful to isolate
the well-defined elastic contribution to \(F(Q^{2})\), which we denote by \(F_{\rm el}(Q^{2})\), known in terms of the nucleon isoscalar magnetic vector and axial form factors, and define
\[F(Q^{2})=F_{\rm el}(Q^{2})+\overline{F}(Q^{2}), \tag{82}\]
where \(\overline{F}(Q^{2})\) includes inelastic contributions. For \(Q^{2}\leq Q_{0}^{2}=2\) GeV\({}^{2}\),6 \(\overline{F}(Q^{2})\) contains contributions from the resonance region and the so-called Regge region. Current knowledge is based on modeling [1, 2, 3, 4, 5] and lattice QCD input [6]. For \(Q^{2}\geq Q_{0}^{2}=2\) GeV\({}^{2}\), one enters the deep inelastic scattering (DIS) region, controlled by the OPE with Wilson coefficients computed in perturbative QCD (pQCD). The OPE representation of \(\overline{F}(Q^{2})\) is known to leading order in \(1/Q^{2}\), with coefficients known to \({\cal O}(\alpha_{s}^{4})\) [3, 91, 92, 93]:
Footnote 6: The value of \(Q_{0}\) is somewhat arbitrary and here we follow the choice of Refs. [1, 2].
\[\overline{F}_{\rm DIS}(Q^{2})=\frac{1}{Q^{2}}\left(1-\Delta(Q^{2})\right), \qquad\qquad\Delta(Q^{2})=\sum_{n=1}^{4}\tilde{c}_{n}\left(\frac{\alpha_{s}(Q )}{\pi}\right)^{n}. \tag{83}\]
In practice, we will use only the \(n=1\) term (with coefficient \(\tilde{c}_{1}=1\)) in \(\Delta(Q^{2})\), as higher-order terms are beyond the accuracy of our NLL LEFT analysis. Moreover, for consistency with the OPE terms that we subtract in the matching procedure, we will use \(\Delta(Q^{2})\to\Delta(\mu_{0}^{2})\) in \(\overline{F}_{\rm DIS}(Q^{2})\).
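As a small numerical aside (our own sketch, not part of the matching), one can check that the DIS tail of the box integral in Eq. (81), evaluated with the leading-order form of Eq. (83), reduces to the logarithm that appears in Eq. (85) below; the \(W\)-mass value used here is an input assumption of the sketch.

```python
# Hypothetical cross-check: integrate the DIS tail of Eq. (81) with
# F_DIS(Q^2) = (1 - Delta)/Q^2 from Eq. (83) and compare with the closed-form
# logarithm (1 - Delta) ln(M_W^2/Q_0^2) of Eq. (85).
import numpy as np
from scipy.integrate import quad

MW2 = 80.4**2    # GeV^2, W-boson mass squared (assumed input)
Q02 = 2.0        # GeV^2, matching point Q_0^2
Delta = 0.0      # pQCD correction Delta(mu_0^2); set to zero for the check

integrand = lambda Q2: MW2 / (MW2 + Q2) * (1 - Delta) / Q2
tail, _ = quad(integrand, Q02, np.inf)

print(tail)                               # numerical DIS-tail integral
print((1 - Delta) * np.log(MW2 / Q02))    # agrees up to O(Q_0^2/M_W^2)
```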
In terms of the quantities defined above, the subtracted hadronic contribution reads
\[\overline{\Box}_{\rm Had}^{V}(\mu_{0})=\frac{\alpha}{8\pi}\int_{0}^{\infty}{ \rm d}Q^{2}\left[F_{\rm el}(Q^{2})+\overline{F}(Q^{2})-\frac{1}{Q^{2}+\mu_{0} ^{2}}\left(1-\Delta(\mu_{0}^{2})\right)\right]. \tag{84}\]
Isolating the elastic contribution and separating the integration into the regions below and above \(Q_{0}^{2}=(\sqrt{2}\ \mathrm{GeV})^{2}\), we find
\[\Box_{\gamma W}^{V} =\Box_{\gamma W}^{V}\Big{|}_{\rm el}+\frac{\alpha}{8\pi}\int_{0} ^{Q_{0}^{2}}{\rm d}Q^{2}\ \overline{F}(Q^{2})\ +\ \frac{\alpha}{8\pi}\Big{(}1-\Delta(\mu_{0}^{2})\Big{)}\ln\frac{M_{W}^{2}}{Q_{0 }^{2}}+{\cal O}\left(\frac{Q_{0}^{2}}{M_{W}^{2}}\right), \tag{85}\] \[\overline{\Box}_{\rm Had}^{V}(\mu_{0}) =\ \Box_{\gamma W}^{V}\Big{|}_{\rm el}+\frac{\alpha}{8\pi}\int_{0} ^{Q_{0}^{2}}{\rm d}Q^{2}\ \overline{F}(Q^{2})\ +\ \frac{\alpha}{8\pi}\Big{(}1-\Delta(\mu_{0}^{2})\Big{)}\ln\frac{\mu_{0}^{2}}{Q_{0 }^{2}}, \tag{86}\]
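Subtracting Eq. (86) from Eq. (85), the elastic and low-\(Q^{2}\) inelastic pieces cancel, and the two box quantities differ only by a perturbative logarithm,

\[\Box_{\gamma W}^{V}-\overline{\Box}_{\rm Had}^{V}(\mu_{0})=\frac{\alpha}{8\pi}\Big{(}1-\Delta(\mu_{0}^{2})\Big{)}\ln\frac{M_{W}^{2}}{\mu_{0}^{2}}+{\cal O}\left(\frac{Q_{0}^{2}}{M_{W}^{2}}\right),\]

which makes explicit that \(\overline{\Box}_{\rm Had}^{V}(\mu_{0})\) is free of the electroweak logarithm resummed in the Wilson coefficient.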
Numerically, for the non-perturbative contributions we find
\[\Box_{\gamma W}^{V}\Big{|}_{\rm el}=1.030(48)\times 10^{-3}, \tag{87a}\] \[\int_{0}^{Q_{0}^{2}}{\rm d}Q^{2}\ \overline{F}(Q^{2})\qquad\quad \longrightarrow\qquad\qquad\delta\overline{\Box}_{\rm Had}^{V}\Big{|}_{\rm Regge +Res.}=(0.49(11)+0.04(1))\times 10^{-3}. \tag{87b}\]
We evaluated the elastic contribution with the isoscalar magnetic vector form factor, which is extracted from experimental \(ep\) and \(en\) scattering data, measurements of the neutron scattering length, and \(\mu\)H spectroscopy [94]. For the axial-vector form factor, we use the fit to the experimental \(\nu_{\mu}d\) scattering data from Ref. [95]. Our result is in reasonable agreement with previous evaluations of the elastic contribution to \(\Box_{\gamma W}^{V}\), giving \((1.05\pm 0.04)\times 10^{-3}\)[4], \((1.06\pm 0.06)\times 10^{-3}\)[1, 2, 45], \((1.06\pm 0.06)\times 10^{-3}\)[5], and \((0.99\pm 0.10)\times 10^{-3}\)[3], but contains an improved uncertainty estimate since our errors are directly propagated from the experimental data.
Up to negligible contributions of \({\cal O}(Q_{0}^{2}/M_{W}^{2})\), the integral of \(\overline{F}(Q^{2})\) between \(0\) and \(Q_{0}^{2}\) in Eq. (87b) coincides with the \(Q^{2}\leq Q_{0}^{2}\) inelastic piece of the "box diagram", recently considered in the literature [1, 2, 3, 4, 5, 6]. The result is usually written as the sum of the "Regge" plus "Resonance" contributions. The various evaluations in the literature have been recently combined by Ref. [8], leading to the numbers used in Eq. (87b). This part of our result is fully correlated with previous work and carries the dominant contribution to the error budget for the radiative corrections.
### RG evolution of \(g_{V}\) below the baryon scale
To account for higher-order perturbative logarithms, which are needed for precise predictions of \(\beta\)-decay rates and (anti)neutrino-nucleon scattering, we evolve the low-energy coupling constant \(g_{V}(\mu_{\chi})\) from the matching scale \(\mu_{\chi}\sim\Lambda_{\chi}\) to the physical scale \(\mu_{\chi}\sim m_{e}\) using the one- and two-loop anomalous dimensions. The vector coupling constant evolves according to
\[\mu_{\chi}\frac{\mathrm{d}g_{V}\left(\mu_{\chi}\right)}{\mathrm{d} \mu_{\chi}} =\gamma(\alpha)\ g_{V}\left(\mu_{\chi}\right), \tag{88a}\] \[\gamma(\alpha) =\tilde{\gamma}_{0}\,\frac{\alpha}{\pi}+\tilde{\gamma}_{1}\left( \frac{\alpha}{\pi}\right)^{2}\ +\ \cdots,\] (88b) \[\tilde{\gamma}_{0} =-\frac{3}{4},\] (88c) \[\tilde{\gamma}_{1} =\frac{5\tilde{n}}{24}+\frac{5}{32}-\frac{\pi^{2}}{6}, \tag{88d}\]
with the effective number of particles \(\tilde{n}\), as described in Appendix A.2. The appropriate one-loop anomalous dimension \(\tilde{\gamma}_{0}\) has been identified in Refs. [38, 44, 46, 51, 96, 97].7 It can also be extracted from calculations of the "heavy-light" current QCD anomalous dimension in the context of heavy quark physics, as for example in Refs. [98, 99]. As discussed in Appendix C, we can exploit this analogy to extract the QED two-loop anomalous dimension \(\tilde{\gamma}_{1}\), by adapting the results from Ref. [100] (see also Refs. [101, 102, 103]). The above expression for \(\tilde{\gamma}_{1}\) only includes two-loop diagrams involving two virtual photons in the pionless theory. Possible contributions arising from diagrams involving pions and photons are not included. Note that the term in \(\tilde{\gamma}_{1}\) proportional to \(\pi^{2}\) can lead to contributions to the decay rate that scale as \(\alpha^{2}\ln\left(m_{N}/m_{e}\right)\), larger than a typical two-loop contribution.
Footnote 7: Note that the one-loop anomalous dimension in the theory with relativistic nucleons is factor 2 larger than \(\tilde{\gamma}_{0}\) in Eqs. (88) and, therefore, our coupling constant can be used for the calculation of radiative corrections only in the theory with heavy nucleons.
Using the expression for the evolution operator in Eq. (19), we solve the RGE in (88) and resum the leading and subleading logarithms between particle thresholds according to
\[g_{V}\left(\mu_{\chi}\right) =\tilde{U}(\mu_{\chi},m_{\mu})\tilde{U}(m_{\mu},m_{\pi})\tilde{U}( m_{\pi},\Lambda_{\chi})g_{V}(\Lambda_{\chi})\,,\] \[\tilde{U}(\mu_{1},\mu_{2}) =\left(\frac{\alpha\left(\mu_{1}\right)}{\alpha\left(\mu_{2} \right)}\right)^{-2\tilde{\gamma}_{0}/\tilde{\beta}_{0}}\left[1-\frac{2\tilde {\gamma}_{1}(\mu_{1})}{\tilde{\beta}_{0}(\mu_{1})}\frac{\alpha(\mu_{1})-\alpha (\mu_{2})}{\pi}\right]\,. \tag{89}\]
Below the baryon scale, we determine \(\alpha\) from its value in the Thomson limit by evolving it up in scale with the electron, muon, and charged pion as active degrees of freedom, which leads to
\[\tilde{\beta}_{0}=-\frac{4}{3}\sum_{\ell=e,\mu}Q_{\ell}^{2}\theta(\mu-m_{\ell })-\frac{1}{3}Q_{\pi}^{2}\theta(\mu-m_{\pi})\,. \tag{90}\]
See Appendix A for details on the definition of the fine-structure constant both in LEFT and \(\chi\)PT.
In Eq. (89), \(g_{V}(\Lambda_{\chi})\) is obtained by evaluating Eq. (78) at \(\mu_{\chi}=\Lambda_{\chi}\sim m_{N}\). Note that both \(\tilde{\gamma}_{0}\) and \(\tilde{\gamma}_{1}\) are negative, implying \(g_{V}(m_{e})/g_{V}(\Lambda_{\chi})>1\).
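As an illustration, the LL part of this evolution is simple to evaluate numerically. The sketch below (ours, not from the paper) implements Eqs. (89) and (90) under the assumptions that \(\alpha\) runs at one loop as \(\mu\,{\rm d}\alpha/{\rm d}\mu=-\tilde{\beta}_{0}\alpha^{2}/(2\pi)\), that \(\Lambda_{\chi}\) is identified with \(m_{p}\) (cf. footnote 9), and that the Thomson value of \(\alpha\) can be used at \(\mu_{\chi}=m_{e}\); it reproduces the LL ratio \(g_{V}(m_{e})/g_{V}(m_{p})\approx 1.0133\) quoted in Section 5.4.

```python
# Minimal sketch (ours) of the LL running of g_V between m_e and m_p,
# implementing the evolution operator of Eq. (89) with the thresholds of Eq. (90).
import math

ME, MMU, MPI, MP = 0.511, 105.658, 139.570, 938.272   # masses in MeV (inputs)
ALPHA_ME = 1 / 137.036        # alpha at mu ~ m_e (Thomson value, assumption)
GAMMA0 = -3 / 4               # one-loop anomalous dimension, Eq. (88c)

def beta0(mu):
    """Effective one-loop coefficient of Eq. (90) for mu in [m_e, m_p] (Q = 1)."""
    b = -4 / 3 * sum(mu >= m for m in (ME, MMU))   # electron and muon loops
    if mu >= MPI:
        b -= 1 / 3                                 # charged-pion loop
    return b

def run_alpha(alpha_lo, mu_lo, mu_hi):
    """One-loop running of alpha across a single threshold interval."""
    return 1 / (1 / alpha_lo + beta0(mu_lo) / (2 * math.pi) * math.log(mu_hi / mu_lo))

alpha, ratio = ALPHA_ME, 1.0
for lo, hi in [(ME, MMU), (MMU, MPI), (MPI, MP)]:
    alpha_hi = run_alpha(alpha, lo, hi)
    ratio *= (alpha / alpha_hi) ** (-2 * GAMMA0 / beta0(lo))   # LL factor of Eq. (89)
    alpha = alpha_hi

print(ratio)   # ~1.0133, cf. the LL value in Eq. (94b)
```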
### Numerical results and uncertainty estimates
We next present numerical results for the vector coupling \(g_{V}(m_{e})\) and discuss the various sources of uncertainty. We start by providing some intermediate results that illustrate the impact of corrections at various orders in our RGE analysis.
For the semileptonic Wilson coefficient \(C_{\beta}^{r}\), we include \(\alpha,\ \alpha\alpha_{s}\), and \(\alpha^{2}\) contributions to the running, as described in Section 3.1.8 To illustrate the effect of running from the electroweak to GeV scales, we provide results for the fixed order (LO) \(C_{\beta}(m_{c})=1+(\alpha(m_{c})/\pi)\ln(M_{Z}/m_{c})\), leading logarithms (LL), next-to-leading logarithms NLL1, which includes the anomalous dimensions up to order \(\alpha\alpha_{s}\), and next-to-leading logarithms NLL2, including the anomalous dimensions up to orders \(\alpha\alpha_{s}\) and \(\alpha^{2}\). For the initial conditions, we specify
Footnote 8: We perform the one-loop running for \(\alpha\left(\mu\right)\) and \(\alpha_{s}\left(\mu\right)\) in LEFT, consistently with the order of our calculation. We have checked that using the higher-order couplings as in Ref. [66] modifies our final results at the level of \(0.001\%\).
\[C_{\beta}^{\rm LL,NLL1}(M_{W}) =1+\frac{\alpha(M_{W})}{\pi}\ln\frac{M_{Z}}{M_{W}}, \tag{91}\] \[C_{\beta}^{\rm NLL2}(M_{W}) =1+\frac{\alpha(M_{W})}{\pi}\ln\frac{M_{Z}}{M_{W}}+\frac{\alpha (M_{W})}{\pi}\,B(a=-1). \tag{92}\]
After numerically solving the RGEs, we obtain the following values for the effective couplings at \(\mu=m_{c}\):
\[C_{\beta}^{\rm LO}(m_{c}) =1.01014, \tag{93a}\] \[C_{\beta}^{\rm LL}(m_{c}) =1.01043,\] (93b) \[C_{\beta}^{\rm NLL1}(m_{c}) =1.01027,\] (93c) \[\overline{C}_{\beta}^{\rm NLL2}(m_{c}) =1.01018. \tag{93d}\]
The effects of NLL1 and NLL2 resummations combine to essentially "undo" the effect of LL resummation. The final result is very close to the perturbative one. The numerical solution of the RGEs agrees with the analytic solutions provided in Section 3.1. Our result for the NLL1 correction is consistent with the finding of Ref. [71]. The impact of NLL2 corrections in our result is more than a factor of 2 larger than in Ref. [38], reflecting the difference discussed in Section 3.1.
For the running of the vector coupling constant \(g_{V}\), we include the \(\mathcal{O}(\alpha)\) and \(\mathcal{O}(\alpha^{2})\) anomalous dimensions, as described in Section 5.3. 9 We provide the relative running contributions for the one-loop logarithm (LO), namely \(g_{V}(m_{e})/g_{V}(m_{p})|_{\rm LO}=1+(3/4)(\alpha/\pi)\ln(m_{p}/m_{e})\), the LL resummation, where we include only \(\tilde{\gamma}_{0}\) in the RGE, and NLL resummation, where we also include \(\tilde{\gamma}_{1}\) in the RGE:
Footnote 9: We match LEFT at the scale \(\mu=m_{c}\) to the HB\(\chi\)PT at the scale \(\mu_{\chi}=m_{p}\), below which we perform the running of \(\alpha\) with the one-loop anomalous dimension for leptons and pions [8, 66].
\[\left.\frac{g_{V}(m_{e})}{g_{V}(m_{p})}\right|_{\rm LO} =1.01308, \tag{94a}\] \[\left.\frac{g_{V}(m_{e})}{g_{V}(m_{p})}\right|_{\rm LL} =1.01325,\] (94b) \[\left.\frac{g_{V}(m_{e})}{g_{V}(m_{p})}\right|_{\rm NLL} =1.01330. \tag{94c}\]
At the level of the decay rate, our NLL correction implies an increase of \(1.0\times 10^{-4}\) (roughly half of the final uncertainty of the radiative corrections).
Putting together all the results obtained so far, we evaluate the vector coupling constant \(g_{V}\left(\mu_{\chi}\right)\) in the \(\overline{\rm MS}\) renormalization scheme of \(\chi\)PT at the scale \(\mu_{\chi}=m_{e}\), where \(\mathcal{O}(\alpha^{n})\) loop corrections to the matrix elements of the Lagrangian (1) do not contain large logarithms:
\[g_{V}\left(\mu_{\chi}=m_{e}\right)-1=\left(2.499\pm 0.013\right)\%. \tag{95}\]
In contrast, the vector coupling at fixed order \(g_{V}^{1-{\rm loop}}\) (i.e., without resummation, without \(\alpha\alpha_{s}\) corrections, and taking the value for the electromagnetic coupling constant in the Thomson limit) takes the value
\[g_{V}^{1-{\rm loop}}\left(\mu_{\chi}=m_{e}\right)-1=\left(2.430\pm 0.012\right) \%. \tag{96}\]
In the RGE evolution, the electromagnetic and \(\alpha\alpha_{s}\) effects contribute with opposite signs, resulting in a net increase of \(g_{V}\) at the level of \(0.07\%\).
For the uncertainty estimate, we add the following dominant sources in quadrature:
* \(0.012\%\): the hadronic error for Regge, resonance, and \(\pi N\) contributions from Ref. [45] is added in quadrature to the uncertainty propagated from the lepton-nucleon experimental data for the elastic contribution.
* \(0.004\%\): the higher-order \(\alpha\alpha_{s}^{2}\) uncertainty is estimated by including the known terms of \({\cal O}(\alpha_{s}^{2})\)[1, 2, 3, 38] in the pQCD correction \(\Delta(\mu_{0}^{2})\) that controls the DIS region of \(\Box_{\gamma W}^{V}\) in Eq. (85). In our approach, this DIS contribution maps onto the \(\alpha\alpha_{s}^{2}\) anomalous dimension for the Wilson coefficient \(C_{\beta}(\mu)\) in LEFT.
* \(0.003\%\): the higher-order \(\chi\)PT uncertainty is estimated by assuming the natural size for unaccounted corrections, i.e., \(\frac{\alpha}{\pi}\frac{m_{\pi}^{2}}{16\pi^{2}F_{\pi}^{2}}\).
All other perturbative and parametric sources of uncertainty are at the level of \(0.001\%\) or below.
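A one-line check (ours) confirms that the quoted sources combine in quadrature to the total uncertainty in Eq. (95):

```python
# 0.012% (hadronic) + 0.004% (alpha*alpha_s^2) + 0.003% (chiPT) in quadrature
import math
print(math.hypot(0.012, 0.004, 0.003))   # ~0.013, the uncertainty quoted in Eq. (95)
```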
We conclude this section by noting that the effective coupling \(g_{V}(\mu_{\chi}\approx m_{e})\) captures the "inner" corrections to one-body weak transitions through NLL, i.e., up to and including terms of order \(\alpha^{2}L^{2}\) and \(\alpha^{2}L\) (where \(L\) indicates large logarithms of \(M_{Z}/m_{N}\) and \(m_{N}/m_{e}\)), with residual uncertainty at \({\cal O}(\alpha^{2})\) due to finite terms in two-loop diagrams. Importantly, \(g_{V}\) controls both neutron decay and the one-body contribution to nuclear \(\beta\) decays, in combination with appropriate \(n\to pe\bar{\nu}_{e}\) and \((N,Z)\to(N-1,Z+1)e\bar{\nu}_{e}\) matrix elements computed to the same accuracy. For applications in neutrino and nuclear physics, in Table 1 we provide the coupling constant \(g_{V}\) for a few values of the renormalization scale up to \(50\) MeV.
## 6 Corrections to neutron decay and impact on \(V_{ud}\)
We can now use the \(\not{\pi}\)EFT Lagrangian in Eq. (1) with \(g_{V}(\mu_{\chi}=m_{e})\) from Eq. (89) to compute the neutron decay rate including radiative corrections. The final ingredient is the square modulus of the \(n\to pe\bar{\nu}_{e}\) and \(n\to pe\bar{\nu}_{e}\gamma\) matrix elements in HBChPT, evaluated at \(\mu_{\chi}\sim m_{e}\). To match the accuracy achieved in \(g_{V}(m_{e})\), since \(\ln(\mu_{\chi}/m_{e})\sim{\cal O}(1)\), we will need the matrix elements to \({\cal O}(\alpha)\) and will ignore terms of \({\cal O}(\alpha^{2})\) and higher. The only exceptions are "Coulomb"-enhanced terms scaling as \((\pi\alpha/\beta)^{n}\) and \(\alpha/\pi(\pi\alpha/\beta)^{n}\), where \(\beta\equiv p_{e}/E_{e}\), which are parametrically large, diverge for \(\beta\to 0\), and can be resummed in the nonrelativistic Fermi function.
### "Long distance" electromagnetic corrections and differential decay rate
After including the contributions from both virtual and real photons [44, 46] as well as recoil corrections [46, 53], the differential decay rate \({\rm d}\Gamma_{n}\) for unpolarized neutrons takes the form [53, 33]
\[\frac{{\rm d}\Gamma_{n}}{{\rm d}E_{e}}=\frac{G_{F}^{2}\left|V_{ud}\right|^{2 }}{(2\pi)^{5}}\left(1+3\lambda^{2}\right)\ p_{e}E_{e}(E_{0}-E_{e})^{2}\left[g _{V}(\mu_{\chi})\right]^{2}\ \Big{(}1+\tilde{\delta}_{\rm RC}(E_{e},\mu_{\chi})\Big{)} \Big{(}1+\delta_{\rm recoil}(E_{e})\Big{)}, \tag{97}\]
Table 1: The coupling constant \(g_{V}\) is presented for a few values of the renormalization scale \(\mu_{\chi}\).

| \(\mu_{\chi}\), MeV | 1 | 5 | 10 | 20 | 30 | 50 |
| --- | --- | --- | --- | --- | --- | --- |
| \(g_{V}-1\), \% | 2.379 | 2.090 | 1.966 | 1.842 | 1.770 | 1.678 |
where \(E_{0}=(m_{n}^{2}-m_{p}^{2}+m_{e}^{2})/(2m_{n})\) is the electron endpoint energy and \(\lambda\equiv g_{A}/g_{V}\) is the ratio of effective axial and vector couplings in the low-energy Lagrangian (1). The ratio \(\lambda=\lambda^{\text{QCD}}(1+\delta_{\text{RC}}^{(\lambda)})\) is affected by a \(\mu_{\chi}\)-independent electromagnetic correction \(\delta_{\text{RC}}^{(\lambda)}\) parameterized in terms of calculable pion loops and certain chiral LECs (see Ref. [44]). \(\lambda\) itself can be extracted from beta decay correlation experiments, so that we do not need to know \(\delta_{\text{RC}}^{(\lambda)}\) for the purpose of studying total decay rates and the extraction of \(V_{ud}\). \(\delta_{\text{recoil}}(E_{e})\) collects recoil corrections that can be found in Ref. [46]. They are usually factorized since the impact of the product of radiative times recoil corrections is estimated to be well below \(10^{-4}\). Finally, \(\tilde{\delta}_{\text{RC}}(E_{e})\) represents the electromagnetic corrections arising from the matrix element squared. To \(\mathcal{O}(\alpha)\), one finds
\[\tilde{\delta}_{\text{RC}}(E_{e},\mu_{\chi})=\frac{\alpha\left(\mu_{\chi} \right)}{2\pi}\left(\frac{2\pi^{2}}{\beta}+\frac{3}{2}\ln\frac{\mu_{\chi}^{2} }{m_{e}^{2}}+\frac{5}{4}+\hat{g}\left(E_{e},E_{0}\right)\right), \tag{98}\]
where \(\hat{g}(E_{e},E_{0})\) is a "subtracted" Sirlin function
\[\hat{g}\left(E_{e},E_{0}\right)=g\left(E_{e},E_{0}\right)-\frac{3}{2}\ln\frac {m_{N}^{2}}{m_{e}^{2}}, \tag{99}\]
defined in terms of the Sirlin function \(g\left(E_{e},E_{0}\right)\) of Ref. [33]. \(\hat{g}\left(E_{e},E_{0}\right)\) arises naturally in the EFT calculation and does not contain any large logarithm of \(m_{N}/m_{e}\).
The corrections proportional to \(\pi\alpha/\beta\) in Eq. (98) are enhanced by a factor of \(\pi^{2}\) compared to the naive scaling of loop corrections, and are numerically dominant even for \(\beta\sim\mathcal{O}(1)\). The leading terms in the series in \(\pi\alpha/\beta\) arise from the momentum regions of loop integrals in which the photon momentum has potential scaling, \(k_{0}\sim m_{e}\beta^{2}\ll|\vec{k}|\sim m_{e}\beta\), and they can be identified with nonrelativistic EFT methods [104, 105, 106, 107]. Their resummation leads to the nonrelativistic Fermi function \(F_{NR}(\beta)\)[108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118]
\[F_{NR}\left(\beta\right)=\frac{2\pi\alpha}{\beta}\frac{1}{1-e^{-\frac{2\pi \alpha}{\beta}}}\approx 1+\frac{\pi\alpha}{\beta}+\frac{\pi^{2}\alpha^{2}}{3 \beta^{2}}-\frac{\pi^{4}\alpha^{4}}{45\beta^{4}}+..., \tag{100}\]
which we include in the matrix element squared as
\[1+\tilde{\delta}_{\text{RC}}(E_{e},\mu_{\chi}) =F_{NR}(\beta)+\frac{\alpha\left(\mu_{\chi}\right)}{2\pi}\left( \frac{3}{2}\ln\frac{\mu_{\chi}^{2}}{m_{e}^{2}}+\frac{5}{4}+\hat{g}\left(E_{e}, E_{0}\right)\right)\] \[\longrightarrow F_{NR}(\beta)\Big{(}1+\delta_{\text{RC}}(E_{e},\mu_{ \chi})\Big{)}+\mathcal{O}\left(\alpha^{2}\right), \tag{101}\]
where
\[\delta_{\text{RC}}(E_{e},\mu_{\chi})=\frac{\alpha\left(\mu_{\chi}\right)}{2 \pi}\left(\frac{3}{2}\ln\frac{\mu_{\chi}^{2}}{m_{e}^{2}}+\frac{5}{4}+\hat{g} \left(E_{e},E_{0}\right)\right). \tag{102}\]
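For illustration, the expansion in Eq. (100) can be checked numerically; the short sketch below (ours) compares the exact \(F_{NR}\) with its truncated series for representative electron velocities.

```python
# Illustrative check of Eq. (100): exact F_NR vs. its truncated expansion.
import math

ALPHA = 1 / 137.036   # fine-structure constant (Thomson value, assumption)

def fermi_nr(beta):
    """Nonrelativistic Fermi function F_NR(beta), Eq. (100)."""
    x = 2 * math.pi * ALPHA / beta
    return x / (1 - math.exp(-x))

for beta in (0.2, 0.5, 0.8):              # typical velocities in neutron decay
    y = math.pi * ALPHA / beta            # expansion parameter pi*alpha/beta
    series = 1 + y + y**2 / 3 - y**4 / 45
    print(beta, fermi_nr(beta), series)   # the two agree well beyond O(y^2)
```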
As we discuss in Appendix B, the factorization ansatz in Eq. (101) captures all numerically-enhanced leading and subleading terms in \(1/\beta\), and reproduces similar results for the production of two heavy quarks at threshold, derived with nonrelativistic QCD and potential nonrelativistic QCD [104, 105, 106, 107, 119, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122]. At \(\mathcal{O}(\alpha^{2})\), Eq. (101) gives
\[F_{NR}(\beta)\Big{(}1+\delta_{\text{RC}}(E_{e},\mu_{\chi})\Big{)}=F_{NR}(\beta)- \frac{11}{4}\frac{\alpha^{2}}{\beta}+\frac{\left(E_{0}-m_{e}\right)^{2}}{12m_{ e}^{2}}\frac{\alpha^{2}}{\beta}+\delta_{\text{RC}}(E_{e},\mu_{\chi})+\mathcal{O} \left(\alpha^{2}\right). \tag{103}\]
Indeed, the first cross term \(-(11/4)\alpha^{2}/\beta\) corresponds to the matching coefficient of heavy-light to heavy-heavy current [123] in the \(\overline{\text{MS}}_{\chi}\) renormalization scheme. The second cross term \(\left(\alpha^{2}/\beta\right)\left(E_{0}-m_{e}\right)^{2}/\left(12m_{e}^{2}\right)\) comes from the product of the Fermi function with real radiation. These terms are beyond the accuracy of our calculation and can be booked as \(\mathcal{O}(\alpha^{2}\beta^{3})\) in the nonrelativistic limit. In the case of neutron decay, this term provides a negligible shift of \(1.6\times 10^{-5}\) to the decay rate.
We thus arrive at our final form for the differential decay rate:
\[\frac{\mathrm{d}\Gamma_{n}}{\mathrm{d}E_{e}}=\frac{G_{F}^{2}\,|V_{ud}|^{2}}{(2\pi )^{5}}\,\big{(}1+3\lambda^{2}\big{)}\,\,p_{e}E_{e}(E_{0}\!-\!E_{e})^{2}\,\,\,[g_ {V}(\mu_{\chi})]^{2}\,\,F_{NR}(\beta)\bigg{(}1\!+\!\delta_{\mathrm{RC}}(E_{e},\mu_{\chi})\bigg{)}\bigg{(}1\!+\!\delta_{\mathrm{recoil}}(E_{e})\bigg{)}. \tag{104}\]
Compared to state-of-the-art analyses of neutron decay in the literature (see e.g. Ref. [38]), our result (104) amounts to replacing the relativistic Fermi function [109, 110, 124, 125, 126, 127, 53, 127] with the nonrelativistic one, \(F_{0}\to F_{NR}\). While we arrived at this result by constructing the relevant terms of the amplitude in the EFT framework, one could also argue for this replacement along the following lines. First, recall that the leading corrections to the phase space coming from the distortion of the electron wavefunction in the Coulomb field of the proton is usually captured by the function [53]
\[F_{0}(\beta)=\frac{2}{1+\gamma}F(\beta)=4(2E_{e}\beta R)^{2(\gamma-1)}e^{\pi y }\frac{|\Gamma(\gamma+iy)|^{2}}{(\Gamma(1+2\gamma))^{2}},\quad y=\frac{\alpha} {\beta},\quad\gamma=\sqrt{1-\alpha^{2}}. \tag{105}\]
This form is obtained by solving the Dirac equation for an electron moving in the charge distribution of a uniformly charged sphere of radius \(R\)[53], but corresponds to a rescaling of the solution of the Dirac equation for a point-like proton, \(F(\beta)\), evaluated not at the origin, where the wavefunction diverges logarithmically, but at the "nucleon radius" \(R\). \(R\) corresponds to a mass scale much larger than \(m_{e}\), and effectively acts as a UV regulator. So we see that while \(F_{0}(\beta)\) coincides with \(F_{NR}(\beta)\) at one-loop level, \(F_{0}\) includes a dependence on the UV regulator via the logarithms of \(R\) that first appear at \(\mathcal{O}(\alpha^{2})\). Expanding \(F_{0}\) in series of \(\alpha\), one obtains
\[F_{0}(\beta)=F_{NR}\left(\beta\right)\left[1-\alpha^{2}\left(\gamma_{E}-3+ \ln(2E_{e}R\beta)\right)+\mathcal{O}(\alpha^{4})\right]. \tag{106}\]
The dependence on the UV regulator \(R\sim 1/\mu\) does not match the \(\mu\)-dependence of \(g_{V}(\mu)\) in the \(\overline{\mathrm{MS}}_{\chi}\) scheme presented so far. In dimensional regularization, indeed, the \(\ln R\) term in Eq. (106) corresponds to a UV singularity that appears in the first two diagrams in Fig. 3, when we consider only the contribution arising from picking the two nucleon poles. This is only one piece of the full anomalous dimension \(\tilde{\gamma}_{1}\). In order not to double-count large logarithms, one should set the logarithmic term in \(F_{0}\) to zero when using the RGEs to evaluate the large logarithms as we do here. The remaining \(\mathcal{O}(\alpha^{2})\) terms in Eq. (106) are incomplete and beyond the accuracy of our calculation, which allows us to drop them and replace the relativistic Fermi function \(F_{0}\) by its nonrelativistic counterpart \(F_{NR}\).
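The statement in Eq. (106) is straightforward to verify numerically; the sketch below (ours) implements Eqs. (105), (100), and (106), assuming \(R=1\) fm and using SciPy's complex-valued gamma function.

```python
# Illustrative check of Eq. (106): F_0 of Eq. (105) vs. F_NR times the O(alpha^2)
# regulator correction. Assumptions: R = 1 fm, alpha at the Thomson value.
import math
import numpy as np
from scipy.special import gamma as cgamma

ALPHA = 1 / 137.036
R = 1.0 / 197.327          # 1 fm expressed in MeV^-1 (hbar*c = 197.327 MeV fm)
ME = 0.511                 # electron mass, MeV

def f0(beta, Ee):
    """Relativistic Fermi function, Eq. (105)."""
    y, g = ALPHA / beta, math.sqrt(1 - ALPHA**2)
    pref = 4 * (2 * Ee * beta * R) ** (2 * (g - 1)) * math.exp(math.pi * y)
    return pref * abs(cgamma(g + 1j * y)) ** 2 / cgamma(1 + 2 * g) ** 2

def f_nr(beta):
    x = 2 * math.pi * ALPHA / beta
    return x / (1 - math.exp(-x))

beta = 0.6
Ee = ME / math.sqrt(1 - beta**2)
rhs = f_nr(beta) * (1 - ALPHA**2 * (np.euler_gamma - 3 + math.log(2 * Ee * R * beta)))
print(f0(beta, Ee), rhs)   # equal up to O(alpha^4), as stated in Eq. (106)
```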
Figure 3: HBChPT diagrams contributing to the anomalous dimension of \(g_{V}\) and to \(\tilde{\delta}_{\mathrm{RC}}\) at two loop. Only the first two diagrams give rise to terms in the \(\tilde{\gamma}_{1}\) enhanced by \(\pi^{2}\)[100]. These diagrams also give rise to the leading \(\alpha^{2}\pi^{2}/\beta^{2}\) behavior captured by the nonrelativistic Fermi function.
### Total decay rate and extraction of \(V_{ud}\)
Upon performing the integration over \(E_{e}\) in Eq. (104), the decay rate can be written as
\[\Gamma_{n}=\frac{G_{F}^{2}|V_{ud}|^{2}m_{e}^{5}}{2\pi^{3}}\,\big{(}1+3\lambda^{2 }\big{)}\cdot f_{0}\cdot\big{(}1+\Delta_{f}\big{)}\cdot\big{(}1+\Delta_{R} \big{)}, \tag{107}\]
where the phase space integral is given by
\[f_{0}=\int_{1}^{x_{0}}w(x,x_{0})\ \mathrm{d}x,\qquad w(x,x_{0})=x\,(x_{0}-x)^{2 }\,\sqrt{x^{2}-1}, \tag{108}\]
with \(x_{0}=E_{0}/m_{e}\) and \(E_{0}=1.292581\) MeV, and takes the value \(f_{0}(x_{0})=1.62989\). Following standard practice [38, 53], in Eq. (107) we have lumped the Coulomb (\(F_{NR}\)) and recoil terms into an effective phase-space correction \(\Delta_{f}\), separating the remaining radiative corrections into \(\Delta_{R}\). In this factorization scheme, the various corrections to the decay rate are defined by
\[f_{0}\,(1+\Delta_{f}) =\int_{1}^{x_{0}}w(x,x_{0})\,F_{NR}\,(\beta(x))\ (1+\delta_{\mathrm{ recoil}}\,(xm_{e}))\ \mathrm{d}x, \tag{109}\] \[1+\Delta_{R} =[g_{V}(\mu_{\chi})]^{2}\left(1+\frac{\int_{1}^{x_{0}}w(x,x_{0}) \,F_{NR}\,(\beta(x))\,(1+\delta_{\mathrm{recoil}}\,(xm_{e}))\ \delta_{\mathrm{RC}}\,(xm_{e},\mu_{\chi})\ \mathrm{d}x}{f_{0}(1+\Delta_{f})}\right), \tag{110}\]
where \(\beta(x)=\sqrt{1-1/x^{2}}\). A few remarks are in order:
* The decay rate in Eq. (107) corresponds to the usual definition adopted in the literature [38], upon identifying \(f\equiv f_{0}(1+\Delta_{f})\). Therefore, the total shift in the decay rate \[\Delta_{\mathrm{TOT}}=-1+(1+\Delta_{f})(1+\Delta_{R}),\] (111) which impacts the extraction of \(V_{ud}\), requires specifying both \(\Delta_{f}\) and \(\Delta_{R}\). The expressions and numerical values of \(\Delta_{f}\) and \(\Delta_{R}\) in our EFT approach differ from the results found in the literature (see Ref. [38] and most recent calculations of \(\Delta_{R}\)[1, 2, 3, 4, 5, 6, 8]). In what follows, when necessary we will discuss the origin of the differences.
* For \(\Delta_{f}\), which encodes Coulomb and recoil corrections, we find \[\Delta_{f}=3.573(5)\%,\] (112) where we estimated the uncertainty to be of the size of the cross term between Coulomb and recoil corrections. The difference from the standard result \(\Delta_{f}=3.608\times 10^{-2}\) [38] is mainly due to the fact that we use the nonrelativistic Fermi function, for the reasons discussed above, while Ref. [38] uses the relativistic Fermi function. We also do not include the corrections induced by modeling the proton as a uniformly charged sphere of radius \(R_{p}\simeq 1\) fm [53]: this is a small effect shifting \(\Delta_{f}\) by \(0.005\%\).
* Up to the accuracy of our calculation, the remaining radiative correction \(\Delta_{R}\) in our framework is given by \[\Delta_{R}=[g_{V}(\mu_{\chi})]^{2}\left(1+\frac{\alpha\,(\mu_{\chi})}{2\pi}\left(\frac{3}{2}\ln\frac{\mu_{\chi}^{2}}{m_{e}^{2}}+\frac{5}{4}+\overline{\hat{g}}\,(E_{0})\right)\right)-1,\] (113) where \(\mu_{\chi}\sim m_{e}\) and \(\overline{\hat{g}}\,(E_{0})=-9.58766\) is obtained by averaging the subtracted Sirlin function \(\hat{g}(E_{e},E_{0})\) over the phase space, according to Eq. (110). At leading order in \(\alpha\), the \(\mu_{\chi}\)-scale dependence in Eq. (113) cancels between the coupling constant \(g_{V}(\mu_{\chi})\) and virtual one-loop contributions, while higher-order perturbative logarithms from virtual diagrams at scales \(\mu_{\chi}\sim m_{e}\) are small.
* To separate hadronic and electroweak contributions to \(g_{V}\left(\mu_{\chi}\right)\), and to make contact with some of the previous literature, we provide the fixed-order result \[\Delta_{R}=2\overline{\square}_{\rm Had}^{V}(\mu_{0})+\frac{\alpha}{2\pi}\left[2\left(1-\frac{\alpha_{s}}{4\pi}\right)\ln\frac{M_{Z}^{2}}{\mu_{0}^{2}}+\frac{3}{2}\ln\frac{\mu_{0}^{2}}{m_{e}^{2}}+\overline{\hat{g}}\left(E_{0}\right)\right].\] (114) In the above relations, the explicit dependence on \(\mu_{0}\) is canceled by the implicit dependence in \(\overline{\square}_{\rm Had}^{V}(\mu_{0})\). The hadronic physics is included in \(\overline{\square}_{\rm Had}^{V}\), while the two logarithms in Eq. (114), which are proportional to the anomalous dimensions, correspond to the ratios between electroweak vs hadronic and hadronic vs beta-decay scales.
* Our numerical result for \(\Delta_{R}\) is \[\Delta_{\rm R}=4.044(27)\%,\] (115) which, apart from the uncertainty coming from \(g_{V}\) discussed in Sect. 5.4, includes a perturbative uncertainty of \(0.005\%\) obtained by varying the scale of the calculation \(\mu_{\chi}\) in the range \(m_{e}^{2}/2\leq\mu_{\chi}^{2}\leq 2m_{e}^{2}\). Our result for \(\Delta_{R}\) is \(0.061\%\) above the most recent evaluation [8] based on Refs. [1, 2, 3, 4, 5, 6]. The sources of this difference are discussed in Section 2. Combining \(\Delta_{f}\) and \(\Delta_{R}\) in the factorization scheme of Eq. (107) we obtain \[\Delta_{\rm TOT}=7.761(27)\%.\] (116) Using the results from Refs. [1, 2, 3, 4, 5, 6, 8], one gets \(\Delta_{\rm TOT}=7.735(27)\%\), about one \(\sigma\) below our result. The difference is due to two competing factors in our analysis: a positive shift of \(+0.061\%\) in \(\Delta_{R}\) and a negative shift of \(-0.035\%\) in \(\Delta_{f}\).
* As a consistency check on the accuracy of the calculation and the size of cross terms (such as recoil \(\times\) electromagnetic corrections), we have performed the phase-space integration in a different scheme that does not assume factorization of \(F_{NR}\) and \(\delta_{\rm recoil}\), defined by \[\Gamma_{n}\to\frac{G_{F}^{2}|V_{ud}|^{2}m_{e}^{5}}{2\pi^{3}}\left(1+3\lambda^ {2}\right)\cdot f_{0}\cdot(1+\Delta_{g_{V}})\cdot\bigg{(}1+\Delta_{\rm recoil} +\Delta_{C}+\Delta_{\rm RC}\bigg{)},\] (117) with \[\Delta_{g_{V}} =[g_{V}(\mu_{\chi})]^{2}-1,\] (118) \[\Delta_{\rm C} =\frac{1}{f_{0}}\,\int_{1}^{x_{0}}w(x,x_{0})\,\left[F_{NR}\left( \beta\left(x\right)\right)-\left(11-\frac{\left(E_{0}-m_{e}\right)^{2}}{3m_{e }^{2}}\right)\frac{\alpha^{2}}{4\beta\left(x\right)}-1\right]\,\,{\rm d}x,\] (119) \[\Delta_{\rm RC} =\frac{1}{f_{0}}\,\int_{1}^{x_{0}}w(x,x_{0})\,\delta_{\rm RC} \left(xm_{e},\mu_{\chi}\right)\,\,{\rm d}x,\] (120) \[\Delta_{\rm recoil} =\frac{1}{f_{0}}\,\int_{1}^{x_{0}}w(x,x_{0})\,\delta_{\rm recoil} \left(xm_{e}\right)\,\,{\rm d}x.\] (121) For the numerical values in this scheme, we find \(\Delta_{g_{V}}=5.060(27)\%\), \(\Delta_{\rm C}=3.375\%\), \(\Delta_{\rm RC}=-0.969\%\), \(\Delta_{\rm recoil}=0.173\%\), leading to \(\Delta_{\rm TOT}=7.770\%\). The latter differs from the factorized result by \(0.009\%\), consistent with its expected size of \({\cal O}(\alpha^{2})\) and the uncertainties quoted above.
Finally, we extract the CKM matrix element \(V_{ud}\) from precise measurements of the neutron lifetime with our updated calculation of radiative corrections and present the results in Section 2.
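The arithmetic behind the numbers quoted in the items above can be cross-checked directly. The following minimal Python sketch reproduces Eq. (113) at \(\mu_{\chi}=m_{e}\), the factorized combination of Eq. (111), and the unfactorized scheme of Eq. (117); the value \(\alpha_{\chi}(m_{e})\approx\alpha_{OS}=1/137.036\) and the use of the rounded central values are assumptions of this check.

```python
import math

g_V = 1.02499          # g_V(m_e) = 1 + 2.499%, from Sect. 5.4 and the Conclusions
alpha = 1 / 137.036    # alpha_chi(m_e) ~ alpha_OS: an assumption of this check
g_bar = -9.58766       # phase-space average of the subtracted Sirlin function

# Eq. (113) at mu_chi = m_e, where the logarithm vanishes:
Delta_R = g_V**2 * (1 + alpha / (2 * math.pi) * (5 / 4 + g_bar)) - 1
print(f"Delta_R    = {100 * Delta_R:.3f}%")   # ~4.043%, vs the quoted 4.044(27)%

# Eq. (111), factorized scheme, with the quoted central values:
Delta_f, Delta_R_q = 0.03573, 0.04044
print(f"Delta_TOT  = {100 * ((1 + Delta_f) * (1 + Delta_R_q) - 1):.3f}%")  # 7.761%

# Eq. (117), unfactorized scheme; note that [g_V]^2 - 1 = 5.060% is consistent:
D_gV, D_C, D_RC, D_rec = 0.05060, 0.03375, -0.00969, 0.00173
print(f"Delta_TOT' = {100 * ((1 + D_gV) * (1 + D_rec + D_C + D_RC) - 1):.3f}%")
# ~7.770%, differing from the factorized result by the quoted 0.009%
```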
### Comments on radiative corrections to nuclear decays
We now comment on the connection to the standard framework for the analysis of superallowed \(0^{+}\to 0^{+}\) transitions, described for example in Ref. [7]. The corrections to nuclear beta decays are combined into the quantity \({\cal F}t\), related to the experimental \(ft\) values as
\[{\cal F}t=ft(1+\delta^{\prime}_{R})(1+\delta_{NS}-\delta_{C})=\frac{K}{2G_{F}^{ 2}|V_{ud}|^{2}(1+\Delta_{R}^{V})}, \tag{122}\]
where \(K\) is a constant and \(\delta_{C}\) is the isospin-symmetry breaking contribution. \(\delta_{NS}\) is the transition-dependent nuclear-structure correction, and \(\Delta_{R}^{V}\) is the transition-independent part of the radiative correction, which is related to the correction to neutron decay via
\[\Delta_{R}^{V}=\Delta_{R}-\frac{\alpha}{2\pi}\bar{g}(E_{0})=\left[g_{V}\left( \mu_{\chi}\right)\right]^{2}\left(1+\frac{\alpha\left(\mu_{\chi}\right)}{2\pi} \left(\frac{3}{2}\ln\frac{\mu_{\chi}^{2}}{m_{N}^{2}}+\frac{5}{4}\right)\right) -1. \tag{123}\]
\(\delta^{\prime}_{R}\) contains the so-called "outer corrections", which depend on the transition but not on the nuclear structure, and corresponds to soft photon emissions from point-like nuclei. \(\delta^{\prime}_{R}\) reduces to the Sirlin function at \({\cal O}(\alpha)\) and includes a set of \({\cal O}(Z\alpha^{2})\) and \({\cal O}(Z^{2}\alpha^{3})\) corrections [54, 55, 128, 129, 130]. In addition, it contains the leading-logarithm renormalization group evolution from \(m_{N}\) to \(m_{e}\), using the RGE kernel derived in Ref. [38] and discussed in Ref. [131]. Therefore, the standard breakdown of radiative corrections corresponds to evaluating the coupling \(g_{V}\) at a scale \(\mu_{\chi}\sim\Lambda_{\chi}\sim m_{N}\) in Eq. (123), and then lumping the leading-logarithm RG evolution and the matrix element in \(\delta^{\prime}_{R}\), namely
\[\Delta_{R}^{V}\big{|}_{\rm Traditional}=\left[g_{V}\left(m_{N}\right)\right]^{2} \left(1+\frac{5\alpha(m_{N})}{8\pi}\right)-1=2.471(25)\%, \tag{124}\]
which agrees with the result compiled in Ref. [8].
From an EFT point of view aiming at describing nuclei starting from nucleon degrees of freedom, it is more natural to evolve the single-nucleon (and possibly two-nucleon, three-nucleon \(\ldots\)) coupling \(g_{V}\) all the way down to the scale \(\mu_{\chi}=m_{e}\), and only leave the evaluation of the fixed-order matrix element in \(\delta^{\prime}_{R}\). This can be achieved by defining the universal correction \(\Delta_{R}^{V}|_{\rm EFT}\):
\[\Delta_{R}^{V}\big{|}_{\rm EFT}=\left[g_{V}\left(m_{e}\right)\right]^{2} \left(1+\frac{5\alpha\left(m_{e}\right)}{8\pi}\right)-1, \tag{125}\]
and appropriately redefining the Fermi function, the outer correction \(\delta^{\prime}_{R}\) and the nuclear correction \(\delta_{NS}\) in such a way that they do not contain large logarithms. This requires a new EFT analysis of both \(\delta^{\prime}_{R}\) and \(\delta_{NS}\).
## 7 Conclusions and Outlook
In this paper, we developed a systematically-improvable top-down effective field theory framework for radiative corrections to neutron \(\beta\) decay and low-energy (anti)neutrino-nucleon scattering. As a first step, we match the Standard Model onto the LEFT with effective four-fermion operators at the electroweak scale. We resum leading and next-to-leading large electroweak logarithms to all orders by evolving the semileptonic coupling constant in the LEFT from the electroweak scale to the GeV scale according to the LEFT RGEs. Next, we perform the matching to the heavy-baryon chiral perturbation theory at the hadronic scale and express the HBChPT low-energy constants in terms of non-perturbative correlation functions of quark currents. To avoid large logarithms in the evaluation of the matrix elements, we resum leading and subleading logarithms between the hadronic scale and electron-mass scale according to the RGEs in HBChPT. In this framework, all contributions from physics above the scale of the electron mass
play the role of short-distance effects, which are captured by the Wilson coefficient \(g_{V}\). We compare our framework to the traditional current-algebra approach and find agreement at the one-loop level. Contrary to the traditional approach, we employ dimensional regularization with modified minimal subtraction (\(\overline{\text{MS}}\)) and specify the scale and scheme dependence in all steps of the calculation, allowing us to consistently include the next-to-leading logarithms and their resummation. In our approach, the so-called DIS region of the \(\gamma W\) box is mapped onto a contribution to the Wilson coefficient in LEFT.
In our new EFT framework, we determined the low-energy vector coupling constant \(g_{V}(m_{e})-1=(2.499\pm 0.012)\%\), which controls the neutron decay rate as well as low-energy (anti)neutrino-nucleon scattering, and provides the basis for one-body contributions to nuclear decays. We also extracted the CKM matrix element \(V_{ud}\) from neutron decay measurements with our new values for the radiative corrections. An updated value \(\left|V_{ud}\right|=0.97402(42)\) based on the most precise determinations of the neutron lifetime and axial-vector ratio is smaller than previous results. The difference with respect to the previous analyses originates from the consistent inclusion of the next-to-leading logarithms and Coulomb corrections within our framework.
The effective field theory approach to radiative corrections in weak processes advocated in this paper can be extended to the analysis of the axial-vector coupling constant \(g_{A}\), which is a natural next step that will be presented in future work. The developed EFT approach can straightforwardly be applied to precise first-principles cross-section calculations in low-energy (anti)neutrino-nucleon scattering and can be extended to describe neutral-current processes with nucleons at low energies. The EFT framework can also be generalized to address radiative corrections to nuclear decays. In fact, one of the advantages of EFT is that the effective couplings \(g_{V}\) and \(g_{A}\) already determine the one-body inner corrections to nuclear decays at \(\mathcal{O}(G_{F}\alpha)\). Consequently, matrix elements of the weak Lagrangian of Eq. (1) should be computed to \(\mathcal{O}\left(\alpha\right)\) in the low-energy nuclear many-body theory. In this approach, the so-called "nuclear \(\gamma W\) box" arises from contributions at scales smaller than or equal to the Fermi momentum \(k_{F}\), which are calculable in the nuclear EFT. Short-range physics is captured by \(g_{V}\), and, potentially, by two-nucleon and/or few-nucleon weak operators in HBChPT, whose contributions at a given order in \(\epsilon_{\chi}\) can be estimated by the power counting in chiral EFT. The full analysis of radiative corrections to nuclear beta decay will require the development of the EFT framework for few-nucleon systems to \(\mathcal{O}(G_{F}\alpha\epsilon_{\chi}^{n})\).
## Acknowledgments
We thank Chien-Yeah Seng for providing us with hadronic inputs in the evaluation of the \(\gamma W\) box diagram and for explaining the details of his calculations, Andrzej Czarnecki and Bastian Kubis for useful correspondence, Ryan Plestid and Martin Hoferichter for useful discussions at the INT Program INT-23-1B, Richard Hill for useful discussions and validations. We thank Jordy de Vries, Mikhail Gorchtein, Leendert Hayen, Duff Neill, Alessandro Vicini, Andreas von Manteuffel, Andre Walker-Loud for useful discussions and comments on the manuscript. V.C. and W.D. acknowledge support by the U.S. DOE under Grant No. DE-FG02-00ER41132. This work is supported by the US Department of Energy through the Los Alamos National Laboratory and by LANL's Laboratory Directed Research and Development (LDRD/PRD) program under projects 20210968PRD4 and 20210190ER. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). We acknowledge support from the DOE Topical Collaboration "Nuclear Theory for New Physics", award No. DE-SC0023663. FeynCalc [132, 133], LoopTools [134], and Mathematica [135] were extremely useful in this work.
## Appendix A Electromagnetic fine-structure constant in LEFT and \(\chi\)Pt
In this appendix, we discuss the definition we adopt for the running fine-structure constant \(\alpha(\mu)\) used in the LEFT and \(\alpha_{\chi}(\mu_{\chi})\) in \(\chi\)PT.
In any theory, including LEFT and \(\chi\)PT, charge renormalization is studied in connection with the photon self-energy tensor \(\Pi^{\mu\nu}(q^{2})\) and the vacuum polarization function \(\Pi(q^{2})\) defined by [136]
\[\Pi^{\mu\nu}(q)=\left(g^{\mu\nu}q^{2}-q^{\mu}q^{\nu}\right)\Pi(q^{2}). \tag{126}\]
Including resummed self-energy corrections, the amplitude for scattering of two charged particles is proportional to the physical (renormalization scale and scheme-independent) combination
\[\kappa_{phys}(q^{2})=\frac{\alpha_{R}}{1-\Pi_{R}(q^{2})}, \tag{127}\]
where the subscript "\(R\)" labels the renormalization scheme, \(\alpha_{R}\) denotes the renormalized fine-structure constant and \(\Pi_{R}\) is the corresponding subtracted, UV-finite vacuum polarization function. For example, in the "on-shell" (\(OS\)) renormalization scheme the renormalized vacuum polarization function is defined by \(\Pi_{OS}(q^{2})=\Pi(q^{2})-\Pi(0)\equiv\Delta\alpha(q^{2})\). In this scheme, using observables at \(q^{2}\to 0\), one extracts \(\alpha_{OS}=1/137.036\). This scheme can be implemented in any field theory, including \(\chi\)PT, LEFT, and the full Standard Model.
Importantly, Eq. (127) allows one to relate \(\alpha_{R}\) (in any scheme and at any renormalization scale) to \(\alpha_{OS}\), in terms of \(\Pi_{R}(q^{2}=0)\). Eq. (127) can also be used to relate the electromagnetic couplings defined in _any_ two renormalization schemes and even in two different EFTs.
### Charge renormalization in LEFT
Throughout this paper, we use the notation \(\alpha(\mu)\) to indicate the electromagnetic coupling in LEFT, defined in the modified minimal subtraction scheme (\(\overline{\rm MS}\)).
At any value of \(\mu<M_{W,Z}\), where the LEFT is applicable, \(\alpha(\mu)\) can be defined by its relation to \(\alpha_{OS}\) via Eq. (127), leading to
\[\alpha_{OS}=\frac{\alpha(\mu)}{1-\Pi_{\overline{\rm MS}}(0)}=\frac{\alpha(\mu)}{1-\Pi_{\overline{\rm MS}}(\tilde{\mu}^{2})+\Pi_{OS}(\tilde{\mu}^{2})}, \tag{128a}\]

where in the second equality we have expressed \(\Pi_{\overline{\rm MS}}(0)\) in terms of \(\Pi_{\overline{\rm MS}}(\tilde{\mu}^{2})\) and \(\Pi_{OS}(\tilde{\mu}^{2})\), at the arbitrary scale \(\tilde{\mu}\gg\Lambda_{QCD}\). In LEFT, the vacuum polarization receives contributions from charged fermions only. The contribution of charged leptons to both \(\Pi_{\overline{\rm MS}}(\tilde{\mu}^{2})\) and \(\Pi_{OS}(\tilde{\mu}^{2})=\Pi(\tilde{\mu}^{2})-\Pi(0)\) can be computed in perturbation theory. For quarks, the calculation of \(\Pi_{\overline{\rm MS}}(\tilde{\mu}^{2})\) can be carried out in perturbation theory because \(\tilde{\mu}^{2}\gg\Lambda_{\rm QCD}^{2}\), with each quark flavor of charge \(Q_{q}\) contributing (to zeroth order in the QCD coupling \(\alpha_{s}\))

\[\Pi_{\overline{\rm MS}}^{(q)}(\tilde{\mu}^{2}\gg\Lambda_{\rm QCD}^{2})=\frac{4}{3}N_{C}Q_{q}^{2}\frac{\alpha}{4\pi}\left[\ln\frac{\tilde{\mu}^{2}}{\mu^{2}}-\frac{5}{3}\right]. \tag{128b}\]
The non-perturbative contributions are encoded in \(\Pi_{OS}(\tilde{\mu}^{2})\) and can be evaluated via a dispersion relation
\[\Pi_{OS}(\tilde{\mu}^{2})=\Delta\alpha(\tilde{\mu}^{2})=-\frac{\alpha}{3\pi} \,\tilde{\mu}^{2}\,{\rm Re}\int_{4m_{\pi}^{2}}^{\infty}\,{\rm d}s\frac{R(s)}{ s(s-\tilde{\mu}^{2}+i0^{+})}, \tag{128c}\]
where \(R(s)=\sigma_{e^{+}e^{-}\to{\rm hadrons}}(s)/\sigma_{e^{+}e^{-}\to\mu^{+}\mu^{-} }(s)\), see for example Ref. [137].
The scheme matching conditions between the on-shell and \(\overline{\rm MS}\) couplings given in Eqs. (128) are conceptually clean and could be implemented at any value of \(\mu<M_{W,Z}\). They are, however, not extremely practical, because the dispersive integral \(\Delta\alpha(\tilde{\mu}^{2})\) is usually given in the literature only at \(\tilde{\mu}=M_{Z}\)[56, 137]. To circumvent this issue, we define the LEFT \(\overline{\rm MS}\) coupling \(\alpha(\mu)\) by relating it to the \(\overline{\rm MS}\) fine-structure constant in the full Standard Model with five quark flavors, denoted by \(\hat{\alpha}^{(5)}(\mu)\) in the PDG review on the electroweak theory [56]. \(\hat{\alpha}^{(5)}(M_{Z})\) is related to \(\alpha_{OS}\) by an expression which is analogous to (128),
with \(\mu=\tilde{\mu}=M_{Z}\), up to an additional contribution to charge renormalization due to the \(W\) boson [138]. Taking this into account, we find at \(\mu=\mu_{SM}\sim M_{W}\),
\[\frac{1}{\alpha(\mu_{SM})}=\frac{1}{\hat{\alpha}^{(5)}(\mu_{SM})}-\frac{1}{6\pi }+\frac{7}{2\pi}\ln\frac{M_{W}}{\mu_{SM}}. \tag{129}\]
The numerical value \(\hat{\alpha}^{(5)}(M_{Z})^{-1}=127.951(9)\)[56] implies \(\hat{\alpha}^{(5)}(M_{W})^{-1}=127.989(9)\) and \(\alpha(M_{W})^{-1}=127.936(9)\). We use the latter value as initial condition for the RGE
\[\mu\frac{\mathrm{d}\alpha(\mu)}{\mathrm{d}\mu} =-\frac{\beta_{0}(\mu)}{2\pi}\alpha^{2}(\mu)+\mathcal{O}(\alpha^{ 3}), \tag{130}\] \[\beta_{0}(\mu) =-\frac{4}{3}\tilde{n}(\mu), \tilde{n}(\mu) =\sum_{f}Q_{f}^{2}n_{f}\theta(\mu-m_{f}), \tag{131}\]
where \(Q_{f}\) denotes the fermion charge, \(n_{f}\) is the multiplicity (\(n_{f}=1\) for charged leptons, \(n_{f}=N_{C}\) for quarks). To the accuracy we work, \(\alpha(\mu)\) can be treated as continuous across heavy-fermion thresholds in LEFT.
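As a minimal numerical sketch of this matching and running (threshold bookkeeping and higher-order terms omitted; the inputs are the values quoted above), Eq. (129) at \(\mu_{SM}=M_{W}\) and the one-loop solution of Eqs. (130)-(131) can be checked as follows:

```python
import math

inv_alpha_hat5_MW = 127.989   # 1/alpha_hat^(5)(M_W), quoted above

# Eq. (129) at mu_SM = M_W: the logarithm drops out.
inv_alpha_MW = inv_alpha_hat5_MW - 1 / (6 * math.pi)
print(f"1/alpha(M_W) = {inv_alpha_MW:.3f}")   # 127.936, as quoted

def run_inv_alpha(inv_alpha0: float, mu0: float, mu: float, n_tilde: float) -> float:
    """One-loop solution of Eq. (130) between thresholds:
    1/alpha(mu) = 1/alpha(mu0) + beta0/(2 pi) ln(mu/mu0), beta0 = -(4/3) n_tilde."""
    beta0 = -4.0 / 3.0 * n_tilde
    return inv_alpha0 + beta0 / (2 * math.pi) * math.log(mu / mu0)
```

Iterating `run_inv_alpha` across the heavy-fermion thresholds, with \(\tilde{n}(\mu)\) updated at each mass, reproduces the LEFT running used in the main text.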
### Charge renormalization in \(\chi\)Pt
Below the QCD scale \(\Lambda_{\chi}\sim m_{N}\), we work with (baryon) \(\chi\)PT extended to include dynamical photons and leptons. For illustrative purposes, we consider \(SU(2)\)\(\chi\)PT with both muon and electron as dynamical fields. This is the field content relevant for matching to the LEFT at the scale \(\Lambda_{\chi}\). We add the gauge-kinetic terms for photons and leptons and couple them to mesons and baryons by appropriate shifts to the external field sources that appear in chiral covariant derivatives, see Eq. (27). Upon redefining the photon field \(A_{\mu}\rightarrow(1/e)A_{\mu}\), the electromagnetic coupling \(e\) only appears in the photon kinetic term and the relevant renormalized Lagrangian, written in terms of renormalized coupling \(\hat{e}_{\chi}\) and counterterms \(Z_{A,\chi}-1\), reads
\[\mathcal{L}_{A,\chi} = -\frac{1}{4\hat{e}_{\chi}^{2}}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4 \hat{e}_{\chi}^{2}}F_{\mu\nu}F^{\mu\nu}\left(Z_{A,\chi}-1\right)+..., \tag{132a}\] \[Z_{A,\chi} = 1+8\hat{e}_{\chi}^{2}h_{2}-4\hat{e}_{\chi}^{2}X_{8}. \tag{132b}\]
The low-energy constant (LEC) \(h_{2}\) was introduced in Ref. [49] in the context of \(SU(2)\) meson \(\chi\)PT. \(X_{8}\) was introduced in Ref. [61], which extended \(\chi\)PT to include dynamical leptons, for the study of semileptonic processes. In \(\chi\)PT, the LECs contain a pure counterterm, that subtracts the UV divergences of meson and lepton loops, and a finite, renormalized coupling that encodes contributions from heavy states not included in the EFT. Adopting dimensional regularization in \(d=4-2\varepsilon\) and following Refs. [49, 139], the generic LEC \(\mathcal{C}_{i}\) is written as
\[\mathcal{C}_{i}=\mathcal{C}_{i}^{r}(\mu_{\chi})-\frac{\gamma_{i}}{2}\frac{1}{( 4\pi)^{2}}\left(\frac{1}{\hat{\varepsilon}}+1\right), \tag{133}\]
where \(\gamma_{h_{2}}=1/12\), \(\gamma_{X_{8}}=-4/3\)[61], and \(1/\hat{\varepsilon}=1/\varepsilon-\gamma_{E}+\ln{(4\pi)}\). The renormalized couplings \(\mathcal{C}_{i}^{r}(\mu_{\chi})\) depend on the scale in such a way that, after including loops, the physical amplitudes are \(\mu_{\chi}\)-independent. \(h_{2}\) cancels divergences induced by pseudoscalar meson loops, while \(X_{8}\) cancels divergences produced by loops with the electron and muon.
In the standard \(\chi\)PT scheme defined by Eqs. (132), we can study charge renormalization and vacuum polarization. The renormalized electromagnetic coupling \(\hat{\alpha}_{\chi}\equiv\hat{e}_{\chi}^{2}/(4\pi)\) and \(\Pi_{\hat{\chi}}(q^{2})\) are separately scale-independent. In fact, the subtracted vacuum polarization is given by

\[\Pi_{\hat{\chi}}(q^{2})=\Pi_{\pi}(q^{2},m_{\pi}^{2},\mu_{\chi}^{2})+\Pi_{\ell}(q^{2},m_{e}^{2},\mu_{\chi}^{2})+\Pi_{\ell}(q^{2},m_{\mu}^{2},\mu_{\chi}^{2})+\Pi_{LECs}(\mu_{\chi}^{2}), \tag{134}\]
in terms of the pion loop, lepton loop, and the counterterm contributions
\[\Pi_{\pi}(q^{2},m_{\pi}^{2},\mu_{\chi}^{2}) = \frac{\hat{\alpha}_{\chi}}{4\pi}\left[-\frac{1}{3}\frac{1}{\hat{\varepsilon}}+\int_{0}^{1}\mathrm{d}x\,(1-2x)^{2}\,\ln\left(\frac{m_{\pi}^{2}-q^{2}x(1-x)-i0^{+}}{\mu_{\chi}^{2}}\right)\right], \tag{135a}\]
\[\Pi_{\ell}(q^{2},m_{\ell}^{2},\mu_{\chi}^{2}) = \frac{\hat{\alpha}_{\chi}}{4\pi}\left[-\frac{4}{3}\frac{1}{\hat{\varepsilon}}+8\int_{0}^{1}\mathrm{d}x\,x(1-x)\,\ln\left(\frac{m_{\ell}^{2}-q^{2}x(1-x)-i0^{+}}{\mu_{\chi}^{2}}\right)\right], \tag{135b}\]
\[\Pi_{LECs}(\mu_{\chi}^{2}) = -8\hat{e}_{\chi}^{2}h_{2}^{r}(\mu_{\chi})+4\hat{e}_{\chi}^{2}X_{8}^{r}(\mu_{\chi})+\frac{\hat{\alpha}_{\chi}}{4\pi}\left(\frac{1}{\hat{\varepsilon}}+1\right)\left[\left(\frac{1}{3}\right)_{h_{2}}+\left(\frac{8}{3}\right)_{X_{8}}\right]. \tag{135c}\]
The divergent parts of the LECs, independently presented in Refs. [49, 61], cancel the loop contributions from light charged particles, as expected. The \(\mu_{\chi}\) dependence cancels in the sum of loop and LECs contributions, so that \(\Pi_{\hat{\chi}}(q^{2})\) does not depend on \(\mu_{\chi}\). Moreover, the relation between the known \(\alpha_{OS}\) and \(\hat{\alpha}_{\chi}\) depends on unknown LECs through \(\Pi_{\hat{\chi}}(q^{2}=0)\) in the scheme-matching relation:
\[\alpha_{OS}=\frac{\hat{\alpha}_{\chi}}{1-\Pi_{\hat{\chi}}(0)}. \tag{136}\]
A more convenient renormalization scheme, which more closely resembles the standard minimal subtraction, is achieved by rewriting \(Z_{A,\chi}\) in Eq. (132b) as
\[Z_{A,\chi} = 1+\bar{z}_{A,\chi}(\mu_{\chi})+\delta z_{A,\chi}, \tag{137a}\] \[\bar{z}_{A,\chi}(\mu_{\chi}) = 8\hat{e}_{\chi}^{2}h_{2}^{r}(\mu_{\chi})-4\hat{e}_{\chi}^{2}X_{8}^{r}(\mu_{\chi}), \tag{137b}\]
where \(\delta z_{A,\chi}\) contains the divergent parts of the LECs, proportional to \((1/\hat{\varepsilon}+1)\), and the fine-structure constant is redefined as
\[\alpha_{\chi}(\mu_{\chi})\equiv\frac{\hat{\alpha}_{\chi}}{1+\bar{z}_{A,\chi}( \mu_{\chi})}. \tag{137c}\]
Such a redefinition corresponds to a different choice (scheme) in separating the counterterm Lagrangian. The first of Eqs. (132) now reads (up to higher-order terms in \(\alpha_{\chi}\)):
\[\mathcal{L}_{A,\chi}=-\frac{1}{4e_{\chi}^{2}}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4e _{\chi}^{2}}F_{\mu\nu}F^{\mu\nu}\ \delta z_{A,\chi}+.... \tag{137d}\]
The corresponding finite vacuum polarization is given by
\[\Pi_{\chi}(q^{2})=\ \tilde{\Pi}_{\pi}(q^{2},m_{\pi}^{2},\mu_{\chi}^{2})+\tilde{ \Pi}_{\ell}(q^{2},m_{e}^{2},\mu_{\chi}^{2})+\tilde{\Pi}_{\ell}(q^{2},m_{\mu}^{ 2},\mu_{\chi}^{2}), \tag{137e}\]
where \(\tilde{\Pi}_{\pi,\ell}\) are obtained from \(\Pi_{\pi,\ell}\) by replacing \(\hat{\alpha}_{\chi}\rightarrow\alpha_{\chi}\) and \(1/\hat{\varepsilon}\rightarrow-1\) in (135a) and (135b).
In this new scheme, the beta function for the renormalized coupling \(\alpha_{\chi}(\mu_{\chi})\) is similar to the standard minimal subtraction scheme. The running of \(\alpha_{\chi}(\mu_{\chi})\) is controlled by
\[\mu_{\chi}\frac{\mathrm{d}\alpha_{\chi}(\mu_{\chi})}{\mathrm{d}\mu_{\chi}} = -\frac{\beta_{0}(\mu_{\chi})}{2\pi}\alpha_{\chi}^{2}(\mu_{\chi})+\mathcal{O}(\alpha_{\chi}^{3}), \tag{138}\] \[\beta_{0}(\mu_{\chi}) = -\frac{4}{3}\tilde{n}_{\ell}(\mu_{\chi})-\frac{1}{3}\tilde{n}_{\pi}(\mu_{\chi}),\qquad\tilde{n}_{\ell,\pi}(\mu_{\chi})=\sum_{\ell,\pi}Q_{\ell,\pi}^{2}n_{\ell,\pi}\,\theta(\mu_{\chi}-m_{\ell,\pi}), \tag{139}\]
where the sum over charged leptons (\(\ell\)) includes the electron and the muon.
Another benefit of this renormalization scheme is that the relation to the on-shell coupling does not involve any unknown LECs:
\[\alpha_{OS}=\frac{\alpha_{\chi}}{1-\Pi_{\chi}(0)}, \tag{140}\]
with \(\Pi_{\chi}(0)\) obtained, at one loop, from Eq. (137e). This implies
\[\frac{1}{\alpha_{\chi}(\mu_{\chi})}=\frac{1}{\alpha_{OS}}+\frac{1}{3\pi}\sum_{ \ell=e,\mu}Q_{\ell}^{2}\left(1+\ln\frac{m_{\ell}^{2}}{\mu_{\chi}^{2}}\right) \theta(\mu_{\chi}-m_{\ell})+\frac{1}{12\pi}Q_{\pi}^{2}\left(1+\ln\frac{m_{\pi }^{2}}{\mu_{\chi}^{2}}\right)\theta(\mu_{\chi}-m_{\pi}). \tag{141}\]
The above formula accounts for the discontinuity of \(\alpha_{\chi}(\mu_{\chi})\) at the particle mass thresholds, due to the fact that conventionally in \(\chi\)PT one subtracts \(1/\hat{\varepsilon}+1\) rather than \(1/\hat{\varepsilon}\). In the main text of this manuscript, \(\alpha\) with the argument \(\mu_{\chi}\) always denotes \(\alpha_{\chi}(\mu_{\chi})\). Numerically, using the boundary conditions described above and running \(\alpha(\mu)\) down from \(\mu=M_{Z}\) and \(\alpha_{\chi}(\mu_{\chi})\) up from \(\mu_{\chi}=m_{e}\), we find \(1/\alpha_{\chi}(m_{p})=135.112\) while \(1/\alpha(m_{p})=134.302\).
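As a numerical sketch, Eq. (141) can be evaluated directly at \(\mu_{\chi}=m_{p}\); the PDG masses in GeV are our inputs, and the small difference with the quoted value is consistent with rounding:

```python
import math

alpha_OS = 1 / 137.036
m_p = 0.938272
loops = [  # (mass in GeV, Q^2, prefactor): e and mu loops, then the charged pion
    (0.000511, 1.0, 1 / (3 * math.pi)),
    (0.105658, 1.0, 1 / (3 * math.pi)),
    (0.139570, 1.0, 1 / (12 * math.pi)),
]

inv_alpha_chi = 1 / alpha_OS
for m, Q2, pref in loops:
    if m_p > m:                     # theta(mu_chi - m) in Eq. (141)
        inv_alpha_chi += pref * Q2 * (1 + math.log(m**2 / m_p**2))

print(f"1/alpha_chi(m_p) = {inv_alpha_chi:.3f}")   # ~135.115, vs 135.112 quoted
```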
## Appendix B Factorization of the decay rate in the nonrelativistic limit
In this appendix, we provide a justification of the factorized form for the electromagnetic corrections provided in Eq. (101). This form can be rigorously derived for the nonrelativistic electron, but, even for the relativistic electron, it captures the leading series in \((\alpha\pi/\beta)^{n}\) and \(\alpha/\pi(\alpha\pi/\beta)^{n}\). As these terms are enhanced by a factor of \(\pi^{2}\) with respect to naive two-loop corrections, they are relevant at the level of \(\sim 10^{-4}\), and their estimate requires the evaluation of the diagrams in Fig. 3 and of NNLO real-virtual and real emission diagrams. As we argue below, the first corrections to Eq. (101) are of order \({\cal O}(\alpha^{2})\).
For \(E_{0}-m_{e}\ll m_{e}\), which does not apply to neutron decay but would apply, for example, to triton decay, we could further integrate out the scale of the electron mass, and match \(\not{\pi}\)EFT onto a theory with nonrelativistic electrons (NRQED). In this theory, the charged-current vector operator, with coupling constant \(g_{V}\) in front, matches onto
\[{\cal L}_{\rm NRQED}=-\sqrt{2}G_{F}V_{ud}\,g_{V}C_{\rm NRQED}\,\bar{\psi}_{e}\gamma_{0}{\rm P}_{\rm L}\nu_{e}\,\bar{N}_{v}\tau^{+}N_{v}+{\cal O}(\beta^{2}), \tag{142}\]
where \(\psi_{e}\) denotes a nonrelativistic electron field. The matching coefficient at one loop can be extracted from the matching of heavy-light onto heavy-heavy currents performed in Ref. [123], and, in the \(\overline{\rm MS}_{\chi}\) scheme, it is given by
\[C_{\rm NRQED}(\mu)=1+\frac{\alpha}{2\pi}\left(\frac{3}{4}\ln\frac{\mu^{2}}{m_ {e}^{2}}-\frac{11}{4}\right). \tag{143}\]
The NRQED Lagrangian still contains various photon modes: soft (\(k_{0}\sim|\vec{k}|\sim m_{e}\beta\)), ultrasoft (\(k_{0}\sim|\vec{k}|\sim m_{e}\beta^{2}\)), and potential (\(k_{0}\sim m_{e}\beta^{2}\ll|\vec{k}|\sim m_{e}\beta\)). We can integrate out soft and potential modes by matching onto potential NRQED (pNRQED) [140, 141]. In this EFT, the proton and electron interact via non-local potentials, and via the exchange of ultrasoft photons. In momentum space, the potential is given at leading order by the Coulomb potential,
\[V_{\rm LO}(\vec{q})=-\frac{4\pi\alpha}{\vec{q}^{\,2}}. \tag{144}\]
In principle, the above potential should be evaluated using \(\alpha\) in the Thomson limit, as the electron no longer contributes to the running of \(\alpha\) within pNRQED. However, as we discuss below, the difference with the coupling in HBChPT, \(\alpha(\mu_{\chi})\), leads to very small effects numerically. Corrections to the potential are organized in powers of \(\alpha\) and \(\beta\). NLO (\({\cal O}(\alpha)\) and \({\cal O}(\beta)\)) and N\({}^{2}\)LO (\({\cal O}(\alpha^{2})\), \({\cal O}(\alpha\beta)\), and \({\cal O}(\beta^{2})\)) corrections to the potential have been computed, both for systems of equal mass (\(t\bar{t}\), quarkonium, and positronium), and for systems with different masses (such as the hydrogen atom, or the \(B_{c}\) meson). The results can be found in Refs. [107, 140, 141, 142]. No corrections to the potential appear at NLO. At N\({}^{2}\)LO, we can adapt the result of Refs. [141, 142] to the case of QED, \(C_{F}=1\), \(C_{A}=0\), \(n_{f}=0\). We find
\[V_{\rm N^{2}LO}=+\frac{4\pi\alpha}{m_{e}^{2}}\frac{c_{D}}{8}, \tag{145}\]
where \(c_{D}\) is the Darwin term, \(c_{D}=1\) at the order at which we are working. Finally, the interactions of ultrasoft photons with heavy quarks do not contribute up to N\({}^{3}\)LO [107, 140]. This can be seen, for example, from the fact that one-loop real emission diagrams only contribute at \({\cal O}(\alpha\beta^{2})\), three orders smaller than the leading order.
In the hypothetical case in which the electron emitted in neutron decay would be nonrelativistic, the decay rate could thus be expressed as
\[\frac{\mathrm{d}\Gamma_{n}}{\mathrm{d}E_{e}}=\frac{G_{F}^{2}\left|V_{ud} \right|^{2}}{(2\pi)^{5}}\left(1+3\lambda^{2}\right)\ p_{e}E_{e}(E_{0}-E_{e})^{2 }\left[g_{V}(\mu_{\chi})\right]^{2}\ \left|C_{\mathrm{NRQED}}(\mu)\right|^{2}\left| \mathcal{M}_{\mathrm{pNRQED}}\right|^{2}, \tag{146}\]
with the pNRQED matrix element organized as
\[\left|\mathcal{M}_{\mathrm{pNRQED}}\right|^{2}=\sum_{n}\left(\frac{\alpha\pi} {\beta}\right)^{n}\left\{1,\alpha,\beta,\alpha^{2},\alpha\beta,\beta^{2}, \dots\right\}. \tag{147}\]
From the previous discussion, the LO is provided by the iteration of the Coulomb potential, which leads to the nonrelativistic Fermi function
\[\left|\mathcal{M}_{\mathrm{pNRQED}}^{\mathrm{LO}}\right|^{2}=F_{NR}(\beta). \tag{148}\]
Since there are no NLO potentials and the ultrasoft modes only contribute at N\({}^{3}\)LO, \(\mathcal{M}_{\mathrm{pNRQED}}^{\mathrm{LO}}\) does not receive any \({\cal O}(\alpha)\) corrections. \({\cal O}(\alpha)\) corrections are entirely contained in the matching coefficient \(C_{\mathrm{NRQED}}\). Therefore, the leading \((\alpha\pi/\beta)^{n}\) and the subleading \((\alpha\pi/\beta)^{n}\alpha\) terms are captured by the factorized expression
\[\frac{\mathrm{d}\Gamma_{n}}{\mathrm{d}E_{e}}=\frac{G_{F}^{2}\left|V_{ud} \right|^{2}}{(2\pi)^{5}}\left(1+3\lambda^{2}\right)\ p_{e}E_{e}(E_{0}-E_{e})^ {2}\left[g_{V}(\mu_{\chi})\right]^{2}\left|C_{\mathrm{NRQED}}(\mu)\right|^{2}F _{NR}(\beta). \tag{149}\]
This result was proved in the case of \(t\bar{t}\) production at threshold in Refs. [119, 104, 105, 106].
To make connection with the relativistic expressions, we now notice that
\[(1+\delta_{\mathrm{RC}}(E_{e},\mu_{\chi}))\xrightarrow{\beta\to 0}\left|C_{ \mathrm{NRQED}}(\mu_{\chi})\right|^{2}, \tag{150}\]
since real emission diagrams give a contribution that scales as \((E_{0}-E_{e})^{2}/E_{e}^{2}\sim{\cal O}(\beta^{4})\) and can therefore be neglected.
By expressing the decay rate as
\[\frac{\mathrm{d}\Gamma_{n}}{\mathrm{d}E_{e}}=\frac{G_{F}^{2}\left|V_{ud} \right|^{2}}{(2\pi)^{5}}\left(1+3\lambda^{2}\right)\ p_{e}E_{e}(E_{0}-E_{e})^{ 2}\ \left[g_{V}(\mu_{\chi})\right]^{2}\ F_{NR}(\beta)\bigg{(}1+\delta_{ \mathrm{RC}}(E_{e},\mu_{\chi})\bigg{)}, \tag{151}\]
our expression correctly reproduces the relativistic one-loop result, and, in addition, captures all subleading terms of the form \((\frac{\alpha\pi}{\beta})^{n}\frac{\alpha}{\pi}\). Numerically, the various contributions to \(\Delta_{f}\) (see Eq. (109)) average to be as follows: \(\left(\frac{\alpha^{2}}{\beta},\alpha^{2},\alpha^{2}\ln\beta\right)=(8,5,2)\times 10^{-5}\). To obtain a diagrammatic expansion of the Fermi function \(F_{NR}\) in the electromagnetic coupling constant at the level of the decay rate, we evaluate it with the \(\chi\)PT value \(\alpha(\mu_{\chi})\). We also test the \({\cal O}(\alpha^{2})\) difference induced by computing \(F_{NR}\) using \(\alpha(\mu_{\chi})\) versus \(\alpha\) in the Thomson limit and obtain a result for \(\Delta_{f}\) that is only \(0.002\%\) higher. Therefore, theoretical improvements (i.e., knowledge of the terms \({\cal O}(\alpha^{2}\ln\beta)\)) would not induce much change in our results and uncertainty estimates.
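A short numerical sketch makes the Coulomb enhancement explicit. It assumes the standard nonrelativistic Fermi function \(F_{NR}(\beta)=x/(1-e^{-x})\) with \(x=2\pi\alpha/\beta\) (its explicit form appears earlier in the paper; here it is an assumption of the snippet), which resums the ladder series \((\alpha\pi/\beta)^{n}\) of Eq. (147):

```python
import math

ALPHA = 1 / 137.036

def fermi_nr(beta: float) -> float:
    """Assumed nonrelativistic Fermi function F_NR = x / (1 - exp(-x)),
    with x = 2*pi*alpha/beta; it resums the Coulomb ladder (alpha*pi/beta)^n."""
    x = 2 * math.pi * ALPHA / beta
    return x / (1 - math.exp(-x))

for beta in (0.8, 0.3, 0.1, 0.03):
    lead = math.pi * ALPHA / beta   # first term of the Coulomb series
    print(f"beta={beta:5.2f}  F_NR-1={fermi_nr(beta) - 1:.5f}  "
          f"pi*alpha/beta={lead:.5f}")
# At small beta the full F_NR visibly exceeds the leading pi*alpha/beta term,
# illustrating why the (alpha*pi/beta)^n series must be resummed.
```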
## Appendix C Details on the two-loop anomalous dimensions
As discussed in Sections 3.1 and 5.3, we obtain the two-loop \({\cal O}(\alpha^{2})\) anomalous dimensions, \(\gamma_{1}\) and \(\tilde{\gamma}_{1}\), in the LEFT and \(\chi\)PT by adapting calculations in the literature. For \(\gamma_{1}\) in the LEFT, we use Refs. [68, 72] which consider the two-loop QCD anomalous dimension of a four-quark operator. As each diagram is
given separately, their results can be modified to obtain the two-loop QED diagrams for the operator in Eq. (9). This involves replacing the QCD couplings, multiplicities, and color factors with their QED counterparts. The same procedure also allows us to reproduce the \(\mathcal{O}(\alpha\alpha_{s})\) anomalous dimension \(\gamma_{sc}\). Although Refs. [68, 72] provide results using one particular scheme for the evanescent operators, \(a=-1\), the full \(a\) dependence can be recovered due to the fact that both the \(1/\epsilon^{2}\) and \(1/\epsilon\) coefficients of the diagrams are given, leading to the result in Eq. (14).
To obtain \(\tilde{\gamma}_{1}\), relevant for the RGE in \(\chi\)PT, we use calculations for heavy-light currents in HQET [100]; see also Refs. [101, 102, 103]. These calculations compute the two-loop anomalous dimension in QCD for a heavy-light current, \(\bar{Q}\Gamma q\), where \(Q\) denotes a nonrelativistic field, \(q\) is a relativistic particle, and \(\Gamma\) is an arbitrary Lorentz structure. As we argue below, the graphs in Ref. [100] can be adapted to obtain the relevant diagrams for the anomalous dimension of \(g_{V}\), see Fig. 3.
One way to see that the two cases are related to each other is by rewriting the weak operator in Eq. (1) using Fierz identities,
\[(\bar{p}v^{\mu}n)(\bar{e}_{L}\gamma_{\mu}\nu_{L})=\sum_{i,j}c_{ij}(\bar{\nu}_ {L}^{c}\bar{\Gamma}_{i}n)(\bar{p}\bar{\Gamma}_{j}e^{c})+\mathrm{E}\,, \tag{152}\]
where \(i,j\) run over the Dirac structures, \(c_{ij}\) are coefficients determined by the Fierz relation, and \(\mathrm{E}\) is an evanescent operator. The last bilinear on the right-hand side of Eq. (152) now takes the same form as the heavy-light current, where the proton and the charge-conjugated electron, \(e^{c}\), play the role of the \(Q\) and \(q\) fields. This means the diagrams will take the same form as for the heavy-quark calculation, with the heavy-light vertex \(\Gamma\) replaced by \(\bar{\Gamma}_{j}\), while the neutral neutrino and neutron fields are irrelevant to the calculation.
The final ingredient to show a correspondence is the fact that the loop diagrams do not depend on the Dirac structure of the vertex [103]. Since QED vertices, \(\sim v^{\mu}\), and propagators, \(\sim i/(v\cdot k)\), on the heavy-quark/proton line do not involve any Dirac structure they cannot modify the original vertex. This is less obvious in the part of the diagrams involving the light-quark/electron line as it consists of a string of QED vertices, each of which comes with a propagator. However, one can show that, after performing the loop integrals, all gamma matrices will either be contracted with each other, or with factors of \(v^{\mu}\). This implies that the string of gamma matrices on the electron side also becomes proportional to the identity and leaves the original vertex unchanged. The independence of the gamma structure then allows us to adapt the results of Ref. [100] to obtain the diagrams with insertions of the right-hand side of Eq. (152).10 The relevant replacements again involve replacing the QCD couplings, multiplicities, and color factors with their QED counterparts, where the appearing QED charges are now \(Q_{p}\) and \(Q_{e^{+}}=-Q_{e^{-}}\). The result of this procedure is given in Eq. (88). Consequently, the anomalous dimensions can be obtained by a simple substitution \(C_{F}=1,\ C_{A}=0,\ T_{F}=1\) in the QCD calculation of Ref. [143]. This also allows us to obtain \(\tilde{\gamma}_{2}\) as
Footnote 10: Note that the independence of the gamma structure also means that the evanescent operator cannot contribute to matrix elements in \(d=4\). In addition, one can use the same fact to directly show that insertions of the left-hand side of Eq. (152) are related to the diagrams involving the heavy-light current, without the need for the Fierz rearrangement in Eq. (152).
\[\tilde{\gamma}_{2}=\frac{1}{64}\left[\left(-80\zeta_{4}-36\zeta_{3}+64\zeta_{2 }-\frac{37}{3}\right)+\tilde{n}\left(-\frac{176}{3}\zeta_{3}+\frac{448}{9} \zeta_{2}+\frac{470}{9}\right)+\frac{140}{27}\tilde{n}^{2}\right]. \tag{153}\]
Even though this expression contains terms enhanced by \(\pi^{4}\), the numerical value of \(\tilde{\gamma}_{2}\) is of natural size, \(\tilde{\gamma}_{2}\lesssim 2\) in \(\not{\pi}\)EFT, and its contribution is beyond the required accuracy for neutron \(\beta\) decay.
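As a quick numerical check of this claim, Eq. (153) can be evaluated directly; the identification of \(\tilde{n}\) with the number of active charged leptons, \(\tilde{n}=1,2\), is our assumption here.

```python
import math

zeta2, zeta3, zeta4 = math.pi**2 / 6, 1.2020569031595943, math.pi**4 / 90

def gamma2_tilde(n: float) -> float:
    """Eq. (153) as a function of n_tilde (active charged fermions)."""
    return ((-80 * zeta4 - 36 * zeta3 + 64 * zeta2 - 37 / 3)
            + n * (-176 / 3 * zeta3 + 448 / 9 * zeta2 + 470 / 9)
            + 140 / 27 * n**2) / 64

for n in (1, 2):
    print(f"n_tilde={n}: gamma2_tilde = {gamma2_tilde(n):.3f}")   # ~0.498, ~1.734
# Despite the pi^4-enhanced terms, both values are of natural size, <~ 2.
```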
|
2302.09684 | A robust multiplicity result in a generalized diffusive predator-prey
model | This paper analyzes the generalized spatially heterogeneous diffusive
predator-prey model introduced by the authors in \cite{LGMH20}, whose
interaction terms depend on a saturation coefficient $m(x)\gneq0$. As the
amplitude of the saturation term, measured by $\|m\|_\infty$, blows up to
infinity, the existence of, at least, two coexistence states, is established in
the region of the parameters where the semitrivial positive solution is
linearly stable, regardless of the sizes and the shapes of the remaining function
coefficients in the setting of the model. In some further special cases, an
$S$-shaped component of coexistence states can be constructed, which causes the
existence of, at least, three coexistence states, though this multiplicity
occurs within the parameter regions where the semitrivial positive solution is
linearly unstable. Therefore, these multiplicity results inherit a rather
different nature. | Julián López-Gómez, Eduardo Muñoz-Hernández | 2023-02-19T22:44:06Z | http://arxiv.org/abs/2302.09684v1 | # A robust multiplicity result in a generalized diffusive predator-prey model
###### Abstract
This paper analyzes the generalized spatially heterogeneous diffusive predator-prey model introduced by the authors in [24], whose interaction terms depend on a saturation coefficient \(m(x)\gneq 0\). As the amplitude of the saturation term, measured by \(\|m\|_{\infty}\), blows up to infinity, the existence of, at least, two coexistence states, is established in the region of the parameters where the semitrivial positive solution is linearly stable, regardless of the sizes and the shapes of the remaining function coefficients in the setting of the model. In some further special cases, an \(S\)-shaped component of coexistence states can be constructed, which causes the existence of, at least, three coexistence states, though this multiplicity occurs within the parameter regions where the semitrivial positive solution is linearly unstable. Therefore, these multiplicity results are of a rather different nature.
## 1 Introduction
This paper studies the existence and multiplicity of coexistence states for the generalized spatially heterogeneous predator-prey model
\[\left\{\begin{aligned} \mathfrak{L}_{1}u&=\lambda u-a(x)u^{2}-b(x) \frac{uv}{1+\gamma m(x)u}&\quad\text{in}\;\;\Omega,\\ \mathfrak{L}_{2}v&=\mu v-d(x)v^{2}+c(x)\frac{uv}{1+ \gamma m(x)u}&\quad\text{in}\;\;\Omega,\\ \mathfrak{B}_{1}u&=\mathfrak{B}_{2}v=0& \quad\text{on}\;\;\partial\Omega,\end{aligned}\right. \tag{1.1}\]
where \(\Omega\) is a bounded domain of \(\mathbb{R}^{N}\) with boundary, \(\partial\Omega\), of class \(\mathcal{C}^{2}\), and \(\mathfrak{L}_{\kappa}\), \(\kappa=1,2\), are second order uniformly elliptic operators in \(\Omega\) of the form
\[\mathfrak{L}_{\kappa}:=-\text{div}\,(A_{\kappa}\nabla)+\langle b_{\kappa}, \nabla\rangle+c_{\kappa},\qquad\kappa=1,2, \tag{1.2}\]
where, for every \(\kappa=1,2\),
\[A_{\kappa}=\left(a_{ij}^{\kappa}\right)_{1\leq i,j\leq N}\in\mathscr{M}_{N}^{ \text{sym}}(W^{1,\infty}(\Omega)),\quad b_{\kappa}=(b_{1}^{\kappa},...,b_{N}^ {\kappa})\in(L^{\infty}(\Omega))^{N},\quad c_{\kappa}\in L^{\infty}(\Omega).\]
For a given Banach space \(X\), we are denoting by \(\mathscr{M}_{N}^{\text{sym}}(X)\) the space of the symmetric square matrices of order \(N\) with entries in \(X\), and \(W^{1,\infty}(\Omega)\) stands for the Sobolev space of all bounded and measurable functions in \(\Omega\) with weak derivatives in \(L^{\infty}(\Omega)\). In (1.1), for every \(\kappa=1,2\), \(\mathfrak{B}_{\kappa}\) is a general boundary operator of mixed type such that, for every \(\psi\in\mathcal{C}(\bar{\Omega})\cap\mathcal{C}^{1}(\Omega\cup\Gamma_{1}^{ \kappa})\),
\[\mathfrak{B}_{\kappa}\psi=\left\{\begin{aligned} \psi&\quad\text{on}\;\;\Gamma_{0}^{ \kappa},\\ \partial_{\nu_{\kappa}}\psi+\beta_{\kappa}(x)\psi& \quad\text{on}\;\;\Gamma_{1}^{\kappa},\end{aligned}\right. \tag{1.3}\]
where \(\Gamma_{0}^{\kappa}\) and \(\Gamma_{1}^{\kappa}\) are two closed and open disjoint subsets of \(\partial\Omega\) such that \(\Gamma_{0}^{\kappa}\cup\Gamma_{1}^{\kappa}=\partial\Omega\), and \(\nu_{\kappa}=A_{\kappa}n\) is the co-normal vector field, where \(n\) is the outward unit normal vector field of \(\Omega\). In (1.3), \(\beta_{\kappa}\in\mathcal{C}(\Gamma_{1}^{\kappa})\) is not required to have any special sign. As for the coefficient functions \(a(x)\), \(b(x)\), \(c(x)\), \(d(x)\) and \(m(x)\) in the setting of (1.1), we assume that they are functions in \(\mathcal{C}(\bar{\Omega};\mathbb{R})\) such that \(b\neq 0\), \(c\neq 0\), and
\[a(x)>0,\;\;d(x)>0,\;\;b(x)\geq 0,\;\;c(x)\geq 0,\;\;m(x)\geq 0\quad\text{for all}\;\,x\in\bar{\Omega}. \tag{1.4}\]
In other words, \(a\gg 0\), \(d\gg 0\), \(b\geq 0\), \(c\geq 0\) and \(m\geq 0\). Finally, in (1.1), \(\lambda\), \(\mu\) and \(\gamma>0\) are regarded as real parameters.
Except for the incorporation of the new parameter \(\gamma>0\), this model, in its greatest generality, was introduced by the authors in [24] to establish a homotopy between the classical diffusive Lotka-Volterra predator-prey system, when \(m=0\), and the diffusive Holling-Tanner model introduced by Casal, Eilbeck and Lopez-Gomez [5], where \(m\) is a positive constant. The case when \(m\) is constant has also been analyzed by Du and Lou in [10], [11] and [12], under Dirichlet or Neumann boundary conditions, and by Du and Shi
[13] assuming the existence of a protection zone for the prey. Some pioneering non-spatial models of this type were studied by Freedman [17], May [27] and Hsu [19], among others.
In Population Dynamics, (1.1) represents the interaction in a common habitat, \(\Omega\), between a prey, with population density \(u\), and a predator, with population density \(v\). According to (1.1), in the absence of the other, each species has a logistic growth determined by the relative sizes of \(\lambda\) and \(\mu\) with respect to the thresholds \(\sigma_{0}[\mathfrak{L}_{1}-c_{1},\mathfrak{B}_{1},\Omega]\) and \(\sigma_{0}[\mathfrak{L}_{2}-c_{2},\mathfrak{B}_{2},\Omega]\), respectively. Throughout this paper, for any given second order elliptic operator \(\mathfrak{L}\) in \(\Omega\) and any boundary operator \(\mathfrak{B}\) on \(\partial\Omega\), we denote by \(\sigma_{0}[\mathfrak{L},\mathfrak{B},\Omega]\) the principal eigenvalue of \((\mathfrak{L},\mathfrak{B},\Omega)\) as discussed in [21]. In (1.1), the term \(\gamma m(x)\) measures the saturation effects in \(\Omega\) of the predator in the presence of a high population of prey. More precisely, for every \(x\in\Omega\), \(\gamma m(x)\) measures the predator saturation level at the location \(x\in\Omega\) if \(m(x)>0\), while the saturation effects at \(x\) do not play any role if \(m(x)=0\). By normalizing \(m(x)\) so that \(\max_{x\in\bar{\Omega}}m(x)=1\), \(\gamma\) becomes the maximal intensity of the saturation effects. So, throughout this paper we will assume that
\[\|m\|_{\infty}\equiv\max_{x\in\bar{\Omega}}\,m(x)=1. \tag{1.5}\]
Furthermore, we assume that \(\Omega_{0}:=\operatorname{int}m^{-1}(0)\) is a nice open subset of class \(\mathcal{C}^{2}\) of \(\Omega\) with finitely many connected components and \(\bar{\Omega}_{0}=m^{-1}(0)\subset\Omega\). Thus, (1.1) combines in the same habitat, \(\Omega\), functional responses of Lotka-Volterra type in the components of \(m^{-1}(0)\) together with Holling-Tanner responses in \(m^{-1}(\mathbb{R}_{+})\), where \(\mathbb{R}_{+}:=(0,+\infty)\). As noticed in Sections 3 and 5 of [24], the existence of both functional responses can lead to global effects in the dynamics of the species, regardless of the sizes of the patches where \(m=0\) or \(m>0\). Moreover, the size of the regions where \(m(x)\) or \(b(x)\) degenerate can also affect the global dynamics. Indeed, as shown in Section 4, the larger the support of \(m(x)\), or of \(b^{-1}(0)\), the smaller \(\lambda\) can be taken while (1.1) still admits a coexistence state.
Essentially, this paper is a continuation of [24], where the existence and the uniqueness of coexistence states was established for the generalized problem (1.1), by fixing \(\mu\in\mathbb{R}\) and regarding \(\lambda\in\mathbb{R}\) as a bifurcation parameter. According to Theorem 7.1 of [24], we already know that the one-dimensional counterpart of (1.1) has a unique coexistence state for sufficiently small \(\gamma>0\). The main goal of this paper is to study the dynamics of (1.1) as \(\gamma\uparrow+\infty\). Thus, it is rather natural to perform the change of variables
\[w:=\gamma\,u,\qquad\varepsilon=\frac{1}{\gamma}. \tag{1.6}\]
In these variables, (1.1) can be expressed, equivalently, as
\[\left\{\begin{aligned} &\mathfrak{L}_{1}w=\lambda w-\varepsilon a (x)w^{2}-b(x)\frac{wv}{1+m(x)w}&&\text{in}\;\;\Omega,\\ &\mathfrak{L}_{2}v=\mu v-d(x)v^{2}+\varepsilon c(x)\frac{wv}{1+m( x)w}&&\text{in}\;\;\Omega,\\ &\mathfrak{B}_{1}w=\mathfrak{B}_{2}v=0&&\text{ on}\;\;\partial\Omega.\end{aligned}\right. \tag{1.7}\]
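Indeed, the equivalence is a direct substitution: since \(u=\varepsilon w\) and \(\gamma m(x)u=m(x)w\), the first equation of (1.1) becomes, after dividing by \(\varepsilon\),

\[\mathfrak{L}_{1}(\varepsilon w)=\lambda\varepsilon w-a(x)\varepsilon^{2}w^{2}-b(x)\frac{\varepsilon wv}{1+m(x)w}\quad\Longrightarrow\quad\mathfrak{L}_{1}w=\lambda w-\varepsilon a(x)w^{2}-b(x)\frac{wv}{1+m(x)w},\]

while in the second equation of (1.1) the interaction term transforms as \(c(x)\frac{uv}{1+\gamma m(x)u}=\varepsilon c(x)\frac{wv}{1+m(x)w}\), which yields (1.7).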
According to (1.6), analyzing the dynamics of (1.1) for sufficiently large \(\gamma\) is equivalent to analyzing (1.7) for sufficiently small \(\varepsilon>0\). Thus, it is rather natural to focus attention on (1.7) as a system perturbing from
\[\left\{\begin{aligned} &\mathfrak{L}_{1}w=\lambda w-b(x)\frac{wv}{1+m(x) w}&\text{in}\;\;\Omega,\\ &\mathfrak{L}_{2}v=\mu v-d(x)v^{2}&\text{in}\;\; \Omega,\\ &\mathfrak{B}_{1}w=\mathfrak{B}_{2}v=0&\text{on}\;\; \partial\Omega.\end{aligned}\right. \tag{1.8}\]
This problem has the tremendous advantage that it is uncoupled.
Our main results establish, for every \(\varepsilon\geq 0\), the existence of a component \(\mathscr{C}_{\varepsilon}^{+}\) of the set of coexistence states of (1.7), or (1.8), and ascertain their global structures according to whether \(\varepsilon>0\), or \(\varepsilon=0\). Precisely, when \(\varepsilon=0\), Theorems 4.1 and 4.2 show that \(\mathscr{C}_{0}^{+}\) behaves much as sketched in Figure 3, where the constants \(\Phi(\mu)\) and \(\varphi_{0}(\mu)\) are defined in (3.16) and (3.19), respectively. Later, Theorem 5.1 shows that, as \(\varepsilon>0\) perturbs from \(\varepsilon=0\), the component \(\mathscr{C}_{0}^{+}\) perturbs into \(\mathscr{C}_{\varepsilon}^{+}\) and that, since the coexistence states of (1.7) have uniform a priori bounds on compact subintervals of the parameter \(\lambda\), for any given \(\eta>0\), there exists \(\varepsilon_{0}=\varepsilon_{0}(\eta)>0\) such that \(\mathscr{C}_{\varepsilon}^{+}\) has, at least, two coexistence states for every \(\lambda\in[\varphi_{0}(\mu)+\eta,\Phi(\mu)-\eta]\) if \(\varepsilon\in(0,\varepsilon_{0}]\), as illustrated by Figure 4. This multiplicity result is new even for the simplest prototype model introduced by Casal et al. [5].
Although in the classical setting of Casal et al. [5], Du and Lou [11] proved the existence of the \(S\)-shaped diagrams computed in [5] for sufficiently large \(\gamma>0\) and \(c>0\), with \(\mu>\sigma_{0,2}\equiv\sigma_{0}[\mathfrak{L}_{2},\mathfrak{B}_{2},\Omega]\) sufficiently close to \(\sigma_{0,2}\), the reader should be aware that, in this paper, \(c(x)\) can degenerate and take arbitrary values, and that \(\mu>\sigma_{0,2}\) is arbitrary. Actually, the multiplicity result of this paper has a different nature than that inherent to the \(S\)-shaped diagrams discovered in [5]. In \(S\)-shaped bifurcation diagrams, the problem has, at least, two coexistence states if \(\lambda\in[\Phi(\mu)-\eta,\Phi(\mu)]\), while it has, at least, three, if \(\lambda\in(\Phi(\mu),\Phi(\mu)+\eta]\), for sufficiently small \(\eta>0\), as illustrated in the second picture of Figure 8. In strong contrast, the main result of this paper establishes that, for sufficiently large \(\gamma>0\), (1.1) has, at least, two coexistence states in any compact subinterval of \((\varphi_{0}(\mu),\Phi(\mu))\), regardless of the size and shape of the function coefficient \(c(x)\) and of how large \(\mu\) is. Rather surprisingly, this occurs regardless of the size of the support of the saturation term, measured by \(m(x)\), which might be arbitrarily small, like an atom in a galaxy. A similar phenomenon, though in a very different problem, was observed by Lopez-Gomez and Rabinowitz [26].
We end this paper by analyzing a simple prototype model with constant coefficients and non-flux boundary conditions, where the constant steady-states are given by a simple algebraic system. Among other things, we will establish the existence of \(S\)-shaped curves of coexistence states when \(bc>ad\) and \(\varepsilon\) is sufficiently large. This example shows that our multiplicity theorem, for sufficiently small \(\varepsilon>0\), has nothing to do with the formation of \(S\)-shaped components of coexistence states.
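To fix ideas before Section 6, the following minimal sketch traces such an algebraic system numerically. Assuming constant coefficients, \(m\equiv 1\), operators \(\mathfrak{L}_{\kappa}\) without zero order terms, and non-flux boundary conditions, constant coexistence states \((w,v)\) of (1.7) satisfy \(\lambda=\varepsilon aw+\frac{bv}{1+w}\) and \(\mu=dv-\frac{\varepsilon cw}{1+w}\); eliminating \(v\) gives a curve \(\lambda(w)\), and a non-monotone \(\lambda(w)\) produces several constant coexistence states sharing the same \(\lambda\). The parameter values below are illustrative assumptions, not those used in Section 6.

```python
import numpy as np

# Illustrative parameters (assumptions; bc > ad, as in the S-shaped regime).
a, b, c, d, mu, eps = 1.0, 3.0, 3.0, 1.0, 5.0, 1.0

def lam(w):
    """lambda along the curve of constant coexistence states of (1.7)."""
    v = (mu + eps * c * w / (1 + w)) / d   # from the v-equation
    return eps * a * w + b * v / (1 + w)   # from the w-equation

w = np.linspace(1e-3, 20.0, 4000)
L = lam(w)
turns = int(np.sum(np.diff(np.sign(np.diff(L))) != 0))
print(f"turning points of lambda(w): {turns}")
# Each turning point adds a fold: two or more turns signal an S-shaped
# component, with several constant coexistence states for the same lambda.
```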
The plan of this paper is the following. Section 2 introduces some notations and abstract results that are used throughout the paper. Section 3 studies the stability of the semitrivial
curve \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\), where \(\theta_{[\mathfrak{L}_{2},\mu,d]}\) stands for the unique positive solution of

\[\left\{\begin{aligned} &\mathfrak{L}_{2}v=\mu v-dv^{2}&&\text{ in }\Omega,\\ &\mathfrak{B}_{2}v=0&&\text{ on }\partial\Omega,\end{aligned}\right.\]
which exists if, and only if, \(\mu>\sigma_{0,2}\), and analyzes the local bifurcation to coexistence states of (1.7) from it, with special emphasis on the uniform dependence of these local bifurcations on the parameter \(\varepsilon\geq 0\), which is a subtle issue. Section 4 studies the uncoupled system (1.8), establishing the global structure of the component \(\mathscr{C}_{0}^{+}\) near \(\varphi_{0}(\mu)\), its bifurcation point from infinity, and \(\Phi(\mu)\), its bifurcation point from the semitrivial positive solution \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\). Then, the analysis carried out in Sections 3 and 4, combined with some sophisticated topological and global continuation arguments, will drive us to the proof of Theorem 5.1 of Section 5, which is our main multiplicity result. Finally, in Section 6 we analyze a very simple example with \(S\)-shaped components of coexistence states. A previous analysis of this example is imperative for tackling the problem of the global existence of \(S\)-shaped bifurcation diagrams in its greatest generality, which will be pursued in a forthcoming paper.
## 2 Preliminaries
This section collects some results scattered in a series of papers and monographs that are going to be used throughout this paper. As a direct consequence of the elliptic \(L^{p}\)-theory (see, e.g., Chapters 4 and 5 of [22]), it becomes apparent that any non-negative weak solution of (1.7), \((w,v)\), satisfies
\[w\in\mathscr{W}_{1}\equiv\bigcap_{p\geq N}W^{2,p}_{\mathfrak{B}_{1}}(\Omega), \qquad v\in\mathscr{W}_{2}\equiv\bigcap_{p\geq N}W^{2,p}_{\mathfrak{B}_{2}}( \Omega),\]
where, for every \(\kappa=1,2\) and \(p\geq N\), \(W^{2,p}_{\mathfrak{B}_{\kappa}}(\Omega)\) stands for the Sobolev space of the functions \(z\in W^{2,p}(\Omega)\) such that \(\mathfrak{B}_{\kappa}z=0\) on \(\partial\Omega\). Thus, \((w,v)\) is a strong solution of (1.7). In particular, \(w\) and \(v\) are twice classically differentiable almost everywhere in \(\Omega\) and they are classical solutions in the sense of [22, Def. 4.1]. By the Sobolev embeddings and the Rellich-Kondrashov theorem, it is easily seen that \(\mathscr{W}_{\kappa}\hookrightarrow\mathcal{C}^{1}_{\mathfrak{B}_{\kappa}}(\bar {\Omega})\), \(\kappa=1,2\), with compact embeddings, where \(\mathcal{C}^{1}_{\mathfrak{B}_{\kappa}}(\bar{\Omega})\) stands for the set of functions \(z\in\mathcal{C}^{1}(\bar{\Omega})\) such that \(\mathfrak{B}_{\kappa}z=0\) on \(\partial\Omega\) (see [22, Ch. 4] if necessary).
Throughout this paper, for every weight function \(V\in L^{\infty}(\Omega)\) and \(\kappa=1,2\), we denote by \(\sigma_{0}[\mathfrak{L}_{\kappa}+V,\mathfrak{B}_{\kappa},\Omega]\) the principal eigenvalue of the linear eigenvalue problem
\[\left\{\begin{aligned} &(\mathfrak{L}_{\kappa}+V)\,\varphi= \tau\varphi&&\text{ in }\,\Omega,\\ &\mathfrak{B}_{\kappa}\varphi=0&&\text{ on }\,\partial\Omega,\end{aligned}\right. \tag{2.1}\]
whose existence and uniqueness in our general setting was established by [22, Th. 7.7]. According to Corollary 7.1 and Theorem 7.9 of [22], \(\sigma_{0}[\mathfrak{L}_{\kappa}+V,\mathfrak{B}_{\kappa},\Omega]\) is strictly dominant
and algebraically simple. In particular, it is the lowest real eigenvalue. Moreover, by [22, Th. 7.6], for every \(\kappa=1,2\), the associated principal eigenfunction, unique up to a multiplicative positive constant, can be taken to be strongly positive in \(\Omega\), \(\varphi\gg_{\kappa}0\), in the sense that
\[\varphi(x)>0\;\;\text{for all}\;\;x\in\Omega\cup\Gamma_{1}^{\kappa}\;\;\text{ and}\;\;\frac{\partial\varphi}{\partial n}(x)<0\;\;\text{for all}\;\;x\in\Gamma_{0}^{\kappa},\]
where \(n\) stands for the outward unit vector field to \(\Omega\) along \(\partial\Omega\). Subsequently, we collect some important results that are going to be invoked throughout this paper. The first one, going back to Cano-Casanova and Lopez-Gomez [4] in its present generality, establishes the monotonicity of the principal eigenvalue with respect to the potential.
**Theorem 2.1**.: _Let \(V_{1},V_{2}\in L^{\infty}(\Omega)\) be such that \(V_{1}\lneq V_{2}\). Then, for every \(\kappa=1,2\),_
\[\sigma_{0}\left[\mathfrak{L}_{\kappa}+V_{1},\mathfrak{B}_{\kappa},\Omega \right]<\sigma_{0}\left[\mathfrak{L}_{\kappa}+V_{2},\mathfrak{B}_{\kappa}, \Omega\right].\]
_Thus, the map \(V\mapsto\sigma_{0}\left[\mathfrak{L}_{\kappa}+V,\mathfrak{B}_{\kappa},\Omega\right]\) is continuous in \(L^{\infty}(\Omega)\) and increasing._
The next characterization theorem is [22, Th. 7.10]. It goes back to Lopez-Gomez and Molina-Meyer [23] for cooperative systems under Dirichlet boundary conditions, and to Amann and Lopez-Gomez [2] in the present setting. The equivalence between (a) and (c) was established, simultaneously with [23], for the single equation under Dirichlet boundary conditions by Berestycki, Nirenberg and Varadhan [3]. However, (b) is the most useful condition from the point of view of applications.
**Theorem 2.2**.: _For every \(V\in L^{\infty}(\Omega)\) and \(\kappa=1,2\), the next conditions are equivalent:_
* \(\sigma_{0}\left[\mathfrak{L}_{\kappa}+V,\mathfrak{B}_{\kappa},\Omega\right]>0\)_._
* _The term_ \((\mathfrak{L}_{\kappa}+V,\mathfrak{B}_{\kappa},\Omega)\) _possesses a positive strict supersolution,_ \(h\in\mathscr{W}_{\kappa}\)_, i.e.,_ \(h\) _satisfies_ \(h\gtrsim 0\) _and_ \[\left\{\begin{aligned} &(\mathfrak{L}_{\kappa}+V)h\geq 0&& \text{in}\;\;\Omega,\\ &\mathfrak{B}_{\kappa}h\geq 0&&\text{on}\;\;\partial \Omega,\end{aligned}\right.\] _with some of these inequalities strict._
* _The term_ \((\mathfrak{L}_{\kappa}+V,\mathfrak{B}_{\kappa},\Omega)\) _satisfies the strong maximum principle, i.e., every function_ \(z\in\mathscr{W}_{\kappa}\) _such that_ \[\left\{\begin{aligned} &(\mathfrak{L}_{\kappa}+V)z\geq 0&& \text{in}\;\;\Omega,\\ &\mathfrak{B}_{\kappa}z\geq 0&&\text{on}\;\;\partial \Omega,\end{aligned}\right.\] _with some of these inequalities strict, satisfies_ \[z(x)>0\;\;\text{for all}\;\;x\in\Omega\cup\Gamma_{1}^{\kappa}\;\;\text{and}\;\; \frac{\partial z}{\partial n}(x)<0\;\;\text{for all}\;\;x\in z^{-1}(0)\cap \Gamma_{0}^{\kappa}.\] _To shorten notations, when this occurs, we will simply say that_ \(z\gg_{\kappa}0\)
The next result goes back to Fraile et al. [16, Th. 3.5] for \(\beta_{\kappa}\geq 0\). In the general case when \(\beta_{\kappa}\) changes sign one can either use the change of variable of Fernandez-Rincon and Lopez-Gomez [14, Sect. 3] to reduce the problem to the setting of [16], or one might derive it directly from Theorem 1.1 of Daners and Lopez-Gomez [9]. Subsequently, we say that \(z_{1}\ll_{\kappa}z_{2}\) if \(z_{2}-z_{1}\gg_{\kappa}0\).
**Theorem 2.3**.: _Suppose \(\varrho\in\mathbb{R}\) and \(\xi\in C(\bar{\Omega};\mathbb{R})\) satisfies \(\xi(x)>0\) for all \(x\in\bar{\Omega}\). Then, for every \(\kappa=1,2\) and \(V\in L^{\infty}(\Omega)\), the semilinear boundary value problem_
\[\left\{\begin{aligned} &(\mathfrak{L}_{\kappa}+V)z=\varrho z-\xi(x)z ^{2}&&\text{in}\;\;\Omega,\\ &\mathfrak{B}_{\kappa}z=0&&\text{on}\;\;\partial \Omega,\end{aligned}\right. \tag{2.2}\]
_admits a positive solution if, and only if, \(\varrho>\varrho_{\kappa}\equiv\sigma_{0}\left[\mathfrak{L}_{\kappa}+V,\mathfrak{B}_{\kappa},\Omega\right]\). Moreover, it is unique if it exists, and, denoting it by \(z_{\varrho,\kappa}\equiv\theta_{[\mathfrak{L}_{\kappa}+V,\varrho,\xi]}\), we have that \(z_{\varrho,\kappa}\gg_{\kappa}0\) and_

1. _the map_ \(\varrho\mapsto z_{\varrho,\kappa}\) _is point-wise increasing provided_ \(\varrho>\varrho_{\kappa}\)_,_
2. \(z_{\varrho,\kappa}\) _bifurcates from_ \(z=0\) _at_ \(\varrho=\varrho_{\kappa}\)_,_
3. _as a consequence of Theorem_ 2.2_, if_ \(\bar{u}\) _(resp._ \(\underline{u}\)_) is a positive strict supersolution (resp. subsolution) of (_2.2_), then_ \(z_{\varrho,\kappa}\ll_{\kappa}\bar{u}\) _(resp._ \(\underline{u}\ll_{\kappa}z_{\varrho,\kappa}\)_) provided_ \(\varrho>\varrho_{\kappa}\)_._
More precisely, in this paper we denote by \(\theta_{[\mathfrak{L}_{\kappa}+V,\varrho,\xi]}\) the maximal non-negative solution of (2.2). Then, due to Theorem 2.3,
\[\theta_{[\mathfrak{L}_{\kappa}+V,\varrho,\xi]}:=\left\{\begin{aligned} & 0&&\text{ if }\;\varrho\leq\varrho_{\kappa},\\ &\gg_{\kappa}0&&\text{ if }\;\varrho>\varrho_{\kappa}. \end{aligned}\right.\]
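As a concrete illustration of Theorem 2.3 (a classical special case, recorded here only as an example), take \(\mathfrak{L}_{\kappa}=-\Delta\), \(V=0\), \(\xi\equiv 1\), and Dirichlet boundary conditions, i.e., \(\Gamma_{1}^{\kappa}=\emptyset\) in (1.3). Then the logistic problem

\[\left\{\begin{aligned} &-\Delta z=\varrho z-z^{2}&&\text{ in }\Omega,\\ &z=0&&\text{ on }\partial\Omega,\end{aligned}\right.\]

admits a positive solution \(\theta_{[-\Delta,\varrho,1]}\) if, and only if, \(\varrho>\sigma_{0}[-\Delta,\mathfrak{B}_{\kappa},\Omega]\), the first Dirichlet eigenvalue of \(-\Delta\) in \(\Omega\); it is unique and increases pointwise with \(\varrho\).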
Theorem 2.3 was generalized by Fraile et al. [16] to cover the case when \(\xi\geq 0\) vanishes on some nice subdomain of \(\Omega\), and by Daners and Lopez-Gomez [9, Th. 1.1] to characterize the range of \(\varrho\)'s for which (2.2) admits a positive solution under no requirements on the nature of \(\xi^{-1}(0)\).
**Corollary 2.1**.: _According to Theorem 2.3, we can conclude that_
1. _(1.1) has a semitrivial positive solution of the form_ \((u,0)\) _if, and only if,_ \(\lambda>\sigma_{0,1}\equiv\sigma_{0}[\mathfrak{L}_{1},\mathfrak{B}_{1},\Omega]\)_, and, in such case,_ \(u=\theta_{[\mathfrak{L}_{1},\lambda,a]}\)_._
2. _Similarly, (_1.1_) has a semitrivial positive solution of the form_ \((0,v)\) _if, and only if,_ \(\mu>\sigma_{0,2}\equiv\sigma_{0}[\mathfrak{L}_{2},\mathfrak{B}_{2},\Omega]\)_, and, in such case,_ \(v=\theta_{[\mathfrak{L}_{2},\mu,d]}\)_._
## 3 Bifurcation of coexistence states from \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\)
In this section we analyze the bifurcation of coexistence states from the semitrivial curve \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) in the problem (1.7). We are particularly interested in ascertaining the nature of
the local bifurcation according to the value of the parameter \(\varepsilon>0\). The linearized stability of \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) is determined by the signs of the real parts of the eigenvalues of the problem
\[\left\{\begin{aligned} &\begin{pmatrix}\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]}-\lambda&0\\ -\varepsilon c\theta_{[\mathfrak{L}_{2},\mu,d]}&\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu\end{pmatrix}\begin{pmatrix}w\\ v\end{pmatrix}=\tau\begin{pmatrix}w\\ v\end{pmatrix}&&\text{in}\;\Omega,\\ &\mathfrak{B}_{1}w=\mathfrak{B}_{2}v=0&&\text{on}\;\partial\Omega.\end{aligned}\right. \tag{3.1}\]
The next result holds.
**Theorem 3.1**.: _Setting \(\Phi(\mu)\equiv\sigma_{0}\left[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2}, \mu,d]},\mathfrak{B}_{1},\Omega\right]\) for all \(\mu>\sigma_{0}[\mathfrak{L}_{2},\mathfrak{B}_{2},\Omega]\), the semitrivial solution \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) is linearly unstable if, and only if, \(\lambda>\Phi(\mu)\), whereas it is linearly stable if, and only if, \(\lambda<\Phi(\mu)\). Thus, \(\lambda=\Phi(\mu)\) is the curve of change of stability of \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\)._
Proof.: We first determine the eigenvalues with associated eigenvectors \((w,v)\) such that \(w=0\) and \(v\neq 0\). By (3.1), these eigenvalues satisfy
\[\left\{\begin{aligned} &(\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2}, \mu,d]}-\mu)v=\tau v&\text{in}\;\Omega,\\ &\mathfrak{B}_{2}v=0&\text{on}\;\partial\Omega.\end{aligned}\right. \tag{3.2}\]
By Theorem 2.1, the definition of \(\theta_{[\mathfrak{L}_{2},\mu,d]}\), and the uniqueness of the principal eigenvalue,
\[\sigma_{0}\left[\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu, \mathfrak{B}_{2},\Omega\right]>\sigma_{0}\left[\mathfrak{L}_{2}+d\theta_{[ \mathfrak{L}_{2},\mu,d]}-\mu,\mathfrak{B}_{2},\Omega\right]=0. \tag{3.3}\]
Thus, by the dominance of the principal eigenvalue (see [22, Th. 7.8]),
\[\operatorname{Re}\tau\geq\sigma_{0}\left[\mathfrak{L}_{2}+2d\theta_{[ \mathfrak{L}_{2},\mu,d]}-\mu,\mathfrak{B}_{2},\Omega\right]>0\]
for any eigenvalue, \(\tau\), of (3.2).
Now, we will ascertain the real parts of the eigenvalues of (3.1) with associated eigenfunctions, \((w,v)\), such that \(w\neq 0\). By (3.1), they should satisfy
\[\left\{\begin{aligned} &(\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2}, \mu,d]}-\lambda)w=\tau w&\text{in}\;\Omega,\\ &\mathfrak{B}_{1}w=0&\text{on}\;\partial\Omega. \end{aligned}\right. \tag{3.4}\]
These eigenvalues consist of the sequence
\[\tau_{j}:=\sigma_{j}\left[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega\right]-\lambda\qquad\text{for}\;\,j\geq 0, \tag{3.5}\]
where \(\{\sigma_{j}\left[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]}, \mathfrak{B}_{1},\Omega\right]\}_{j\geq 0}\) is the sequence of eigenvalues of (2.1) with \(\kappa=1\) and \(V=b\theta_{[\mathfrak{L}_{2},\mu,d]}\). As the principal eigenvalue is dominant, it is apparent that
\[\operatorname{Re}\tau_{j}\geq\tau_{0}=\Phi(\mu)-\lambda\quad\text{for all}\;\,j\geq 0.\]
Assume \(\lambda<\Phi(\mu)\). Then, \(\operatorname{Re}\tau_{j}>0\) for all \(j\geq 0\). Thus, any eigenvalue of (3.1) has a positive real part, i.e., \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) is linearly stable.
Assume now \(\lambda>\Phi(\mu)\). Then, \(\tau_{0}<0\). Let \(w\neq 0\) be a principal eigenfunction associated to \(\tau_{0}\). Then, the second equation of (3.1) becomes
\[\left\{\begin{aligned} &(\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu-\tau_{0})v=\varepsilon c\theta_{[\mathfrak{L}_{2},\mu,d]}w&&\text{in}\;\;\Omega,\\ &\mathfrak{B}_{2}v=0&&\text{on}\;\;\partial\Omega.\end{aligned}\right. \tag{3.6}\]
Since \(-\tau_{0}>0\), it follows from Theorem 2.1 and (3.3) that
\[\sigma_{0}[\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu-\tau_{0}, \mathfrak{B}_{2},\Omega]>\sigma_{0}[\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_ {2},\mu,d]}-\mu,\mathfrak{B}_{2},\Omega]>0.\]
Thus, thanks to Theorem 2.2,
\[v=\left(\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu-\tau_{0} \right)^{-1}(\varepsilon c\theta_{[\mathfrak{L}_{2},\mu,d]}w)\]
provides us with the unique solution of (3.6). Therefore, \((w,v)\) is an eigenfunction of (3.1) associated to \(\tau_{0}<0\) and hence, \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) is linearly unstable.
**Remark 3.1**.: According to the theorems of Lyapunov on linearized stability, it becomes apparent that \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) is exponentially asymptotically stable if \(\lambda<\Phi(\mu)\), while it is unstable if \(\lambda>\Phi(\mu)\) (see, e.g., Henry [18, Sec. 5.1]).
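In the same spirit as the earlier sketches, the curve of change of stability \(\lambda=\Phi(\mu)\) can be approximated in the one-dimensional Dirichlet model. The sketch below is ours and purely illustrative: it computes \(\theta_{[\mathfrak{L}_{2},\mu,d]}\) with the monotone iteration introduced above and evaluates \(\Phi(\mu)\) as the smallest eigenvalue of the (self-adjoint) discretization of \(\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]}\); according to (3.5), the dominant eigenvalue of the \(w\)-block of (3.1) is \(\tau_{0}=\Phi(\mu)-\lambda\), whose sign switches exactly at \(\lambda=\Phi(\mu)\).

```python
import numpy as np

def dirichlet_lap(n):
    # finite-difference -u'' on (0,1) with homogeneous Dirichlet conditions
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def theta(A, rho, xi, iters=4000):
    # maximal non-negative solution of A u = rho*u - xi*u**2 (monotone iteration)
    n = A.shape[0]
    rho = np.broadcast_to(rho, (n,)).astype(float)
    xi = np.broadcast_to(xi, (n,)).astype(float)
    c = max(rho.max() + 1.0, 1.0) / xi.min()
    M = 2.0 * xi.max() * c + np.abs(rho).max() + 1.0
    T = np.linalg.inv(A + M * np.eye(n))
    u = np.full(n, c)
    for _ in range(iters):
        u = T @ ((rho + M) * u - xi * u**2)
    return u

n = 200
L1 = L2 = dirichlet_lap(n)      # both operators are -u'' in this model
b = d = np.ones(n)              # illustrative coefficients
mu = 15.0                       # mu > sigma_{0,2} ~ pi**2
th2 = theta(L2, mu, d)          # theta_{[L2, mu, d]}
Phi = np.linalg.eigvalsh(L1 + np.diag(b * th2))[0]
print(f"Phi(mu) = {Phi:.4f}")
for lam in (Phi - 0.5, Phi + 0.5):
    tau0 = Phi - lam            # dominant eigenvalue of the w-block of (3.1)
    verdict = "linearly stable" if tau0 > 0 else "linearly unstable"
    print(f"lambda = {lam:7.4f}: tau_0 = {tau0:+.4f} -> (0, theta) is {verdict}")
```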
Subsequently, we set
\[\sigma_{0,\kappa}\equiv\sigma_{0}[\mathfrak{L}_{\kappa},\mathfrak{B}_{\kappa},\Omega],\qquad\kappa=1,2, \tag{3.7}\]
and pick any real number, \(e\), such that \(e>\max\{-\sigma_{0,1},-\sigma_{0,2}\}\). Then, for every \(\kappa=1,2\),
\[\sigma_{0}[\mathfrak{L}_{\kappa}+e,\mathfrak{B}_{\kappa},\Omega]=\sigma_{0, \kappa}+e>0\]
and hence, by Theorem 2.2, \((\mathfrak{L}_{\kappa}+e,\mathfrak{B}_{\kappa},\Omega)\) is an invertible operator with strongly positive inverse. Obviously, the solutions of the problem (1.7) are given by the zeroes of the operator
\[\mathfrak{F}:\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathcal{C}^{1}_ {\mathfrak{B}_{1}}(\bar{\Omega})\times\mathcal{C}^{1}_{\mathfrak{B}_{2}}(\bar {\Omega})\to\mathscr{W}_{1}\times\mathscr{W}_{2},\]
defined, for every \(\lambda,\mu,\varepsilon\in\mathbb{R}\), \(w\in\mathcal{C}^{1}_{\mathfrak{B}_{1}}(\bar{\Omega})\) and \(v\in\mathcal{C}^{1}_{\mathfrak{B}_{2}}(\bar{\Omega})\), by
\[\mathfrak{F}(\lambda,\mu,\varepsilon,w,v):=\begin{pmatrix}w-(\mathfrak{L}_{1}+e)^{-1}\left[(\lambda+e)w-\varepsilon aw^{2}-b\frac{wv}{1+mw}\right]\\ v-(\mathfrak{L}_{2}+e)^{-1}\left[(\mu+e)v-dv^{2}+\varepsilon c\frac{wv}{1+mw}\right]\end{pmatrix}. \tag{3.8}\]
The operator \(\mathfrak{F}\) is a compact perturbation of the identity map in \(\mathcal{C}^{1}_{\mathfrak{B}_{1}}(\bar{\Omega})\times\mathcal{C}^{1}_{ \mathfrak{B}_{2}}(\bar{\Omega})\). Moreover, it is Frechet differentiable and, since \(D_{(w,v)}\mathfrak{F}\) is a linear compact perturbation of the identity map, \(D_{(w,v)}\mathfrak{F}\) is a Fredholm operator of index zero. Actually, \(\mathfrak{F}\) is real analytic in an open region containing the first quadrant \(w\geq 0\), \(v\geq 0\).
The next result shows that the coexistence states bifurcate from the semitrivial positive solution \((0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) along the curve \(\lambda=\Phi(\mu)\). It is a direct consequence of the theorem of bifurcation from simple eigenvalues of Crandall and Rabinowitz [6]. It provides us with the local structure of the set of bifurcating coexistence states.
**Theorem 3.2**.: _For every \(\mu>\sigma_{0,2}\) and \(\varepsilon\in\mathbb{R}\), there exist \(\delta=\delta(\mu,\varepsilon)>0\) and an analytic map \((\lambda,w,v):(-\delta,\delta)\to\mathbb{R}\times\mathscr{W}_{1}\times\mathscr{W }_{2}\) such that:_
1. \((\lambda(0),w(0),v(0))=\big{(}\Phi(\mu),0,\theta_{[\mathfrak{L}_{2},\mu,d]} \big{)}\)_._
2. \(\mathfrak{F}(\lambda(s),\mu,\varepsilon,w(s),v(s))=0\) _for all_ \(s\in(-\delta,\delta)\)_._
3. \(v(s)\gg_{2}0\) _if_ \(s\in(-\delta,\delta)\)_,_ \(w(s)\gg_{1}0\) _if_ \(s\in(0,\delta)\)_, and_ \(w(s)\ll_{1}0\) _if_ \(s\in(-\delta,0)\)_._
4. _The set of solutions of (_1.7_) in a neighborhood of_ \((\lambda,w,v)=\big{(}\Phi(\mu),0,\theta_{[\mathfrak{L}_{2},\mu,d]}\big{)}\) _consists of the curves_ \(\big{(}\lambda,0,\theta_{[\mathfrak{L}_{2},\mu,d]}\big{)}\)_,_ \(\lambda\thicksim\Phi(\mu)\)_, and_ \((\lambda(s),w(s),v(s))\)_,_ \(s\in(-\delta,\delta)\)_._
_Moreover, there are two functions \(w_{1},w_{1}^{*}\gg_{1}0\) such that_
\[\lambda^{\prime}(\varepsilon)\equiv\frac{\partial\lambda}{\partial s}(0,\varepsilon)=\int_{\Omega}\left(\varepsilon a-bm\theta_{[\mathfrak{L}_{2},\mu,d]}\right)w_{1}^{2}w_{1}^{*}+\int_{\Omega}b\left(\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu\right)^{-1}\left(\varepsilon c\theta_{[\mathfrak{L}_{2},\mu,d]}w_{1}\right)w_{1}w_{1}^{*}. \tag{3.9}\]
Proof.: By definition, \(\mathfrak{F}(\lambda,\mu,\varepsilon,0,\theta_{[\mathfrak{L}_{2},\mu,d]})=0\). Moreover, the Frechet differential
\[\mathscr{L}(\lambda,\varepsilon):=D_{(w,v)}\mathfrak{F}(\lambda,\mu, \varepsilon,0,\theta_{[\mathfrak{L}_{2},\mu,d]})\]
is the operator defined by
\[\mathscr{L}(\lambda,\varepsilon)(w,v)=\left(\begin{array}{c}w-(\mathfrak{L} _{1}+e)^{-1}\left[(\lambda+e)w-b\theta_{[\mathfrak{L}_{2},\mu,d]}w\right]\\ v-(\mathfrak{L}_{2}+e)^{-1}\left[(\mu+e)v-2d\theta_{[\mathfrak{L}_{2},\mu,d]}v+ \varepsilon c\theta_{[\mathfrak{L}_{2},\mu,d]}w\right]\end{array}\right).\]
Thus, at \(\lambda=\Phi(\mu)\) we have that \((w_{1},v_{1})\in N[\mathscr{L}(\Phi(\mu),\varepsilon)]\) if, and only if,
\[\left\{\begin{aligned} &\left(\mathfrak{L}_{1}+b\theta_{[ \mathfrak{L}_{2},\mu,d]}\right)w_{1}=\Phi(\mu)w_{1},\\ &\left(\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu \right)v_{1}=\varepsilon c\theta_{[\mathfrak{L}_{2},\mu,d]}w_{1},\end{aligned}\right. \tag{3.10}\]
in \(\Omega\) and \(\mathfrak{B}_{1}w_{1}=\mathfrak{B}_{2}v_{1}=0\) in \(\partial\Omega\). Since \(\Phi(\mu)\equiv\sigma_{0}\left[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2}, \mu,d]},\mathfrak{B}_{1},\Omega\right]\), by the simplicity of \(\Phi(\mu)\), \(w_{1}\) is unique, up to multiplicative constants. Actually, it can be chosen to satisfy \(w_{1}\gg_{1}0\). Moreover, by (3.3) and Theorem 2.2, the second equation of (3.10) implies that
\[v_{1}=\big{(}\mathfrak{L}_{2}+2d\theta_{[\mathfrak{L}_{2},\mu,d]}-\mu\big{)}^ {-1}\left(\varepsilon c\theta_{[\mathfrak{L}_{2},\mu,d]}w_{1}\right). \tag{3.11}\]
Note that \(\operatorname{sign}v_{1}=\operatorname{sign}\varepsilon\), by Theorem 2.2. Therefore,
\[N[\mathscr{L}(\Phi(\mu),\varepsilon)]=\operatorname{span}[\varphi_{0}],\qquad \varphi_{0}\equiv(w_{1},v_{1}),\quad w_{1}\gg_{1}0.\]
Subsequently, we normalize \(w_{1}\gg_{1}0\) so that \(\int_{\Omega}w_{1}^{2}(x)\,dx=1\), and denote by \(D_{\lambda}\mathscr{L}(\lambda,\varepsilon)\) the derivative of \(\mathscr{L}(\lambda,\varepsilon)\) with respect to \(\lambda\). Then,
\[D_{\lambda}\mathscr{L}(\Phi(\mu),\varepsilon)\varphi_{0}=\left(\begin{array}[] {c}-(\mathfrak{L}_{1}+e)^{-1}w_{1}\\ 0\end{array}\right)\notin R[\mathscr{L}(\Phi(\mu),\varepsilon)], \tag{3.12}\]
i.e., the transversality condition of Crandall and Rabinowitz [6] holds. Indeed, arguing by contradiction, assume that there exists \((w,v)\) such that
\[\mathscr{L}(\Phi(\mu),\varepsilon)(w,v)=\left(\begin{aligned} -(\mathfrak{L}_{1}+e)^{-1}w_{1}\\ 0\end{aligned}\right).\]
Then, we find from the first equation of this system that
\[\left\{\begin{aligned} &\left[\mathfrak{L}_{1}+b\theta_{[ \mathfrak{L}_{2},\mu,d]}-\Phi(\mu)\right]w=-w_{1}&\text{in}\;\; \Omega,\\ &\mathfrak{B}_{1}w=0&\text{on}\;\;\partial\Omega. \end{aligned}\right.\]
By Corollary 7.1(f) of [22] this is impossible. This contradiction shows (3.12). Consequently, the first four assertions of the theorem follow from the main theorem of [6]. To complete the proof it remains to show (3.9). Setting
\[(\lambda(s),w(s),v(s))=\left(\Phi(\mu)+\sum_{j=1}^{\infty}s^{j}\lambda_{j}, \sum_{j=1}^{\infty}s^{j}w_{j},\theta_{[\mathfrak{L}_{2},\mu,d]}+\sum_{j=1}^{ \infty}s^{j}v_{j}\right),\quad s\thicksim 0,\]
and substituting into (1.7), it becomes apparent that
\[\left\{\begin{aligned} &\left(\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]}-\Phi(\mu)\right)w_{2}=\left(\lambda_{1}+bm\theta_{[\mathfrak{L}_{2},\mu,d]}w_{1}-\varepsilon aw_{1}-bv_{1}\right)w_{1}&&\text{in}\;\;\Omega,\\ &\mathfrak{B}_{1}w_{2}=0&&\text{on}\;\;\partial\Omega.\end{aligned}\right. \tag{3.13}\]
Thanks to Corollary 7.1(e) of [22], it is easily seen that the kernel of the \(L^{2}(\Omega)\)-adjoint problem of \(\left(\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]}-\Phi(\mu),\mathfrak{B}_{1},\Omega\right)\) is generated by some \(w_{1}^{*}\gg_{1}0\). Therefore, multiplying the problem (3.13) by \(w_{1}^{*}\) and integrating in \(\Omega\), it follows from (3.11) that (3.9) holds. This ends the proof.
**Remark 3.2**.: As the dependence of \(\mathfrak{F}\) on \(\varepsilon\in\mathbb{R}\) is also analytic, by the implicit function theorem used in the proof of the theorem of Crandall and Rabinowitz [6], it becomes apparent that the bifurcated curve
\[(\lambda(s),w(s),v(s))\equiv(\lambda(s,\varepsilon),w(s,\varepsilon),v(s, \varepsilon))\]
is also analytic with respect to the parameter \(\varepsilon\), though in Theorem 3.2 we have refrained from emphasizing this dependence on the parameter \(\varepsilon\) in order to simplify the notations as much as possible.
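The sign of \(\lambda^{\prime}(0)\) in (3.9) can also be evaluated numerically. The following sketch is ours and only illustrative: it assumes the one-dimensional self-adjoint Dirichlet model with constant illustrative coefficients, so that \(w_{1}^{*}\) may be identified with \(w_{1}\) after normalization; for the small value of \(\varepsilon\) chosen, the computed quantity is negative, in agreement with the subcritical behavior for small \(\varepsilon\) observed in Section 4 (cf. (4.21)).

```python
import numpy as np

def dirichlet_lap(n):
    # finite-difference -u'' on (0,1) with homogeneous Dirichlet conditions
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def theta(A, rho, xi, iters=4000):
    # maximal non-negative solution of A u = rho*u - xi*u**2 (monotone iteration)
    n = A.shape[0]
    rho = np.broadcast_to(rho, (n,)).astype(float)
    xi = np.broadcast_to(xi, (n,)).astype(float)
    c = max(rho.max() + 1.0, 1.0) / xi.min()
    M = 2.0 * xi.max() * c + np.abs(rho).max() + 1.0
    T = np.linalg.inv(A + M * np.eye(n))
    u = np.full(n, c)
    for _ in range(iters):
        u = T @ ((rho + M) * u - xi * u**2)
    return u

n, mu, eps = 200, 15.0, 0.05
h = 1.0 / (n + 1)
a = b = c_ = d = m = 1.0                 # illustrative constant coefficients
L1 = L2 = dirichlet_lap(n)
th2 = theta(L2, mu, d)                   # theta_{[L2, mu, d]}
vals, vecs = np.linalg.eigh(L1 + np.diag(b * th2))
Phi, w1 = vals[0], vecs[:, 0]
w1 *= np.sign(w1[n // 2]) / np.sqrt(h)   # h * sum(w1**2) = 1; here w1* = w1
B2 = L2 + np.diag(2 * d * th2) - mu * np.eye(n)   # invertible by (3.3)
v1 = np.linalg.solve(B2, eps * c_ * th2 * w1)     # as in (3.11)
lam_prime = (h * np.sum((eps * a - b * m * th2) * w1**3)
             + h * np.sum(b * v1 * w1**2))        # discretization of (3.9)
print(f"Phi(mu) = {Phi:.4f},  lambda'(0) ~ {lam_prime:+.4e}")
```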
As a further application of the exchange stability principle of Crandall and Rabinowitz [7, Th. 1.16], the next result holds.
**Theorem 3.3**.: _The curve of coexistence states of (1.7) emanating from \(\left(\Phi(\mu),0,\theta_{[\mathfrak{L}_{2},\mu,d]}\right)\), denoted in Theorem 3.2 by \((\lambda(s),\mu,w(s),v(s))\) for sufficiently small \(s>0\), is unstable, with one-dimensional unstable manifold, if \(\lambda^{\prime}(0)<0\), and exponentially stable if \(\lambda^{\prime}(0)>0\)._
Proof.: According to (3.5), it is apparent that
\[\left\{\begin{aligned} \tau_{0}&>0\;\;\text{and}\;\; \tau_{j}>0\;\;\text{for all}\;j\geq 1\;\;\text{if}\;\lambda<\Phi(\mu),\\ \tau_{0}&=0\;\;\text{and}\;\;\tau_{j}>0\;\;\text{for all}\;j \geq 1\;\;\text{if}\;\lambda=\Phi(\mu),\\ \tau_{0}&<0\;\;\text{and}\;\;\tau_{j}>0\;\;\text{for all}\;j \geq 1\;\;\text{if}\;\lambda\gtrsim\Phi(\mu).\end{aligned}\right. \tag{3.14}\]
Thus, by the exchange stability principle, [7, Th. 1.16], \((\lambda(s),\mu,w(s),v(s))\) is linearly unstable (resp. stable) for sufficiently small \(s>0\) if \(\lambda^{\prime}(0)<0\) (resp. \(\lambda^{\prime}(0)>0\)). Moreover, maintaining the notations of the proof of Theorem 3.2, it follows from [21, Sec. 2.4] that, for sufficiently small \(s>0\),
\[\mathfrak{m}\left[D_{(w,v)}\mathfrak{F}(\lambda(s),\mu,\varepsilon,w(s),v(s)) \right]=\left\{\begin{aligned} \mathfrak{m}\left[\mathscr{L}(\Phi(\mu), \varepsilon)\right]+1&\text{if}\;\lambda^{\prime}(0)<0,\\ \mathfrak{m}\left[\mathscr{L}(\Phi(\mu),\varepsilon)\right]& \text{if}\;\lambda^{\prime}(0)>0,\end{aligned}\right.\]
where \(\mathfrak{m}(L)\) stands for the sum of the algebraic multiplicities of the real negative eigenvalues of \(L\). By (3.14), \(\mathfrak{m}\left[\mathscr{L}(\Phi(\mu),\varepsilon)\right]=0\). Therefore,
\[\mathfrak{m}\left[D_{(w,v)}\mathfrak{F}(\lambda(s),\mu,\varepsilon,w(s),v(s)) \right]=\left\{\begin{aligned} 1&\text{if}\;\lambda^{\prime}(0)<0,\\ 0&\text{if}\;\lambda^{\prime}(0)>0.\end{aligned}\right.\]
The principle of linearized stability of Lyapunov ends the proof.
Figure 1 sketches the corresponding local bifurcation diagrams in the transcritical case when \(\lambda^{\prime}(0)\neq 0\), according to the sign of \(\lambda^{\prime}(0)\). The arcs of analytic curve filled in by exponentially asymptotically stable solutions are plotted with continuous lines, whereas unstable solutions with one-dimensional unstable manifold are plotted with dashed lines. The \(\lambda\)-axis stands for the curve of semitrivial solutions \((\lambda,\mu,0,\theta_{[\mathfrak{L}_{2},\mu,d]})\). Thanks to Theorem 3.1, this solution is linearly unstable if \(\lambda>\Phi(\mu)\) and linearly stable if \(\lambda<\Phi(\mu)\).

Figure 1: Stability of the solutions filling in the bifurcating branches
We end this section by applying Theorems 4.1 and 5.1 of [24] to (1.7) with \(\varepsilon>0\). As a direct consequence, the next result holds.
**Theorem 3.4**.: _Suppose (1.7), with \(\varepsilon>0\), has a coexistence state, \((w,v)\). Then,_
\[\lambda>\varphi_{\varepsilon}(\mu)\equiv\sigma_{0}\Big{[}\mathfrak{L}_{1}+b\tfrac{\theta_{[\mathfrak{L}_{2},\mu,d]}}{1+m\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}},\mathfrak{B}_{1},\Omega\Big{]}. \tag{3.15}\]
_Conversely, under the following condition_
\[\lambda>\Phi(\mu)\equiv\sigma_{0}\left[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L }_{2},\mu,d]},\mathfrak{B}_{1},\Omega\right]\quad\text{and}\quad\mu>\Psi_{ \varepsilon}(\lambda), \tag{3.16}\]
_the problem (1.7) has, at least, a coexistence state._
Since \(\theta_{[\mathfrak{L}_{2},\mu,d]}=0\) if \(\mu\leq\sigma_{0,2}\), under this condition both (3.15) and (3.16) reduce to
\[\lambda>\sigma_{0,1}\quad\text{and}\quad\mu>\Psi_{\varepsilon}(\lambda). \tag{3.17}\]
Therefore, (3.17) is not only necessary but also sufficient for the existence of a coexistence state if \(\mu\leq\sigma_{0,2}\). Figure 2 sketches the construction of the wedges (3.15) and (3.16) given by Theorem 3.4. Note that, according to Theorem 2.1,
\[\varphi_{\varepsilon}(\mu)\equiv\sigma_{0}\Big{[}\mathfrak{L}_{1}+b\tfrac{ \theta_{[\mathfrak{L}_{2},\mu,d]}}{1+m\theta_{[\mathfrak{L}_{1},\lambda, \varepsilon a]}},\mathfrak{B}_{1},\Omega\Big{]}<\sigma_{0}[\mathfrak{L}_{1}+b \theta_{[\mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega\Big{]}\equiv\Phi(\mu)\]
for all \(\mu>\sigma_{0,2}\). More precisely, by Theorem 3.4, (1.7) has a coexistence state in the solid (dark) area of Figure 2, whereas outside the union of the solid and dashed patches of Figure 2, it cannot admit any coexistence state.

Figure 2: The coexistence regions of (1.7) according to Theorem 3.4
As already discussed by the authors in Section 3 of [24], the first picture of Figure 2 sketches the behavior of the curve \(\mu=\Psi_{\varepsilon}(\lambda)\), \(\lambda>\sigma_{0,1}\), when \(m(x)>0\) for all \(x\in\bar{\Omega}\), whereas the second picture shows it when \(\Omega_{0}:=\operatorname{int}m^{-1}(0)\) is non-empty, which is the general case dealt with in this paper. In the classical Holling-Tanner case when \(m(x)>0\) for all \(x\in\bar{\Omega}\), thanks to Theorem 2.2, it becomes apparent that
\[\Psi_{\varepsilon}(\lambda)\equiv\sigma_{0}\Big{[}\mathfrak{L}_{2}- \varepsilon c\tfrac{m\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}}{m(1+m \theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]})},\mathfrak{B}_{2},\Omega \Big{]}>\sigma_{0}\Big{[}\mathfrak{L}_{2}-\frac{\varepsilon c}{m},\mathfrak{B} _{2},\Omega\Big{]}\quad\text{for all}\;\;\lambda>\sigma_{0,1},\]
as illustrated in the first picture of Figure 2. However, when \(\Omega_{0}:=\operatorname{int}m^{-1}(0)\) is a nice (non-empty) open subset with \(\bar{\Omega}_{0}\subset\Omega\), by [4, Pr. 3.2], we have that
\[\Psi_{\varepsilon}(\lambda)\equiv\sigma_{0}\Big{[}\mathfrak{L}_{2}- \varepsilon c\tfrac{\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}}{1+m \theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}},\mathfrak{B}_{2},\Omega \Big{]}<\sigma_{0}\Big{[}\mathfrak{L}_{2}-\varepsilon c\theta_{[\mathfrak{L}_ {1},\lambda,\varepsilon a]},\mathfrak{D},\Omega_{0}\Big{]},\]
where \(\mathfrak{D}\) stands for the Dirichlet boundary operator on \(\partial\Omega_{0}\). Thus,
\[\lim_{\lambda\uparrow\infty}\Psi_{\varepsilon}(\lambda)=-\infty\quad\text{for all}\;\;\varepsilon>0,\]
as illustrated in the second picture of Figure 2, which is a behavior reminiscent of the one exhibited by the classical Lotka-Volterra model.
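The behavior of \(\Psi_{\varepsilon}\) described above can be reproduced numerically. The next sketch (ours, purely illustrative) evaluates \(\Psi_{\varepsilon}(\lambda)\) in the one-dimensional Dirichlet model with a dead core \(\Omega_{0}=(0.4,0.6)\) where \(m\) vanishes; as \(\lambda\) grows, \(\Psi_{\varepsilon}(\lambda)\) decreases without bound, in agreement with the second picture of Figure 2.

```python
import numpy as np

def dirichlet_lap(n):
    # finite-difference -u'' on (0,1) with homogeneous Dirichlet conditions
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def theta(A, rho, xi, iters=4000):
    # maximal non-negative solution of A u = rho*u - xi*u**2 (monotone iteration)
    n = A.shape[0]
    rho = np.broadcast_to(rho, (n,)).astype(float)
    xi = np.broadcast_to(xi, (n,)).astype(float)
    c = max(rho.max() + 1.0, 1.0) / xi.min()
    M = 2.0 * xi.max() * c + np.abs(rho).max() + 1.0
    T = np.linalg.inv(A + M * np.eye(n))
    u = np.full(n, c)
    for _ in range(iters):
        u = T @ ((rho + M) * u - xi * u**2)
    return u

n = 200
x = np.linspace(0, 1, n + 2)[1:-1]
L1 = L2 = dirichlet_lap(n)
eps, a, c_ = 0.5, 1.0, 1.0                  # illustrative values
m = (np.abs(x - 0.5) > 0.1).astype(float)   # dead core Omega_0 = (0.4, 0.6)
for lam in (12.0, 20.0, 40.0, 80.0):
    th1 = theta(L1, lam, eps * a)           # theta_{[L1, lambda, eps*a]}
    pot = eps * c_ * th1 / (1 + m * th1)
    Psi = np.linalg.eigvalsh(L2 - np.diag(pot))[0]
    print(f"lambda = {lam:5.1f}:  Psi_eps(lambda) = {Psi:9.3f}")
```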
Since
\[\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}=\varepsilon^{-1}\theta_{[ \mathfrak{L}_{1},\lambda,a]}, \tag{3.18}\]
it is apparent that
\[\begin{aligned}\lim_{\varepsilon\downarrow 0}\varphi_{\varepsilon}(\mu)&=\lim_{\varepsilon\downarrow 0}\sigma_{0}\Big{[}\mathfrak{L}_{1}+b\tfrac{\theta_{[\mathfrak{L}_{2},\mu,d]}}{1+\frac{m}{\varepsilon}\theta_{[\mathfrak{L}_{1},\lambda,a]}},\mathfrak{B}_{1},\Omega\Big{]}\\ &=\sigma_{0}\left[\mathfrak{L}_{1}+\left(1-\chi_{{}_{\rm int\,supp\,}m}\right)b(x)\theta_{[\mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega\right],\end{aligned}\]
where, for any subset \(A\subset\mathbb{R}^{N}\), \(\chi_{{}_{A}}\) stands for the characteristic function of the set \(A\), i.e., \(\chi_{{}_{A}}(x)=1\) if \(x\in A\), and \(\chi_{{}_{A}}(x)=0\) if \(x\in\mathbb{R}^{N}\setminus A\). In the next section, it will become apparent that the function
\[\varphi_{0}(\mu):=\sigma_{0}\left[\mathfrak{L}_{1}+\left(1-\chi_{{}_{\rm int\, supp\,}m}\right)b(x)\theta_{[\mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega \right],\qquad\mu>\sigma_{0,2}, \tag{3.19}\]
provides us with the left limiting curve of the region where the uncoupled model (1.8) possesses a coexistence state; recall that (1.8) is (1.7) with \(\varepsilon=0\). The curve \(\lambda=\varphi_{0}(\mu)\) has also been plotted in Figure 2. According to Theorem 2.1, since
\[1-\chi_{{}_{\rm int\,supp\,}m}\,\lesssim\,\frac{1}{1+m\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}}\quad\mbox{in}\;\;\Omega,\]
it follows that, for every \(\mu>\sigma_{0,2}\) and \(\varepsilon>0\),
\[\varphi_{0}(\mu)<\varphi_{\varepsilon}(\mu), \tag{3.20}\]
provided \(bm\gtrsim 0\), as illustrated in Figure 2. Finally, note that, for every \(\lambda>\sigma_{0,1}\),
\[\lim_{\varepsilon\downarrow 0}\Psi_{\varepsilon}(\lambda) =\lim_{\varepsilon\downarrow 0}\sigma_{0}\Big{[}\mathfrak{L}_{2}-c \frac{\theta_{[\mathfrak{L}_{1},\lambda,a]}}{1+\frac{m}{\varepsilon}\theta_{[ \mathfrak{L}_{1},\lambda,a]}},\mathfrak{B}_{2},\Omega\Big{]}\] \[=\sigma_{0}\left[\mathfrak{L}_{2}-\left(1-\chi_{{}_{\rm int\,supp \,}m}\right)c(x)\theta_{[\mathfrak{L}_{1},\lambda,a]},\mathfrak{B}_{2}, \Omega\right]\equiv\Psi_{0}(\lambda)\leq\sigma_{0,2}.\]
Although \(\Psi_{0}(\lambda)\) can take different values depending on the distribution of the patches where \(m(x)\) and \(c(x)\) vanish, this does not affect the analysis of (1.8), since the condition \(\mu>\sigma_{0,2}\) is necessary for the existence of coexistence states.
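The scaling identity (3.18), which drives both limits above, is elementary to verify numerically: if \(\theta\) solves \(\mathfrak{L}_{1}\theta=\lambda\theta-a\theta^{2}\), then \(\varepsilon^{-1}\theta\) solves the same problem with \(a\) replaced by \(\varepsilon a\). The sketch below is ours and merely illustrative.

```python
import numpy as np

def dirichlet_lap(n):
    # finite-difference -u'' on (0,1) with homogeneous Dirichlet conditions
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def theta(A, rho, xi, iters=4000):
    # maximal non-negative solution of A u = rho*u - xi*u**2 (monotone iteration)
    n = A.shape[0]
    rho = np.broadcast_to(rho, (n,)).astype(float)
    xi = np.broadcast_to(xi, (n,)).astype(float)
    c = max(rho.max() + 1.0, 1.0) / xi.min()
    M = 2.0 * xi.max() * c + np.abs(rho).max() + 1.0
    T = np.linalg.inv(A + M * np.eye(n))
    u = np.full(n, c)
    for _ in range(iters):
        u = T @ ((rho + M) * u - xi * u**2)
    return u

n = 200
L1 = dirichlet_lap(n)
lam, a = 20.0, 1.0                          # illustrative values
th_ref = theta(L1, lam, a)                  # theta_{[L1, lambda, a]}
for eps in (1.0, 0.5, 0.1, 0.01):
    th_eps = theta(L1, lam, eps * a)        # theta_{[L1, lambda, eps*a]}
    err = np.max(np.abs(eps * th_eps - th_ref))
    print(f"eps = {eps:5.2f}:  ||eps * theta_eps - theta||_inf = {err:.2e}")
```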
## 4 The coexistence states of the limiting system (1.8)
This section determines the set of coexistence states of the limiting shadow problem (1.8). Since the component \(v\) satisfies
\[\left\{\begin{aligned} &\mathfrak{L}_{2}v=\mu v-d(x)v^{2}&& \mbox{in}\;\;\Omega,\\ &\mathfrak{B}_{2}v=0&&\mbox{on}\;\;\partial\Omega,\end{aligned}\right.\]
the condition \(\mu>\sigma_{0,2}\equiv\sigma_{0}[\mathfrak{L}_{2},\mathfrak{B}_{2},\Omega]\) is imperative so that (1.8) can admit a coexistence state. Otherwise, \(v=0\) for any component-wise nonnegative solution, \((w,v)\), of (1.8). Thus, throughout this section, we assume that \(\mu>\sigma_{0,2}\). In such case, by Theorem 2.3, for every coexistence state \((w,v)\) of (1.8), necessarily \(v=\theta_{[\mathfrak{L}_{2},\mu,d]}\gg_{2}0\), and \(w\gg_{1}0\) is a positive solution of the associated problem
\[\left\{\begin{aligned} &\mathfrak{L}_{1}w=\lambda w-b(x)\theta_{[ \mathfrak{L}_{2},\mu,d]}\frac{w}{1+m(x)w}&&\mbox{in}\;\; \Omega,\\ &\mathfrak{B}_{1}w=0&&\mbox{on}\;\;\partial\Omega.\end{aligned}\right. \tag{4.1}\]
Note that, as soon as \(b(x)\) and \(m(x)\) have disjoint supports, i.e., \(bm=0\), one has that
\[\left(1-\chi_{{}_{\rm int\,supp\,}m}\right)b=\frac{b}{1+mw}\]
and, hence, (4.1) becomes the linear problem
\[\left\{\begin{aligned} &\left[\mathfrak{L}_{1}+\left(1-\chi_{{}_{\rm int \,supp\,}m}\right)b\theta_{[\mathfrak{L}_{2},\mu,d]}\right]w=\lambda w&& \mbox{in}\;\;\Omega,\\ &\mathfrak{B}_{1}w=0&&\mbox{on}\;\;\partial\Omega.\end{aligned}\right.\]
Therefore, when \(bm=0\), (4.1) has a positive solution if, and only if, \(\lambda=\varphi_{0}(\mu)\) (see (3.19)) and, in such case, \(w\) is a positive solution if, and only if, \(w=sw_{0}\) for some \(s>0\), where \(w_{0}\gg_{1}0\) stands for any principal eigenfunction associated to \(\lambda=\varphi_{0}(\mu)\). The next result collects some useful properties of (4.1).
**Lemma 4.1**.: _Suppose \(w\neq 0\) is a positive solution of (4.1). Then, \(w\gg_{1}0\) and_
\[\lambda=\sigma_{0}\Big{[}\mathfrak{L}_{1}+\tfrac{b(x)\theta_{[\mathfrak{L}_{2 },\mu,d]}}{1+m(x)w},\mathfrak{B}_{1},\Omega\Big{]}. \tag{4.2}\]
_Thus,_
\[\sigma_{0,1}\leq\varphi_{0}(\mu)\leq\lambda<\Phi(\mu), \tag{4.3}\]
_where \(\varphi_{0}(\mu)\) and \(\Phi(\mu)\) are the functions defined in (3.19) and (3.16), respectively. More precisely,_
\[\left\{\begin{aligned} &\sigma_{0,1}\leq\varphi_{0}(\mu)<\lambda<\Phi(\mu)\qquad\text{ if }\;bm\gtrsim 0,\\ &\sigma_{0,1}<\varphi_{0}(\mu)=\lambda<\Phi(\mu)\qquad\text{ if }\;bm=0.\end{aligned}\right.\]
_In other words, either \((\lambda,\mu)\) lies in the wedge between the curves \(\lambda=\varphi_{0}(\mu)\) and \(\lambda=\Phi(\mu)\) in Figure 2 if \(bm\gtrsim 0\), or \(\lambda=\varphi_{0}(\mu)\) if \(bm=0\)._
Proof.: Since
\[\left(\mathfrak{L}_{1}+\tfrac{b(x)\theta_{[\mathfrak{L}_{2},\mu,d]}}{1+m(x)w} \right)w=\lambda w\quad\text{in }\,\Omega,\]
and \(\mathfrak{B}_{1}w=0\), with \(w\gtrsim 0\), the identity (4.2) is a direct consequence of the uniqueness of \(\sigma_{0}\), and \(w\gg_{1}0\), by the properties of the positive principal eigenfunctions.
On the other hand, by our assumptions on \(m(x)\),
\[1-\chi_{{}_{\operatorname{int}\operatorname{supp}m}}\lesssim\frac{1}{1+mw} \quad\text{in }\,\Omega.\]
Thus, since \(b\theta_{[\mathfrak{L}_{2},\mu,d]}\gtrsim 0\), it is apparent that
\[0\leq\left(1-\chi_{{}_{\operatorname{int}\operatorname{supp}m}}\right)b \theta_{[\mathfrak{L}_{2},\mu,d]}\leq\frac{b\theta_{[\mathfrak{L}_{2},\mu,d]} }{1+mw}\lesssim b\theta_{[\mathfrak{L}_{2},\mu,d]}\quad\text{in }\,\Omega. \tag{4.4}\]
Therefore, by (4.4) and Theorem 2.1, (4.3) holds.
Now, note that \(\varphi_{0}(\mu)=\lambda\) unless \(b(x)>0\) for some \(x\in\operatorname{supp}m\). Thus, \(\varphi_{0}(\mu)<\lambda\) if \(bm\gtrsim 0\), and \(\varphi_{0}(\mu)=\lambda>\sigma_{0,1}\) if \(bm=0\). Moreover, in case \(bm\gtrsim 0\), we have that, for every \(\mu>\sigma_{0,2}\),
\[\varphi_{0}(\mu)=\sigma_{0}\left[\mathfrak{L}_{1}+\left(1-\chi_{{}_{ \operatorname{int}\operatorname{supp}m}}\right)b\theta_{[\mathfrak{L}_{2},\mu, d]},\mathfrak{B}_{1},\Omega\right]=\sigma_{0,1}\]
if, and only if,
\[\left(1-\chi_{{}_{\operatorname{int}\operatorname{supp}m}}\right)b=0,\]
and this occurs provided \(m(x)>0\) for all \(x\in\Omega\), like in the Holling-Tanner model, or \(m^{-1}(0)\subset b^{-1}(0)\neq\emptyset\), i.e., \(\operatorname{supp}\,b\subset\operatorname{supp}\,m\neq\emptyset\). This ends the proof.
It should not be forgotten that throughout this paper we are assuming that \(m=0\) in \(\bar{\Omega}_{0}\) but \(m\gtrsim 0\). Therefore, we are in a rather hybrid (different) situation between the Lotka-Volterra and the Holling-Tanner models.
Throughout the rest of this paper, we will assume that \(bm\gtrsim 0\), even when this is not explicitly stated. This entails that \(\varphi_{0}(\mu)<\lambda<\Phi(\mu)\) if (4.1) has some positive solution. The next result collects two important qualitative properties of the positive solutions of (4.1).
**Lemma 4.2**.: _Let \(\{(\lambda_{n},w_{n})\}_{n\geq 1}\) be a sequence of positive solutions of (4.1) such that_
\[\lim_{n\to+\infty}\lambda_{n}=\lambda_{*}.\]
_Then \(\lambda_{*}\in[\varphi_{0}(\mu),\Phi(\mu)]\). Moreover:_
* \(\lim_{n\to\infty}\|w_{n}\|_{\infty}=+\infty\) _if, and only if,_ \(\lambda_{*}=\varphi_{0}(\mu)\)_;_
* \(\lim_{n\to\infty}\|w_{n}\|_{\infty}=0\) _if, and only if,_ \(\lambda_{*}=\Phi(\mu)\)_._
Proof.: By (4.3), necessarily,
\[\varphi_{0}(\mu)<\lambda_{n}<\Phi(\mu)\quad\text{for all}\;\;n\geq 1. \tag{4.5}\]
Thus, letting \(n\to\infty\) yields \(\lambda_{*}\in[\varphi_{0}(\mu),\Phi(\mu)]\). Now, in order to prove the necessity of Part (a), suppose that
\[\lim_{n\to+\infty}\|w_{n}\|_{\infty}=+\infty. \tag{4.6}\]
Then, by expressing (4.1) as a fixed point equation and dividing by \(\|w_{n}\|_{\infty}\), it becomes apparent that, for every \(e>-\sigma_{0,1}\) and \(n\geq 1\),
\[\frac{w_{n}}{\|w_{n}\|_{\infty}}=(\mathfrak{L}_{1}+e)^{-1}\Big{[}(\lambda_{n} +e)\frac{w_{n}}{\|w_{n}\|_{\infty}}-b\theta_{[\mathfrak{L}_{2},\mu,d]}\frac{ w_{n}}{\|w_{n}\|_{\infty}(1+m\frac{w_{n}}{\|w_{n}\|_{\infty}}\|w_{n}\|_{ \infty})}\Big{]}. \tag{4.7}\]
Since the sequence of continuous functions
\[(\lambda_{n}+e)\frac{w_{n}}{\|w_{n}\|_{\infty}}-b\theta_{[\mathfrak{L}_{2}, \mu,d]}\frac{w_{n}}{\|w_{n}\|_{\infty}(1+m\frac{w_{n}}{\|w_{n}\|_{\infty}}\|w _{n}\|_{\infty})},\qquad n\geq 1,\]
is bounded in \(\mathcal{C}(\bar{\Omega})\) and \((\mathfrak{L}_{1}+e)^{-1}\) is a compact operator, it follows from (4.7) that there exists \(\psi\in\mathcal{C}^{1}_{\mathfrak{B}_{1}}(\bar{\Omega})\) such that, along some subsequence, labeled by \(n_{\ell}\),
\[\lim_{\ell\to+\infty}\frac{w_{n_{\ell}}}{\|w_{n_{\ell}}\|_{\infty}}=\psi\quad \text{in}\;\;\mathcal{C}^{1}_{\mathfrak{B}_{1}}(\bar{\Omega}). \tag{4.8}\]
By (4.8), \(\psi\geq 0\) and \(\|\psi\|_{\infty}=1\). Moreover, by elliptic regularity, particularizing (4.7) at \(n=n_{\ell}\) and letting \(\ell\to+\infty\) in the resulting identity, shows that \(\psi\in\mathscr{W}_{1}\) and that it solves the problem
\[\left\{\begin{aligned} &\left[\mathfrak{L}_{1}+(1-\chi_{{ \rm int\,supp}\,m})b\theta_{[\mathfrak{L}_{2},\mu,d]}\right]\psi=\lambda_{*} \psi&\quad\text{in}\;\;\Omega,\\ &\mathfrak{B}_{1}\psi=0&\quad\text{on}\;\;\partial \Omega.\end{aligned}\right. \tag{4.9}\]
From (4.9) it is apparent that \(\psi\gg_{1}0\) and that
\[\lambda_{*}=\sigma_{0}\left[\mathfrak{L}_{1}+(1-\chi_{{}_{\rm int\,supp\,}m})b \theta_{[\mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega\right]\equiv\varphi_{ 0}(\mu).\]
This shows that, indeed, \(\lambda_{*}=\varphi_{0}(\mu)\) if (4.6) holds.
Adapting the previous argument, it is easily seen that
\[\lim_{n\to+\infty}\|w_{n}\|_{\infty}=0 \tag{4.10}\]
guarantees the existence of some \(\psi\in\mathscr{W}_{1}\), with \(\|\psi\|_{\infty}=1\), such that
\[\left\{\begin{aligned} &\left(\mathfrak{L}_{1}+b\theta_{[ \mathfrak{L}_{2},\mu,d]}\right)\psi=\lambda_{*}\psi&\text{in}\; \;\Omega,\\ &\mathfrak{B}_{1}\psi=0&\text{on}\;\;\partial \Omega.\end{aligned}\right.\]
Therefore, (4.10) implies that
\[\lambda_{*}=\sigma_{0}[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]}, \mathfrak{B}_{1},\Omega]\equiv\Phi(\mu),\]
which ends the proof of the necessity in Part (b).
To prove the sufficiency in Part (a), assume that \(\lambda_{*}=\varphi_{0}(\mu)\) and that (4.6) fails. Then, there exists a constant, \(C>0\), such that, along some subsequence of \(\{w_{n}\}_{n\geq 1}\),
\[\|w_{n_{\ell}}\|_{\infty}\leq C\quad\text{for all}\;\;\ell\geq 1. \tag{4.11}\]
By the necessity of Part (b), \(\{w_{n_{\ell}}\}_{n\geq 1}\) cannot admit any subsequence converging to zero in \(\mathcal{C}(\bar{\Omega})\), because, in such case, \(\varphi_{0}(\mu)=\lambda_{*}=\Phi(\mu)\), which contradicts \(\varphi_{0}(\mu)<\Phi(\mu)\) (see (4.5)). On the other hand, since
\[w_{n_{\ell}}=(\mathfrak{L}_{1}+e)^{-1}\left[(\lambda_{n_{\ell}}+e)w_{n_{\ell} }-b\theta_{[\mathfrak{L}_{2},\mu,d]}\frac{w_{n_{\ell}}}{1+mw_{n_{\ell}}} \right]\quad\text{for all}\;\;\ell\geq 1 \tag{4.12}\]
and, due to (4.11), the sequence
\[(\lambda_{n_{\ell}}+e)w_{n_{\ell}}-b\theta_{[\mathfrak{L}_{2},\mu,d]}\frac{w _{n_{\ell}}}{1+mw_{n_{\ell}}},\qquad\ell\geq 1,\]
is bounded, by the compactness of \((\mathfrak{L}_{1}+e)^{-1}\), we can extract a subsequence of \(\{w_{n_{\ell}}\}_{n\geq 1}\), relabeled by \(n_{\ell}\), such that, for some \(\Psi\in\mathcal{C}^{1}_{\mathfrak{B}_{1}}(\bar{\Omega})\),
\[\lim_{\ell\to\infty}w_{n_{\ell}}=\Psi\quad\text{in}\;\;\mathcal{C}^{1}_{ \mathfrak{B}_{1}}(\bar{\Omega}). \tag{4.13}\]
As we already know that \(\{w_{n_{\ell}}\}_{n\geq 1}\) cannot converge to zero in \(\mathcal{C}(\bar{\Omega})\), it becomes apparent that \(\Psi\gtrsim 0\). Moreover, letting \(\ell\to\infty\) in (4.12) shows that
\[\Psi=(\mathfrak{L}_{1}+e)^{-1}\left[(\varphi_{0}(\mu)+e)\Psi-b\theta_{[\mathfrak{L}_{2},\mu,d]}\frac{\Psi}{1+m\Psi}\right]. \tag{4.14}\]
By elliptic regularity, it follows from (4.14) that \(\Psi\in\mathscr{W}_{1}\) and that it provides us with a positive solution of
\[\left\{\begin{aligned} &\left(\mathfrak{L}_{1}+\frac{b\theta_{[ \mathfrak{L}_{2},\mu,d]}}{1+m\Psi}\right)\Psi=\varphi_{0}(\mu)\Psi& \quad\text{in}\;\;\Omega,\\ &\mathfrak{B}_{1}\Psi=0&\quad\text{on}\;\;\partial \Omega.\end{aligned}\right.\]
Therefore, \(\Psi\gg_{1}0\) and, by the uniqueness of \(\sigma_{0}\), it becomes apparent that
\[\varphi_{0}(\mu)=\sigma_{0}\left[\mathfrak{L}_{1}+\tfrac{b\theta_{[ \mathfrak{L}_{2},\mu,d]}}{1+m\Psi},\mathfrak{B}_{1},\Omega\right]. \tag{4.15}\]
On the other hand, since \(bm\gtrsim 0\),
\[\left(1-\chi_{{}_{\rm int\,supp\,}m}\right)b\lesssim\tfrac{b}{1+m\Psi}\quad \text{in}\;\;\Omega,\]
it follows from Theorem 2.1 and the definition of \(\varphi_{0}(\mu)\) that, for every \(\mu>\sigma_{0,2}\),
\[\varphi_{0}(\mu):=\sigma_{0}\left[\mathfrak{L}_{1}+\left(1-\chi_{{}_{\rm int \,supp\,}m}\right)b\theta_{[\mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega \right]<\sigma_{0}\left[\mathfrak{L}_{1}+\tfrac{b\theta_{[\mathfrak{L}_{2},\mu, d]}}{1+m\Psi},\mathfrak{B}_{1},\Omega\right].\]
As this estimate contradicts (4.15), (4.11) fails. Therefore, (4.6) holds. This ends the proof of Part (a).
To complete the proof of Part (b), suppose that \(\lambda_{*}=\Phi(\mu)\). Then, since \(\varphi_{0}(\mu)<\Phi(\mu)\) for all \(\mu>\sigma_{0,2}\), it follows from Part (a) that there exists a constant \(C>0\) such that
\[\|w_{n}\|_{\infty}\leq C\quad\text{for all}\;\;n\geq 1. \tag{4.16}\]
In such case, adapting the previous compactness arguments, it becomes apparent that there exist \(\Psi\in\mathscr{W}_{1}\), with \(\Psi\geq 0\), and a subsequence of \(\{w_{n}\}_{n\geq 1}\), relabeled by \(n_{\ell}\), \(\ell\geq 1\), such that (4.13) holds. Thus, since \(\lambda_{*}=\Phi(\mu)\), necessarily
\[\left\{\begin{aligned} &\left(\mathfrak{L}_{1}+\tfrac{b\theta_{[\mathfrak{L}_{2},\mu,d]}}{1+m\Psi}\right)\Psi=\Phi(\mu)\Psi&&\text{in}\;\;\Omega,\\ &\mathfrak{B}_{1}\Psi=0&&\text{on}\;\;\partial\Omega.\end{aligned}\right. \tag{4.17}\]
Suppose that \(\Psi\gtrsim 0\). Then, \(\Psi\gg_{1}0\) and (4.17) implies that
\[\Phi(\mu)=\sigma_{0}\left[\mathfrak{L}_{1}+\tfrac{b\theta_{[\mathfrak{L}_{2},\mu,d]}}{1+m\Psi},\mathfrak{B}_{1},\Omega\right]. \tag{4.18}\]

On the other hand, by Theorem 2.1 and the definition of \(\Phi(\mu)\), we have that

\[\Phi(\mu)\equiv\sigma_{0}\left[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega\right]>\sigma_{0}\left[\mathfrak{L}_{1}+\tfrac{b\theta_{[\mathfrak{L}_{2},\mu,d]}}{1+m\Psi},\mathfrak{B}_{1},\Omega\right],\]
which contradicts (4.18). Thus, \(\Psi=0\) and hence,
\[\lim_{\ell\to\infty}w_{n_{\ell}}=0\quad\text{in}\;\;\mathcal{C}_{\mathfrak{B} _{1}}^{1}(\bar{\Omega}). \tag{4.19}\]
As this argument can be repeated along any subsequence of \(\{w_{n}\}_{n\geq 1}\), Part (b) holds. This ends the proof of the lemma.
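The two alternatives of Lemma 4.2 reflect the monotone dependence of the principal eigenvalue in (4.2) on \(\|w\|_{\infty}\). This can be visualized numerically. The following sketch is ours and purely illustrative; it adopts the classical Holling-Tanner regime \(m(x)>0\) for all \(x\) (so that \(\varphi_{0}(\mu)=\sigma_{0,1}\)), which departs from the dead-core setting of this paper, and evaluates the right-hand side of (4.2) along the test profiles \(w_{t}=t\sin(\pi x)\); the resulting values decay from \(\Phi(\mu)\) towards \(\varphi_{0}(\mu)\) as \(t\uparrow\infty\).

```python
import numpy as np

def dirichlet_lap(n):
    # finite-difference -u'' on (0,1) with homogeneous Dirichlet conditions
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def theta(A, rho, xi, iters=4000):
    # maximal non-negative solution of A u = rho*u - xi*u**2 (monotone iteration)
    n = A.shape[0]
    rho = np.broadcast_to(rho, (n,)).astype(float)
    xi = np.broadcast_to(xi, (n,)).astype(float)
    c = max(rho.max() + 1.0, 1.0) / xi.min()
    M = 2.0 * xi.max() * c + np.abs(rho).max() + 1.0
    T = np.linalg.inv(A + M * np.eye(n))
    u = np.full(n, c)
    for _ in range(iters):
        u = T @ ((rho + M) * u - xi * u**2)
    return u

n = 200
x = np.linspace(0, 1, n + 2)[1:-1]
L1 = L2 = dirichlet_lap(n)
mu = 15.0
th2 = theta(L2, mu, np.ones(n))             # theta_{[L2, mu, d]} with d = 1
b = m = np.ones(n)                          # m > 0 everywhere here
phi0 = np.linalg.eigvalsh(L1)[0]            # phi_0(mu) = sigma_{0,1} here
Phi = np.linalg.eigvalsh(L1 + np.diag(b * th2))[0]
print(f"phi_0(mu) = {phi0:.4f},  Phi(mu) = {Phi:.4f}")
for t in (1e-2, 1e0, 1e2, 1e4):
    w = t * np.sin(np.pi * x)
    lam = np.linalg.eigvalsh(L1 + np.diag(b * th2 / (1 + m * w)))[0]
    print(f"||w||_inf = {t:7.0e}:  sigma_0 in (4.2) = {lam:.4f}")
```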
Particularizing (3.8) at \(\varepsilon=0\) yields
\[\mathfrak{F}_{0}(\lambda,\mu,w,v)\equiv\mathfrak{F}(\lambda,\mu,0,w,v):=\left( \begin{array}{c}w-(\mathfrak{L}_{1}+e)^{-1}\left[(\lambda+e)w-b\frac{wv}{1+mw} \right]\\ v-(\mathfrak{L}_{2}+e)^{-1}\left[(\mu+e)v-dv^{2}\right]\end{array}\right). \tag{4.20}\]
As the \(v\)-component of (4.20) vanishes at \(v=\theta_{[\mathfrak{L}_{2},\mu,d]}\), it becomes apparent that \(w\) is a positive solution of (4.1) if, and only if, the \(w\)-component of
\[\mathfrak{F}_{0}(\lambda,\mu,w,\theta_{[\mathfrak{L}_{2},\mu,d]}):=\left( \begin{array}{c}w-(\mathfrak{L}_{1}+e)^{-1}\left[(\lambda+e)w-b\theta_{[ \mathfrak{L}_{2},\mu,d]}\frac{w}{1+mw}\right]\\ 0\end{array}\right),\]
vanishes. Naturally, this is also a rather direct consequence of (4.1). Therefore, when applying Theorem 3.2 to (1.8) at \(\varepsilon=0\) it becomes apparent that \(v(s)=\theta_{[\mathfrak{L}_{2},\mu,d]}\) and that \(\mathfrak{F}_{0}(\lambda(s),\mu,w(s),\theta_{[\mathfrak{L}_{2},\mu,d]})=0\) for all \(s\in(-\delta,\delta)\). Moreover, particularizing (3.9) at \(\varepsilon=0\) provides us with
\[\lambda^{\prime}(0)=-\int_{\Omega}bm\theta_{[\mathfrak{L}_{2},\mu,d]}w_{1}^{2}w_{1}^{*}<0. \tag{4.21}\]
Therefore, there is a bifurcation to positive solutions of (4.1) from \((w,v)=(0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) at \(\lambda=\Phi(\mu)\) and the bifurcation is subcritical, because of (4.21), or (4.3). Naturally, this entails the existence of an \(\varepsilon_{0}>0\) such that \(\lambda^{\prime}(0)<0\) in (1.7) if \(|\varepsilon|<\varepsilon_{0}\).
Subsequently, we denote by \(\mathscr{S}_{0}\) the set of nontrivial solutions of (4.1) defined by
\[\mathscr{S}_{0}:=\{(\lambda,\mu,w,\theta_{[\mathfrak{L}_{2},\mu,d]})\in \mathfrak{F}_{0}^{-1}(0)\,:\,w\neq 0\}\cup\{(\lambda,\mu,0,\theta_{[\mathfrak{L}_{2 },\mu,d]})\,:\,\lambda\in\Sigma(\mathscr{L}(\lambda,0))\},\]
where \(\Sigma(\mathscr{L}(\lambda,0))\) stands for the generalized spectrum of the Fredholm curve \(\mathscr{L}(\lambda,0)\) introduced in the proof of Theorem 3.2, and \(\mathscr{C}_{0}^{+}\) stands for the subcomponent of positive solutions of \(\mathscr{S}_{0}\) such that \((\Phi(\mu),\mu,0,\theta_{[\mathfrak{L}_{2},\mu,d]})\in\bar{\mathscr{C}}_{0}^{+}\). The next result provides us with some useful properties of \(\mathscr{C}_{0}^{+}\). We are denoting by \(\mathcal{P}_{\lambda}\) the \(\lambda\)-projection operator,
\[\mathcal{P}_{\lambda}(\lambda,\mu,w,\theta_{[\mathfrak{L}_{2},\mu,d]})=\lambda.\]
**Theorem 4.1**.: _The component \(\mathscr{C}_{0}^{+}\) satisfies_
\[\mathcal{P}_{\lambda}(\mathscr{C}_{0}^{+})=(\varphi_{0}(\mu),\Phi(\mu)). \tag{4.22}\]
_Moreover, for every sequence of positive solutions in \(\mathscr{C}_{0}^{+}\), \(\{(\lambda_{n},\mu,w_{n},\theta_{[\mathfrak{L}_{2},\mu,d]})\}_{n\geq 1}\), such that \(\lim_{n\to\infty}\lambda_{n}=\varphi_{0}(\mu)\), necessarily_
\[\lim_{n\to\infty}\|w_{n}\|_{\infty}=+\infty. \tag{4.23}\]
_In other words, \(\mathscr{C}_{0}^{+}\) is unbounded at \(\lambda=\varphi_{0}(\mu)\)._
Proof.: Owing to Lemma 4.2(b), \((\Phi(\mu),\mu,0,\theta_{[\mathfrak{L}_{2},\mu,d]})\) is the unique bifurcation point to positive solutions from \((\lambda,\mu,0,\theta_{[\mathfrak{L}_{2},\mu,d]})\). The existence of \(\mathscr{C}_{0}^{+}\) follows from Theorem 3.2 and the Zorn-Kuratowski lemma. By [21, Th.7.1.3], \(\mathscr{C}_{0}^{+}\) is unbounded in \(\mathbb{R}^{2}\times\mathcal{C}(\bar{\Omega})\). Since \(\mu>\sigma_{0,2}\) is fixed and, due to Lemma 4.1, \(\lambda\in(\varphi_{0}(\mu),\Phi(\mu))\) if \(bm\gtrsim 0\), \(\mathscr{C}_{0}^{+}\) must be unbounded in \(w\). Thus, thanks to Lemma 4.2(a), (4.22) and (4.23) hold.
The next result provides us with the fine structure of the component \(\mathscr{C}_{0}^{+}\) near \(\lambda=\Phi(\mu)\) and \(\lambda=\varphi_{0}(\mu)\). It is a pivotal result in getting the main multiplicity result of this paper for (1.8) with sufficiently small \(\varepsilon>0\).
**Theorem 4.2**.: _In a neighborhood of \((\lambda,\mu,w,\theta_{[\mathfrak{L}_{2},\mu,d]})=(\Phi(\mu),\mu,0,\theta_{[ \mathfrak{L}_{2},\mu,d]})\) in \(\mathbb{R}\times\mathbb{R}\times\mathscr{W}_{1}\times\{\theta_{[\mathfrak{L}_ {2},\mu,d]}\}\), \(\mathscr{C}_{0}^{+}\) consists of the analytic curve \((\lambda(s),\mu,w(s),\theta_{[\mathfrak{L}_{2},\mu,d]})\) given by Theorem 3.2. Moreover, the following properties are satisfied:_
* _For sufficiently small_ \(r>0\) _and every_ \(\lambda\in[\Phi(\mu)-r,\Phi(\mu))\)_, (_4.1_) has a unique positive solution, which is linearly unstable with one-dimensional unstable manifold._
* _There exists_ \(r>0\) _such that, for every_ \(\lambda\in(\varphi_{0}(\mu),\varphi_{0}(\mu)+r]\)_, (_4.1_) has a unique positive solution,_ \((\lambda,\mu,w_{\lambda},\theta_{[\mathfrak{L}_{2},\mu,d]})\)_, which is non-degenerate. Thus, for these values of_ \(\lambda\)_,_ \(\mathscr{C}_{0}^{+}\) _consists of an analytic curve of positive solutions bifurcating from_ \(+\infty\) _at_ \(\lambda=\varphi_{0}(\mu)\)_, in the sense that_ \[\lim_{\lambda\downarrow\varphi_{0}(\mu)}w_{\lambda}(x)=+\infty\quad\text{for all}\;\;x\in\Omega.\] (4.24)
_Furthermore, these solutions have local Poincare index \(-1\), calculated through the Leray-Schauder degree._
Proof.: According to (4.21), \(\mathscr{C}_{0}^{+}\) bifurcates subcritically from \(w=0\) at \(\lambda=\Phi(\mu)\). Combining this feature with the uniqueness of the bifurcated curve in Theorem 3.2 and Lemma 4.2(b), the existence of an \(r>0\) such that (4.1) has a unique solution for each \(\lambda\in[\Phi(\mu)-r,\Phi(\mu))\) becomes apparent. The fact that \(\mathscr{C}_{0}^{+}\) is analytic for \(\lambda\) sufficiently close to \(\Phi(\mu)\) is a byproduct of Theorem 3.2, since \(\mathscr{C}_{0}^{+}\) can be parameterized by \(\lambda\), and \(\mathfrak{F}\), or \(\mathfrak{F}_{0}\), is an analytic function of \(\lambda\). Furthermore, since \(\lambda^{\prime}(0)<0\), it follows from Theorem 3.3 that, for sufficiently small \(r>0\) and every \(\lambda\in[\Phi(\mu)-r,\Phi(\mu))\), the positive solution \((\lambda,\mu,w(\lambda))\) is linearly unstable with one-dimensional unstable manifold. In particular, by the Schauder formula, its local index as a fixed point of the compact operator \(I-\mathfrak{F}_{0}\) equals \(-1\).
On the other hand, by Lemma 4.2 (a), for every \(\lambda\in(\varphi_{0}(\mu),\Phi(\mu))\), there exists \(M_{\lambda}>0\) such that any positive solution, \((\tilde{\lambda},\tilde{w})\), of (4.1) with \(\tilde{\lambda}\in[\lambda,\Phi(\mu))\) satisfies
\[\tilde{w}\in W_{\lambda}:=\left\{w\in\mathscr{W}_{1}\,:\,0<\|w\|_{\infty}<M_{ \lambda}\right\}.\]
Thus, combining the homotopy invariance with the excision property of the Leray-Schauder degree, it becomes apparent that
\[\operatorname{Deg}\left(\mathfrak{F}_{0}(\tilde{\lambda},\mu,\cdot),W_{ \lambda}\right)=\operatorname{Deg}\left(\mathfrak{F}_{0}(\Phi(\mu)-\tfrac{r}{ 2},\mu,\cdot),W_{\lambda}\right)=-1\]
for all \(\tilde{\lambda}\in[\lambda,\Phi(\mu))\). In particular,
\[\operatorname{Deg}\left(\mathfrak{F}_{0}(\lambda,\mu,\cdot),W_{\lambda}\right)=-1\quad\text{for all}\;\;\lambda\in(\varphi_{0}(\mu),\Phi(\mu)). \tag{4.25}\]
Therefore, for every \(\lambda\in(\varphi_{0}(\mu),\Phi(\mu))\), the total sum of the local Poincare indices of the (4.1) positive solutions, calculated through the Leray-Schauder degree, equals \(-1\).
Subsequently, we will carry out the (sharp) analysis of \(\mathscr{C}_{0}^{+}\) in a neighborhood of \(\lambda=\varphi_{0}(\mu)\). Let \(\{(\lambda_{n},w_{n})\}_{n\geq 1}\) be a sequence of positive solutions of \(\mathscr{C}_{0}^{+}\) such that
\[\lim_{n\to+\infty}\lambda_{n}=\varphi_{0}(\mu). \tag{4.26}\]
Then, by Lemma 4.2(a), we already know that
\[\lim_{n\to\infty}\|w_{n}\|_{\infty}=+\infty.\]
Note that, in particular, this implies that \(\lim_{n\to\infty}M_{\lambda_{n}}=+\infty\). Moreover, according to the proof of Lemma 4.2(a), there exists a subsequence, labeled again by \(n\), such that
\[\lim_{n\to+\infty}\frac{w_{n}}{||w_{n}||_{\infty}}=\psi\]
for some \(\psi\in\mathscr{W}_{1}\) solving (4.9). Since \(\psi\gg_{1}0\) is a principal eigenfunction associated with \(\varphi_{0}(\mu)\), it becomes apparent that
\[\lim_{n\to+\infty}w_{n}(x)=+\infty\quad\mbox{for all }\;x\in\Omega. \tag{4.27}\]
As this holds for every sequence of positive solutions, once the uniqueness of \(w_{\lambda}\) is established, (4.24) holds. In order to prove the uniqueness of the positive solution for \(\lambda\) in a right-neighborhood of \(\varphi_{0}(\mu)\), we will show that, for sufficiently large \(n\), \((\lambda_{n},w_{n})\) must be non-degenerate with a one-dimensional unstable manifold. Thanks again to the Schauder formula, this entails that the local index of these positive solutions equals \(-1\) and therefore, combining (4.25) with the additivity property of the Leray-Schauder degree, (4.1) has a unique positive solution for \(\lambda\) sufficiently close to \(\varphi_{0}(\mu)\), denoted by \((\lambda,\mu,w_{\lambda},\theta_{[\mathfrak{L}_{2},\mu,d]})\) in the statement of the theorem. According to (4.22), necessarily \((\lambda,\mu,w_{\lambda},\theta_{[\mathfrak{L}_{2},\mu,d]})\in\mathscr{C}_{0}^{+}\) for \(\lambda\thicksim\varphi_{0}(\mu)\).
The spectrum of the linearization of \(\mathfrak{F}_{0}\) at \((\lambda_{n},\mu,w_{n},\theta_{[\mathfrak{L}_{2},\mu,d]})\) is given by the eigenvalues of the boundary value problem
\[\left\{\begin{aligned} &\left(\mathfrak{L}_{1}+b\frac{\theta_{[ \mathfrak{L}_{2},\mu,d]}}{(1+mw_{n})^{2}}-\lambda_{n}\right)w=\tau w& \mbox{ in }\;\Omega,\\ &\mathfrak{B}_{1}w=0&\mbox{ on }\;\partial\Omega. \end{aligned}\right.\]
Since \(1+mw_{n}\geq 1\) for all \(n\geq 1\), it follows from Theorem 2.1 and the identity (4.2) applied to \((\lambda,w)=(\lambda_{n},w_{n})\) that, for every \(n\geq 1\),
\[\tau_{0}(n)\equiv\sigma_{0}\left[\mathfrak{L}_{1}+b\frac{\theta_{[\mathfrak{L} _{2},\mu,d]}}{(1+mw_{n})^{2}}-\lambda_{n},\mathfrak{B}_{1},\Omega\right]< \sigma_{0}\left[\mathfrak{L}_{1}+b\frac{\theta_{[\mathfrak{L}_{2},\mu,d]}}{1 +mw_{n}},\mathfrak{B}_{1},\Omega\right]-\lambda_{n}=0. \tag{4.28}\]
On the other hand, it follows from (4.27) that
\[\lim_{n\to+\infty}\frac{b\theta_{[\mathfrak{L}_{2},\mu,d]}}{(1+mw_{n})^{2}}=\left( 1-\chi_{{}_{\rm int\,supp\,}m}\right)b\theta_{[\mathfrak{L}_{2},\mu,d]}.\]
Thus, thanks to (3.19) and (4.26), by letting \(n\to\infty\) in (4.28), we find that
\[\lim_{n\to+\infty}\tau_{0}(n)=\varphi_{0}(\mu)-\varphi_{0}(\mu)=0, \tag{4.29}\]
though, due to (4.28), \(\tau_{0}(n)<0\) for all \(n\geq 1\). Similarly, by the strict dominance of the principal eigenvalues, any other eigenvalue, say \(\tau_{j}(n)\), \(j\geq 1\), satisfies
\[\lim_{n\to\infty}\operatorname{Re}\tau_{j}(n)=\operatorname{Re}\sigma_{j} \left[\mathfrak{L}_{1}+\left(1-\chi_{{}_{\rm int\,supp\,}m}\right)b\theta_{[ \mathfrak{L}_{2},\mu,d]},\mathfrak{B}_{1},\Omega\right]-\varphi_{0}(\mu)>0.\]
Therefore, there exists \(r>0\) such that any positive solution, \((\lambda,w)\), of (4.1) with \(\lambda\in(\varphi_{0}(\mu),\varphi_{0}(\mu)+r]\) is non-degenerate with one-dimensional unstable manifold. This ends the proof.
Figure 3 shows an admissible component \(\mathscr{C}_{0}^{+}\) of positive solutions of (4.1) consistent with Theorems 4.1 and 4.2. Although (4.1) has a unique positive solution for \(\lambda\) sufficiently close to either \(\Phi(\mu)\) or \(\varphi_{0}(\mu)\), the problem might possess an arbitrarily large number of positive solutions for some intermediate range of values of the parameter \(\lambda\), as illustrated in Figure 3. Actually, besides \(\mathscr{C}_{0}^{+}\), (4.1) might have some additional component of positive solutions not plotted in the figure. In spite of all these circumstances, thanks to Theorems 4.1 and 4.2, near the ends of the existence interval, \((\varphi_{0}(\mu),\Phi(\mu))\), the unique positive solution of (4.1) must be unstable with one-dimensional unstable manifold. It remains an open problem to analyze the fine structure of the global bifurcation diagram.
## 5 An optimal multiplicity result for the original model
The next multiplicity result is the main theorem of this section. Remember that, owing to Theorem 3.4, for every \(\mu>\sigma_{0,2}\), (1.7) has a coexistence state if \(\lambda>\Phi(\mu)\). Moreover, in such case, \(\lambda>\varphi_{\varepsilon}(\mu)\), because \(\Phi(\mu)>\varphi_{\varepsilon}(\mu)\).
**Theorem 5.1**.: _Fix \(\lambda^{*}\in(\varphi_{0}(\mu),\Phi(\mu))\). Then, there exists \(\varepsilon_{0}\equiv\varepsilon_{0}(\lambda^{*})>0\) such that, for every \(\varepsilon\in(0,\varepsilon_{0})\), (1.7) possesses a component \(\mathscr{C}_{\varepsilon}^{+}\) of coexistence states satisfying the following properties:_
* \(\mathcal{P}_{\lambda}\left(\mathscr{C}_{\varepsilon}^{+}\right)=[\lambda_{T}, +\infty)\) _for some_ \(\lambda_{T}\equiv\lambda_{T}(\varepsilon)\in(\varphi_{\varepsilon}(\mu), \lambda^{*})\)_._
* _For every_ \(\lambda\in[\lambda^{*},\Phi(\mu))\)_, (_1.7_) has, at least, two (different) coexistence states._
* \(\mathscr{C}_{\varepsilon}^{+}\) _is an analytic_ \(\lambda\)_-curve in a neighborhood of_ \((\lambda,\mu,w,v)=(\Phi(\mu),\mu,0,\theta_{[\mathfrak{L}_{2},\mu,d]})\)_._
Naturally, \(\mathscr{C}_{\varepsilon}^{+}\) is the perturbation of the component \(\mathscr{C}_{0}^{+}\) constructed in Section 4 as \(\varepsilon>0\) leaves \(\varepsilon=0\). It turns out that, as \(\varepsilon\) perturbs from zero, the component \(\mathscr{C}_{0}^{+}\) bends backwards towards the right providing us with a perturbed component like the one sketched in Figure 4.
The proof of Theorem 5.1 is based on Theorem 4.2, Theorem 7.2.2 of [21], and on the existence of a priori bounds for the coexistence states of (1.7) established by the following lemma.
**Lemma 5.1**.: _Suppose \(\varepsilon>0\) and let \((w,v)\) be a coexistence state of (1.7). Then,_
\[0\ll_{1}w\ll_{1}\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]},\qquad\theta_{[\mathfrak{L}_{2},\mu,d]}\ll_{2}v\ll_{2}\theta_{\left[\mathfrak{L}_{2},\,\mu+\varepsilon c\frac{\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}}{1+m\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}},\,d\right]}. \tag{5.1}\]
Proof.: Since \(w\gneq 0\), by the uniqueness of the principal eigenvalue, it follows from the \(w\)-equation of (1.7) that \(w\gg_{1}0\) and that
\[\lambda=\sigma_{0}\left[\mathfrak{L}_{1}+\varepsilon aw+b\frac{v}{1+mw}, \mathfrak{B}_{1},\Omega\right].\]
Moreover, it follows from \(w\gg_{1}0\) that
\[\mathfrak{L}_{1}w=\lambda w-\varepsilon aw^{2}-b\frac{wv}{1+mw}\gneq\lambda w -\varepsilon aw^{2}.\]
Thus, \(w\) is a positive strict subsolution of the problem
\[\left\{\begin{aligned} &\mathfrak{L}_{1}w=\lambda w-\varepsilon a(x)w^{2 }&\text{ in }\;\Omega,\\ &\mathfrak{B}_{1}w=0&\text{ on }\;\partial\Omega.\end{aligned}\right.\]
Hence, since \(\lambda>\sigma_{0,1}\), it follows from Theorem 2.3 that
\[w\ll_{1}\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}. \tag{5.2}\]
This completes the proof of the first two estimates of (5.1). Similarly, by (5.2),
\[\mu v-dv^{2}\lneq\mathfrak{L}_{2}v=\mu v-dv^{2}+\varepsilon c\frac{wv}{1+mw}\leq\left(\mu+\varepsilon c\frac{\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}}{1+m\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}}\right)v-dv^{2},\]
which implies that \(v\) is a positive strict supersolution of
\[\left\{\begin{aligned} &\mathfrak{L}_{2}v=\mu v-dv^{2}& \text{ in }\;\Omega,\\ &\mathfrak{B}_{2}v=0&\text{ on }\;\partial\Omega,\end{aligned}\right.\]
as well as a positive strict subsolution of
\[\left\{\begin{aligned} &\mathfrak{L}_{2}v=\mu v-dv^{2}+\varepsilon c \frac{\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}v}{1+m\theta_{[ \mathfrak{L}_{1},\lambda,\varepsilon a]}}&\text{ in }\;\Omega,\\ &\mathfrak{B}_{2}v=0&\text{ on }\;\partial\Omega.\end{aligned}\right.\]
Therefore, the last two estimates of (5.1) also follow from Theorem 2.3.
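Since the envelopes in (5.1) only involve the logistic states of Theorem 2.3, they can be computed explicitly in the one-dimensional Dirichlet model. The sketch below is ours and merely illustrative: it evaluates \(\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}\), \(\theta_{[\mathfrak{L}_{2},\mu,d]}\) and the upper envelope for \(v\) (computed with the spatially varying growth rate \(\mu+\varepsilon c\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]}/(1+m\theta_{[\mathfrak{L}_{1},\lambda,\varepsilon a]})\)), and checks the strict pointwise ordering predicted by the monotonicity in Theorem 2.3(a).

```python
import numpy as np

def dirichlet_lap(n):
    # finite-difference -u'' on (0,1) with homogeneous Dirichlet conditions
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def theta(A, rho, xi, iters=4000):
    # maximal non-negative solution of A u = rho*u - xi*u**2 (monotone
    # iteration); rho may be a node-wise array, as needed for (5.1)
    n = A.shape[0]
    rho = np.broadcast_to(rho, (n,)).astype(float)
    xi = np.broadcast_to(xi, (n,)).astype(float)
    c = max(rho.max() + 1.0, 1.0) / xi.min()
    M = 2.0 * xi.max() * c + np.abs(rho).max() + 1.0
    T = np.linalg.inv(A + M * np.eye(n))
    u = np.full(n, c)
    for _ in range(iters):
        u = T @ ((rho + M) * u - xi * u**2)
    return u

n = 200
x = np.linspace(0, 1, n + 2)[1:-1]
L1 = L2 = dirichlet_lap(n)
lam, mu, eps = 16.0, 15.0, 0.5              # illustrative values
a = c_ = d = 1.0
m = (np.abs(x - 0.5) > 0.1).astype(float)   # dead core Omega_0 = (0.4, 0.6)
th1 = theta(L1, lam, eps * a)               # upper envelope for w in (5.1)
th2 = theta(L2, mu, d)                      # lower envelope for v in (5.1)
rho_up = mu + eps * c_ * th1 / (1 + m * th1)
th2_up = theta(L2, rho_up, d)               # upper envelope for v in (5.1)
ok = bool(np.all(th2 > 0) and np.all(th2 < th2_up))
print(f"0 < theta_2 < upper envelope pointwise: {ok}")
print(f"max theta_1 = {th1.max():.3f},  "
      f"max gap = {(th2_up - th2).max():.3f}")
```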
The rest of this section is devoted to the proof of Theorem 5.1. Throughout it, we fix \(\mu>\sigma_{0,2}\) and \(\lambda^{*}\in(\varphi_{0}(\mu),\Phi(\mu))\), consider a sufficiently small \(r>0\) satisfying the conclusions of Theorem 4.2, and pick \(\lambda_{0},\lambda_{1}\in(\varphi_{0}(\mu),\Phi(\mu))\) such that
\[\lambda_{0}<\varphi_{0}(\mu)+r<\lambda^{*}<\Phi(\mu)-r<\lambda_{1}<\Phi(\mu).\]
Naturally, \(r>0\) can be shortened as much as necessary. Moreover, for every \(t,s\in(\varphi_{0}(\mu),\Phi(\mu))\) with \(t<s\), we denote by \(\mathscr{C}^{+}_{0,[t,s]}\) the restriction of the component \(\mathscr{C}^{+}_{0}\) to the interval \([t,s]\), i.e.,
\[\mathscr{C}^{+}_{0,[t,s]}\equiv\left\{(\lambda,\mu,w,\theta_{[\mathfrak{L}_{2 },\mu,d]})\in\mathscr{C}^{+}_{0}\ :\ \lambda\in[t,s]\right\}.\]
By the choice of \(\lambda_{0}\) and \(\lambda_{1}\), Theorem 4.2 guarantees that \(\mathscr{C}^{+}_{0,[\lambda_{0},\lambda_{1}]}\) has a unique non-degenerate positive solution for every
\[\lambda\in[\lambda_{0},\varphi_{0}(\mu)+r]\cup[\Phi(\mu)-r,\lambda_{1}]. \tag{5.3}\]
Actually, by the implicit function theorem, each of the components \(\mathscr{C}^{+}_{0,[\lambda_{0},\varphi_{0}(\mu)+r]}\) and \(\mathscr{C}^{+}_{0,[\Phi(\mu)-r,\lambda_{1}]}\) consists of an analytic arc of \(\lambda\)-curve. This is a pivotal feature in the proof given here. As these solutions are non-degenerate, once again by the implicit function theorem, these two arcs perturb into two \(\lambda\)-arcs of non-degenerate solutions of (1.7) for sufficiently small \(\varepsilon>0\).
Now, we consider the bounded set
\[\mathcal{C}_{\eta}:=\mathscr{C}^{+}_{0,[\lambda_{0},\lambda_{1}]}+B_{\eta},\]
where \(B_{\eta}\) stands for the open ball of radius \(\eta\) centered at \((\mu,w,v)=(\mu,0,0)\) in the product space
\[\mathscr{X}\equiv\mathbb{R}\times\mathcal{C}^{1}_{\mathfrak{B}_{1}}(\bar{ \Omega})\times\mathcal{C}^{1}_{\mathfrak{B}_{2}}(\bar{\Omega}).\]
Then, \(\mathcal{C}_{\eta}\) is an \(\eta\)-neighborhood of \(\mathscr{C}^{+}_{0,[\lambda_{0},\lambda_{1}]}\) with side covers
\[\{\lambda_{0}\}\times[(\mu,w_{\lambda_{0}},\theta_{[\mathfrak{L}_{2},\mu,d]})+ B_{\eta}],\qquad\{\lambda_{1}\}\times[(\mu,w_{\lambda_{1}},\theta_{[\mathfrak{L}_{2}, \mu,d]})+B_{\eta}],\]
where \(w_{\lambda}\) denotes the unique positive solution of (4.1) for every \(\lambda\) satisfying (5.3). By construction, \(\mathscr{C}^{+}_{0,[\lambda_{0},\lambda_{1}]}\subset\mathcal{C}_{\eta}\). Moreover, for sufficiently small \(\eta>0\),
\[(\lambda,\mu,w,v)=(\lambda,\mu,w_{\lambda},\theta_{[\mathfrak{L}_{2},\mu,d]})\]
is the unique solution of (1.8) in \(\mathcal{C}_{\eta}\) for each \(\lambda\) satisfying (5.3). Furthermore, since the \(w\)-components of the elements of \(\mathscr{C}^{+}_{0,[\lambda_{0},\lambda_{1}]}\) are separated away from zero, because \(\lambda=\Phi(\mu)\) is the unique bifurcation value to coexistence states from \(w=0\), \(\bar{\mathcal{C}}_{\eta}\) cannot admit any solution of the form \((\lambda,\mu,0,v)\) with \(v=0\) or \(v=\theta_{[\mathfrak{L}_{2},\mu,d]}\) for sufficiently small \(\eta>0\).
Next, we will adapt the proof of [21, Th. 6.3.1], through a well-known lemma of Whyburn [28, Ch. 1] on compact continua, to show that, if necessary, \(\mathcal{C}_{\eta}\) can be shortened in the interval \([\varphi_{0}(\mu)+r,\Phi(\mu)-r]\) so as to obtain an _isolating neighborhood_ of \(\mathscr{C}_{0}^{+}\), denoted by \(\mathcal{O}\), in the sense that, besides the previous properties of \(\mathcal{C}_{\eta}\), \(\partial_{L}\mathcal{O}\cap\mathscr{S}_{0}\) does not contain any positive solution of (1.8), \((\lambda,\mu,w,\theta_{[\mathfrak{L}_{2},\mu,d]})\), with \(\lambda\in[\varphi_{0}(\mu)+r,\Phi(\mu)-r]\). We are denoting by \(\partial_{L}\mathcal{O}\) the set \(\partial\mathcal{O}\), except for the two lateral side covers at \(\lambda=\lambda_{0}\) and \(\lambda=\lambda_{1}\), where \(\mathscr{S}_{0}\) has
exactly two non-degenerate coexistence states. Indeed, should \(\mathcal{C}_{\eta}\) satisfy this property we can take \(\mathcal{O}=\mathcal{C}_{\eta}\). Otherwise, we consider the non-empty compact sets
\[\begin{aligned} M&:=\left\{(\lambda,\mu,w,\theta_{[\mathfrak{L}_{2},\mu,d]})\in\bar{\mathcal{C}}_{\eta}\cap\mathscr{S}_{0}\;:\;\lambda\in[\varphi_{0}(\mu)+r,\Phi(\mu)-r]\right\},\\ A&:=\left\{(\lambda,\mu,w,\theta_{[\mathfrak{L}_{2},\mu,d]})\in\partial\mathcal{C}_{\eta}\cap\mathscr{S}_{0}\;:\;\lambda\in[\varphi_{0}(\mu)+r,\Phi(\mu)-r]\right\},\\ B&:=\mathscr{C}_{0,[\varphi_{0}(\mu)+r,\Phi(\mu)-r]}^{+}.\end{aligned}\]
These sets are compact because they are closed and bounded sets consisting of fixed points of a compact operator. Moreover, \(A\) and \(B\) are disjoint. Thus, according to Whyburn [28, Ch. 1], since \(B\) is a connected component, there are two disjoint compact subsets of \(M\), \(M_{A}\) and \(M_{B}\), such that \(A\subset M_{A}\), \(B\subset M_{B}\) and \(M=M_{A}\cup M_{B}\). Then, setting \(\delta:=\mathrm{dist}(M_{A},M_{B})>0\), it is easily seen that
\[\mathcal{O}:=\mathcal{C}_{\eta}\setminus\overline{M_{A}+B_{\frac{\delta}{2}}}\]
satisfies similar properties as \(\mathcal{C}_{\eta}\) and, in addition, by construction,
\[\partial_{L}\mathcal{O}\cap\mathscr{S}_{0}=\emptyset. \tag{5.4}\]
This construction has been sketched in Figure 5, where an admissible \(\mathcal{O}\) has been plotted when \(\mathcal{O}=\mathcal{C}_{\eta}\).
Subsequently, for sufficiently small \(\varepsilon>0\), we denote by \(\mathscr{S}_{\varepsilon}\) the set of nontrivial solutions of (1.7),
\[\mathscr{S}_{\varepsilon}:=\{(\lambda,\mu,w,v)\in\mathfrak{F}^{-1}(0)\,:\,(w,v)\neq(0,\theta_{[\mathfrak{L}_{2},\mu,d]})\}\cup\{(\lambda,\mu,0,\theta_{[\mathfrak{L}_{2},\mu,d]})\;:\;\lambda\in\Sigma(\mathscr{L}(\lambda,\varepsilon))\},\]
where \(\Sigma(\mathscr{L}(\lambda,\varepsilon))\) is the generalized spectrum of \(\mathscr{L}(\lambda,\varepsilon)\), as discussed in [21]. By [21, Th. 7.2.2], there exists a component of \(\mathscr{S}_{\varepsilon}\), denoted by \(\mathscr{C}_{\varepsilon}^{+}\), consisting of coexistence states of (1.7) such that
\[(\Phi(\mu),\mu,0,\theta_{[\mathfrak{C}_{2},\mu,d]})\in\mathscr{C}_{\varepsilon }^{+}.\]
However, contrary to what happens with \(\mathscr{C}_{0}^{+}\), Lemma 5.1 entails that, for every \(\varepsilon>0\) and \(\hat{\lambda}>\Phi(\mu)\), the set of coexistence states
\[\left\{(\lambda,\mu,w,v)\in\mathscr{C}_{\varepsilon}^{+}\ :\ \lambda\in( \varphi_{\varepsilon}(\mu),\hat{\lambda}]\right\}\]
is bounded, whereas, thanks to [21, Th. 7.2.2], \(\mathscr{C}_{\varepsilon}^{+}\) is unbounded. Consequently, as soon as \(\lambda^{\prime}(\varepsilon)<0\), which holds true for sufficiently small \(\varepsilon>0\) because \(\lambda^{\prime}(0)<0\), there exists \(\lambda_{T}\equiv\lambda_{T}(\varepsilon)\in(\varphi_{\varepsilon}(\mu),\Phi(\mu))\) such that
\[\mathcal{P}_{\lambda}\left(\mathscr{C}_{\varepsilon}^{+}\right)=[\lambda_{T}( \varepsilon),+\infty).\]
We claim that \(\lambda_{T}(\varepsilon)<\lambda_{0}\) for sufficiently small \(\varepsilon>0\). Since \(\lambda_{0}<\lambda^{*}\), establishing this claim will end the proof of Part (a). Note that \(\lambda_{T}>\varphi_{\varepsilon}(\mu)\) by Theorem 3.4. To prove \(\lambda_{T}(\varepsilon)<\lambda_{0}\), we first show that
\[[\lambda_{0},\Phi(\mu))\subset\mathcal{P}_{\lambda}\left(\mathscr{C}_{ \varepsilon}^{+}\right)\quad\text{for sufficiently small }\ \varepsilon>0. \tag{5.5}\]
This holds true thanks to the crucial feature that the isolating neighborhood of \(\mathscr{C}_{0}^{+}\) in \([\lambda_{0},\lambda_{1}]\), \(\mathcal{O}\), also provides us with an isolating neighborhood of \(\mathscr{C}_{\varepsilon}^{+}\) in \([\lambda_{0},\lambda_{1}]\) for sufficiently small \(\varepsilon>0\) if \(\lambda_{1}\) is sufficiently close to \(\Phi(\mu)\). Indeed, thanks to Theorem 3.2, one can choose \(\lambda_{1}\) to be sufficiently close to \(\Phi(\mu)\) so that, for sufficiently small \(\varepsilon>0\), \(\mathscr{C}_{\varepsilon}^{+}\) has a unique non-degenerate coexistence state close to \((w,v)=(0,\theta_{[\mathfrak{C}_{2},\mu,d]})\) for all \(\lambda\in[\lambda_{1},\Phi(\mu))\), say
\[(\lambda,\mu,w,v)=(\lambda,\mu,w_{\lambda,\varepsilon},v_{\lambda, \varepsilon}),\qquad\lambda\in[\lambda_{1},\Phi(\mu)),\ \ \varepsilon\in[0, \varepsilon_{0}).\]
Naturally, since Theorem 3.2 shows that \(\mathscr{C}_{\varepsilon}^{+}\) is a regular perturbation of \(\mathscr{C}_{0}^{+}\) through the implicit function theorem in a neighborhood of the bifurcation point, there exists \(\varepsilon_{0}>0\) such that, for every \(\varepsilon\in[0,\varepsilon_{0})\), the coexistence state \((\lambda_{1},\mu,w_{\lambda_{1},\varepsilon},v_{\lambda_{1},\varepsilon})\) lies in the interior of the right side cover of \(\mathcal{O}\); actually, it is the unique coexistence state of (1.7) on \(\partial\mathcal{O}\) for \(\lambda=\lambda_{1}\). This argument, combined with the local uniqueness of Theorem 3.2, shows Part (c). Figure 6 sketches this behavior. As in the remaining bifurcation diagrams plotted in this section, the dashed curve represents \(\mathscr{C}_{0}^{+}\), while the continuous curve shows \(\mathscr{C}_{\varepsilon}^{+}\) for sufficiently small \(\varepsilon>0\). According to Theorem 3.2, these are the unique solutions of the model in a neighborhood of the bifurcation point for sufficiently small \(\varepsilon\geq 0\). All are non-degenerate; actually, linearly unstable with one-dimensional unstable manifold by the exchange stability principle.
Once shown that \(\mathscr{C}_{\varepsilon}^{+}\) reaches \(\mathcal{O}\) at \(\lambda=\lambda_{1}\), and so enters into \(\mathcal{O}\), we claim that these components must abandon \(\mathcal{O}\) passing through some point with \(\lambda=\lambda_{0}\), as illustrated by the left picture of Figure 7, so concluding the proof of (5.5). Since they must abandon \(\mathcal{O}\) because they are unbounded, in order to prove our claim, it suffices to make sure that \(\mathscr{C}_{\varepsilon}^{+}\) cannot leave \(\mathcal{O}\) through \(\partial_{L}\mathcal{O}\) for sufficiently small \(\varepsilon>0\), as illustrated by the right
picture of Figure 7. Our proof of this fact proceeds by contradiction. Assume that there is a sequence \(\varepsilon_{n}\), \(n\geq 2\), such that \(\lim_{n\uparrow\infty}\varepsilon_{n}=0\), and, for every \(n\geq 2\), \(\lambda^{\prime}(\varepsilon_{n})<0\) and the problem (1.7) has some coexistence state, \((\lambda_{n},\mu,w_{n},v_{n})\in\partial_{L}\mathcal{O}\), for \(\varepsilon=\varepsilon_{n}\) and some \(\lambda_{n}\in[\varphi_{0}(\mu)+r,\Phi(\mu)-r]\), as sketched on the right picture of Figure 7.
Then, since \(\{(\lambda_{n},\mu,w_{n},v_{n})\}_{n\geq 2}\) is bounded in \([\lambda_{0},\lambda_{1}]\times\{\mu\}\times\mathcal{C}^{1}_{\mathfrak{B}_{1 }}(\bar{\Omega})\times\mathcal{C}^{1}_{\mathfrak{B}_{2}}(\bar{\Omega})\) and it consists of fixed points of a sequence of associated compact operators depending continuously on \(\varepsilon\), \(\varepsilon\sim 0\), by a rather standard compactness argument, we can extract a subsequence, relabeled by \(n\geq 2\), such that
\[\lim_{n\to\infty}(\lambda_{n},\mu,w_{n},v_{n})=(\lambda_{\omega},\mu,w_{ \omega},v_{\omega})\in\partial_{L}\mathcal{O}\]
for some \(w_{\omega}\geq 0\), \(v_{\omega}\geq 0\) and \(\lambda_{\omega}\in[\lambda_{0},\lambda_{1}]\) such that \((\lambda_{\omega},\mu,w_{\omega},v_{\omega})\) solves (1.8). Since \(\mathcal{O}\) is an isolating neighborhood of \(\mathscr{C}^{+}_{0}\), it becomes apparent that \(w_{\omega}\gg_{1}0\) and \(v_{\omega}\gg_{2}0\). But this contradicts (5.4). Therefore, (5.5) holds true. Consequently, for every \(\varepsilon\in[0,\varepsilon_{0})\), we have that \(\lambda_{T}(\varepsilon)\leq\lambda_{0}<\lambda^{*}\), which ends the proof of Part (a).
Note that, as \(\varepsilon>0\) perturbs from zero, a further application of the implicit function theorem shows that the analytic arcs of \(\lambda\)-curve \(\mathscr{C}^{+}_{0,[\lambda_{0},\varphi_{0}(\mu)+r]}\) and \(\mathscr{C}^{+}_{0,[\Phi(\mu)-r,\lambda_{1}]}\) perturb into two \(\lambda\)-arcs of \(\mathscr{C}^{+}_{\varepsilon}\) within \(\mathcal{O}\), denoted by \(\mathscr{C}^{+}_{\varepsilon,[\lambda_{0},\varphi_{0}(\mu)+r]}\) and \(\mathscr{C}^{+}_{\varepsilon,[\Phi(\mu)-r,\lambda_{1}]}\), and that these arcs consist of non-degenerate solutions of (1.7) for sufficiently small \(\varepsilon\geq 0\). By a further application of the implicit function theorem at the unique solution of \(\mathscr{C}^{+}_{\varepsilon}\) on \(\partial\mathcal{O}\) at \(\lambda_{0}\), say \((\lambda_{0},\mu,w_{\lambda_{0},\varepsilon},v_{\lambda_{0},\varepsilon})\), this entails that actually for sufficiently small \(\varepsilon>0\) there exists \(\delta(\varepsilon)>0\) such that
\[[\lambda_{0}-\delta(\varepsilon),\Phi(\mu))\subset\mathcal{P}_{\lambda}\left( \mathscr{C}^{+}_{\varepsilon}\right).\]
Figure 6: The ball where the solutions of (1.7) are analytic \(\lambda\)-curves
Moreover,
\[\lim_{\varepsilon\downarrow 0}(w_{\lambda_{0},\varepsilon},v_{\lambda_{0}, \varepsilon})=(w_{\lambda_{0}},\theta_{[\mathfrak{L}_{2},\mu,d]}).\]
Therefore, since
\[\mathscr{C}^{+}_{\varepsilon}\setminus\mathscr{C}^{+}_{\varepsilon,[\lambda_{ 0},\varphi_{0}(\mu)+r]}\]
is unbounded, it follows from Lemma 5.1 that, for every \(\lambda\in[\lambda_{0},\Phi(\mu))\), (1.7) has, at least, two coexistence states for sufficiently small \(\varepsilon>0\). This proves Part (b) and concludes the proof of Theorem 5.1.
Another proof of the multiplicity result of Part (b) can be given by using the topological degree. Although this proof does not allow one to show that each of the components \(\mathscr{C}^{+}_{\varepsilon}\) bends backwards at some supercritical turning point for sufficiently small \(\varepsilon>0\), it provides the local index of the additional solutions, which is imperative to ascertain their local stability character. The alternative proof proceeds as follows. Thanks to Theorem 4.2, it follows from the invariance by homotopy of the Leray-Schauder degree that, for every \(\varepsilon\in[0,\varepsilon_{0})\) and \(\lambda\in[\lambda_{0},\lambda_{1}]\),
\[\operatorname{Deg}\left(\mathfrak{F}(\lambda,\mu,\varepsilon,\cdot,\cdot), \mathcal{O}_{\lambda}\right)=-1, \tag{5.6}\]
where \(\mathfrak{F}\) is the operator defined in (3.8) and, for every \(\lambda\in[\lambda_{0},\lambda_{1}]\), we are denoting
\[\mathcal{O}_{\lambda}:=\{(\mu,w,v)\in\mathscr{X}\ :\ (\lambda,\mu,w,v)\in \mathcal{O}\}.\]
Subsequently we will use the fixed point index in cones, as axiomatized by Amann [1] and Dancer [8], which was applied for the first time to the classical diffusive Lotka-Volterra models by Lopez-Gomez and Pardo [25], Lopez-Gomez [20] and, more recently, by Fernandez-Rincon and Lopez-Gomez [15], among many others. First, we consider, for every \(i=1,2\), the positive cone of \(\mathscr{W}_{i}\),
\[P_{\mathscr{W}_{i}}:=\{u\in\mathscr{W}_{i}\;:\;u\geq 0\;\;\text{in}\;\Omega\}\]
and the associated system to (1.7)
\[\left\{\begin{aligned} &\mathfrak{L}_{1}w=\lambda w-\varepsilon a(x)w^{2 }-\alpha b(x)\frac{wv}{1+m(x)w}&&\text{in}\;\;\Omega,\\ &\mathfrak{L}_{2}v=\mu v-d(x)v^{2}+\alpha\varepsilon c(x)\frac{ wv}{1+m(x)w}&&\text{in}\;\;\Omega,\\ &\mathfrak{B}_{1}w=\mathfrak{B}_{2}v=0&&\text{ on}\;\;\partial\Omega,\end{aligned}\right. \tag{5.7}\]
where \(\alpha\in[0,1]\) is a homotopy parameter to uncouple (1.7) into two semilinear boundary value problems. By applying Lemma 5.1 uniformly in \(\alpha\in[0,1]\), it is easily seen that there exists a bounded open subset \(\mathcal{W}\times\mathcal{V}\subset\mathscr{W}_{1}\times\mathscr{W}_{2}\), independent of \(\alpha\in[0,1]\), such that \((w,v)\in\mathcal{W}\times\mathcal{V}\) if \((w,v)\in P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}\) solves (5.7) for some \(\alpha\in[0,1]\).
Subsequently, we choose a sufficiently large \(e\geq 0\) such that
\[\sigma_{0}[\mathfrak{L}_{i}+e,\mathfrak{B}_{i},\Omega]>1,\qquad i=1,2, \tag{5.8}\]
and, for every \(\alpha\in[0,1]\), \(w\in\mathcal{W}\), and \(v\in\mathcal{V}\),
\[\lambda-a\varepsilon w-\alpha b\frac{v}{1+mw}+e>0,\qquad\mu-dv+\alpha \varepsilon c\frac{w}{1+mw}+e>0\qquad\text{in}\;\bar{\Omega}. \tag{5.9}\]
Then, thanks to (5.8) and (5.9), the map
\[\mathcal{H}:[0,1]\times\mathcal{W}\times\mathcal{V}\to\mathscr{W}_{1}\times \mathscr{W}_{2}\]
defined by
\[\mathcal{H}(\alpha,w,v)=\left(\begin{array}{l}(\mathfrak{L}_{1}+e)^{-1}[( \lambda-\varepsilon aw-\alpha b\frac{v}{1+mw}+e)w]\\ (\mathfrak{L}_{2}+e)^{-1}[(\mu-dv+\alpha\varepsilon c\frac{w}{1+mw}+e)v] \end{array}\right),\]
is a compact order preserving operator whose non-negative fixed points are the solutions of (5.7) in \(P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}\). Adapting the analysis of Steps i)-v) of the proof of [20, Th. 4.1], or Lemmas 5.6-5.9 of [15], one can determine the fixed point indices of the non-negative solutions of (1.7) as fixed points of \(\mathcal{H}(1,\cdot,\cdot)\). It turns out that
\[i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}}\left(\mathcal{H}(1,\cdot, \cdot),\mathcal{W}\times\mathcal{V}\right)=1, \tag{5.10}\]
whereas
\[i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}}\left(\mathcal{H}(1,\cdot, \cdot),(0,0)\right)=0\quad\text{if}\;\;\lambda>\sigma_{0,1}\;\;\text{or}\;\; \mu>\sigma_{0,2}. \tag{5.11}\]
Moreover,
\[\left\{\begin{aligned} i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}} \left(\mathcal{H}(1,\cdot,\cdot),(\theta_{[\mathfrak{L}_{1},\lambda,a]},0) \right)&=0&\qquad\text{if}\ \ \mu>\Psi_{\varepsilon}(\lambda),\\ i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}}\left( \mathcal{H}(1,\cdot,\cdot),(0,\theta_{[\mathfrak{L}_{2},\mu,d]})\right)& =1&\qquad\text{if}\ \,\lambda<\Phi(\mu).\end{aligned}\right. \tag{5.12}\]
Thus, for every \(\lambda\in(\sigma_{0,1},\Phi(\mu))\) and \(\mu>\sigma_{0,2}\),
\[1=i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}}\left( \mathcal{H}(1,\cdot,\cdot),\mathcal{W}\times\mathcal{V}\right)=i_{P_{\mathscr{ W}_{1}}\times P_{\mathscr{W}_{2}}}\left(\mathcal{H}(1,\cdot,\cdot),(0,0)\right)\] \[\qquad\qquad+i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}} \left(\mathcal{H}(1,\cdot,\cdot),(\theta_{[\mathfrak{L}_{1},\lambda,a]},0) \right)+i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}}\left(\mathcal{H}(1, \cdot,\cdot),(0,\theta_{[\mathfrak{L}_{2},\mu,d]})\right).\]
Consequently, the global index of the coexistence states, as fixed points of \(\mathcal{H}(1,\cdot,\cdot)\), equals zero and, since (5.6) entails
\[i_{P_{\mathscr{W}_{1}}\times P_{\mathscr{W}_{2}}}\left(\mathcal{H}(1,\cdot, \cdot),\mathcal{O}_{\lambda}\right)=-1\quad\text{for all}\ \ \lambda\in[\lambda_{0},\lambda_{1}],\]
the existence of a second coexistence state follows for every \(\lambda\in[\lambda_{0},\lambda_{1}]\). Taking into account that \(\lambda_{0}<\lambda^{*}\) and that \(\lambda_{1}\) can be chosen arbitrarily close to \(\Phi(\mu)\), the multiplicity result of Theorem 5.1(b) readily follows.
**Remark 5.1**.: The multiplicity result of Theorem 5.1(b) holds as soon as \(\lambda^{\prime}(\varepsilon)<0\), which occurs for \(\varepsilon\in[0,\varepsilon^{*})\), where \(\lambda^{\prime}(\varepsilon^{*})=0\). It remains an open problem to ascertain whether or not (1.7) can admit a coexistence state for some \(\lambda\in(\varphi_{\varepsilon}(\mu),\lambda_{T}(\varepsilon))\). This might depend on the nature of the spatial heterogeneities of the several coefficients involved in the setting of (1.7).
## 6 A simple illustrative example
This section considers (1.7) in the special case when:
* \(c_{1}=c_{2}=0\) in \(\Omega\).
* \(\Gamma_{1}=\partial\Omega\) (i.e., \(\Gamma_{0}=\emptyset\)), and \(\beta_{1}=\beta_{2}=0\) on \(\partial\Omega\).
* \(a\), \(b\), \(c\) and \(d\) are positive constants, and \(m=1\) in \(\Omega\).
Then, since \(\mathfrak{B}_{\kappa}=\frac{\partial}{\partial\nu_{\kappa}}\equiv\partial_{\nu_{\kappa}}\) for \(\kappa=1,2\), it turns out that we are imposing no-flux boundary conditions on \(\partial\Omega\). Thus,
\[\sigma_{0,\kappa}:=\sigma_{0}[\mathfrak{L}_{\kappa},\partial_{\nu_{\kappa}}, \Omega]=0,\qquad\kappa=1,2.\]
Consequently, throughout this section we assume that \(\lambda>0\) and fix \(\mu>0\). As in the preceding sections, \(\lambda>0\) is regarded as a bifurcation parameter.
By the special nature of (1.7) under these conditions, any component-wise positive solution \((w,v)\) of the algebraic system
\[\left\{\begin{aligned} &\lambda-\varepsilon aw-bv\frac{1}{1+w}=0,\\ &\mu-dv+\varepsilon c\frac{w}{1+w}=0,\end{aligned}\right. \tag{6.1}\]
provides us with a coexistence state of (1.7). By the uniqueness of Theorem 2.3, it follows that \(\theta_{[\mathfrak{L}_{2},\mu,d]}=\frac{\mu}{d}\). So,
\[\Phi(\mu)=\sigma_{0}\left[\mathfrak{L}_{1}+b\theta_{[\mathfrak{L}_{2},\mu,d]},\partial_{\nu_{1}},\Omega\right]=b\frac{\mu}{d}. \tag{6.2}\]
Eliminating \(v\) from the first equation of (6.1), we obtain that
\[v=\frac{1}{b}(1+w)(\lambda-\varepsilon aw), \tag{6.3}\]
and, substituting (6.3) into the second equation of (6.1), yields
\[P(w,\lambda)\equiv P(w):=w^{3}+\left(2-\frac{\lambda}{\varepsilon a}\right)w^{ 2}+\left(1+\frac{bc}{ad}+\frac{b\mu-2d\lambda}{\varepsilon ad}\right)w+\frac{ b\mu-d\lambda}{\varepsilon ad}=0. \tag{6.4}\]
Therefore, \((w,v)\) is a component-wise positive solution of the system (6.1) if, and only if, \(w\) is a positive root of \(P(w)\) with
\[\lambda-\varepsilon aw>0. \tag{6.5}\]
Thus, to find out the coexistence states of (1.7) for this prototype, one should first ascertain the positive roots of \(P(w)\). In this section, we are going to accomplish this task for \(\lambda>\Phi(\mu)\) sufficiently close to \(\Phi(\mu)\). Note that, according to the analysis of the previous sections, we already know that \((\lambda,w,v)=\left(\Phi(\mu),0,\theta_{[\mathfrak{L}_{2},\mu,d]}\right)\) is a bifurcation point to a component of coexistence states of (1.7).
Suppose \(\lambda>\Phi(\mu)\). Then, by (6.2), \(\lambda>b\frac{\mu}{d}\). Thus,
\[P(0)=\frac{b\mu-d\lambda}{\varepsilon ad}<0,\]
and hence, since \(\lim_{w\uparrow+\infty}P(w)=+\infty\), \(P(w)\) admits, at least, a positive real root. Similarly, the polynomial
\[P^{\prime}(w)=3w^{2}+2\left(2-\frac{\lambda}{\varepsilon a}\right)w+1+\frac{ bc}{ad}+\frac{b\mu-2d\lambda}{\varepsilon ad}\]
satisfies
\[P^{\prime}(0)=1+\frac{bc}{ad}+\frac{b\mu-2d\lambda}{\varepsilon ad}<0\]
if, and only if,
\[0<\varepsilon<\varepsilon^{*}(\lambda)\equiv\frac{2d\lambda-b\mu}{bc+ad}. \tag{6.6}\]
So, since \(\lim_{w\uparrow+\infty}P^{\prime}(w)=+\infty\), also \(P^{\prime}(w)\) possesses, at least, one positive root for every \(\varepsilon\in(0,\varepsilon^{*})\). Finally, since
\[P^{\prime\prime}(w)=6w+2\left(2-\frac{\lambda}{\varepsilon a}\right),\]
it is obvious that \(w_{c}\equiv\frac{1}{3}\left(\frac{\lambda}{\varepsilon a}-2\right)\) is the unique root of \(P^{\prime\prime}\). Suppose that
\[\varepsilon<\min\left\{\frac{\lambda}{2a},\varepsilon^{*}(\lambda)\right\}. \tag{6.7}\]
Then \(w_{c}>0\), \(P^{\prime\prime}(w)<0\) if \(w\in[0,w_{c})\), and \(P^{\prime\prime}(w)>0\) if \(w>w_{c}\). Thus, \(P^{\prime}(w)\) is decreasing in \((0,w_{c})\) and increasing in \((w_{c},+\infty)\). Moreover, by (6.6), \(P^{\prime}(0)<0\), because \(\varepsilon<\varepsilon^{*}(\lambda)\). Consequently, there exists \(w^{*}>0\) such that \(P^{\prime}<0\) in \([0,w^{*})\), \(P^{\prime}(w^{*})=0\), and \(P^{\prime}(w)>0\) for all \(w>w^{*}\). Therefore, \(P(w)\) is decreasing in \((0,w^{*})\) and increasing in \((w^{*},+\infty)\), and, since \(P(0)<0\) and \(P^{\prime}(0)<0\), it becomes apparent that, under condition (6.7), \(P(w)\) has a unique positive root, say \(w_{r}>w^{*}>0\). Finally, since
\[P\left(\frac{\lambda}{\varepsilon a}\right) =\left(\frac{\lambda}{\varepsilon a}\right)^{3}+\left(2-\frac{ \lambda}{\varepsilon a}\right)\left(\frac{\lambda}{\varepsilon a}\right)^{2}+ \left(1+\frac{bc}{ad}+\frac{b\mu-2d\lambda}{\varepsilon ad}\right)\frac{ \lambda}{\varepsilon a}+\frac{b\mu-d\lambda}{\varepsilon ad}\] \[=2\left(\frac{\lambda}{\varepsilon a}\right)^{2}+\left(1+\frac {bc}{ad}+\frac{b\mu-2d\lambda}{\varepsilon ad}\right)\frac{\lambda}{ \varepsilon a}+\frac{b\mu-d\lambda}{\varepsilon ad}\] \[=\frac{1}{\varepsilon^{2}}\Big{[}2\frac{\lambda^{2}}{a^{2}}+ \frac{b\mu-2\lambda d}{a^{2}d}\lambda+O(\varepsilon)\Big{]}=\varepsilon^{-2} \Big{[}\frac{\lambda}{a^{2}}\Phi(\mu)+O(\varepsilon)\Big{]}>0\]
as \(\varepsilon\downarrow 0\), necessarily \(w_{r}<\frac{\lambda}{\varepsilon a}\) for sufficiently small \(\varepsilon>0\) and, in particular, \(w=w_{r}\) satisfies (6.5). Therefore, for sufficiently small \(\varepsilon>0\), (6.1) has a unique coexistence state for every \(\lambda>\Phi(\mu)\).
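The root-counting argument above is straightforward to verify numerically. The following Python sketch (the parameter values are our own illustrative assumptions, not taken from the paper) builds the cubic \(P(w)\) of (6.4) under condition (6.7) and confirms that it has exactly one positive root, which also satisfies (6.5):

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper): a, b, c, d, mu > 0,
# lambda slightly above Phi(mu) = b*mu/d, and epsilon satisfying (6.7).
a, b, c, d, mu = 1.0, 1.0, 1.0, 1.0, 1.0
lam = b * mu / d + 0.05
eps = 0.1
assert eps < min(lam / (2 * a), (2 * d * lam - b * mu) / (b * c + a * d))

# Coefficients of P(w) from (6.4), highest degree first.
coeffs = [1.0,
          2.0 - lam / (eps * a),
          1.0 + b * c / (a * d) + (b * mu - 2 * d * lam) / (eps * a * d),
          (b * mu - d * lam) / (eps * a * d)]
pos = [r.real for r in np.roots(coeffs)
       if abs(r.imag) < 1e-10 and r.real > 0]

print(len(pos))                      # exactly one positive root under (6.7)
print(lam - eps * a * pos[0] > 0)    # the root also satisfies (6.5)
```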
Note that, at \(\varepsilon=0\), (6.1) reduces to
\[\left\{\begin{aligned} &\lambda-bv\frac{1}{1+w}=0,\\ &\mu-dv=0,\end{aligned}\right.\]
whose unique coexistence state is
\[(w,v)=\left(\frac{b\mu}{d\lambda}-1,\frac{\mu}{d}\right)\qquad\lambda>0.\]
As \(\lambda\in(0,\Phi(\mu))\), \(w(\lambda)=\frac{b\mu}{d\lambda}-1\) decays from \(+\infty\) to \(0\), while \(v\) remains constant. This is the component \(\mathscr{C}_{0}^{+}\) studied in Section 4 for this special example. According to the previous analysis, for sufficiently small \(\varepsilon>0\), the component \(\mathscr{C}_{0}^{+}\) must perturb into a new component, \(\mathscr{C}_{\varepsilon}^{+}\), having a unique coexistence state for all \(\lambda>\Phi(\mu)\). Thus, \(\mathscr{C}_{\varepsilon}^{+}\) has a supercritical turning point at some \(\lambda_{T}(\varepsilon)\) such that \(\lim_{\varepsilon\downarrow 0}\lambda_{T}(\varepsilon)=0\).
However, the uniqueness of the coexistence state for \(\lambda>\Phi(\mu)\) can be lost when (6.7) fails and \(bc>ad\), giving rise to an \(S\)-shaped bifurcation diagram. Indeed, at the critical value \(\lambda=\Phi(\mu)\), the cubic polynomial \(P(w)\) becomes
\[P(w)=P(w,\Phi(\mu))=w^{3}+\left(2-\frac{\lambda}{\varepsilon a}\right)w^{2}+ \left(1+\frac{bc}{ad}-\frac{b\mu}{\varepsilon ad}\right)w=Q(w)w, \tag{6.8}\]
where
\[Q(w):=w^{2}+\left(2-\frac{\lambda}{\varepsilon a}\right)w+1+\frac{bc}{ad}- \frac{b\mu}{\varepsilon ad}.\]
Thus, at \(\lambda=\Phi(\mu)\), the roots of \(P(w)\) are \(w=0\) plus the two roots of \(Q(w)\). A direct calculation shows that, as soon as
\[\varepsilon>\frac{b\mu}{bc+ad}=\varepsilon^{*},\]
the polynomial \(P(w)\) satisfies
\[P(0)=0,\quad P^{\prime}(0)=1+\frac{bc}{ad}-\frac{b\mu}{\varepsilon ad}>0.\]
Suppose \(\varepsilon>\varepsilon^{*}\) and \(Q(w)\) has two positive roots, \(w_{+}>w_{-}>0\). Then, at \(\lambda=\Phi(\mu)\), the polynomial \(P(w)\) has three simple roots. Thus, since the coefficients of \(P(w,\lambda)\) are analytic functions of the parameter \(\lambda\), for sufficiently small \(\eta>0\), there are three analytic functions
\[z,w_{+},w_{-}:J_{\eta}\equiv(\Phi(\mu)-\eta,\Phi(\mu)+\eta)\to\mathbb{R},\]
such that
\[\lim_{\lambda\to\Phi(\mu)}z(\lambda)=0,\qquad\lim_{\lambda\to\Phi(\mu)}w_{\pm} (\lambda)=w_{\pm} \tag{6.9}\]
and, for every \(\lambda\in J_{\eta}\), \(z(\lambda)\) and \(w_{\pm}(\lambda)\) provide us with the three simple roots of \(P(w)\). Consequently, since \(P(0)=0\), \(P^{\prime}(0)>0\) at \(\lambda=\Phi(\mu)\), \(P(0)<0\) if \(\lambda>\Phi(\mu)\), and \(P(0)>0\) if \(\lambda<\Phi(\mu)\), it becomes apparent that, for sufficiently small \(\eta>0\),
\[0<z(\lambda)<w_{-}(\lambda)<w_{+}(\lambda)\quad\text{if}\;\;\lambda\in(\Phi( \mu),\Phi(\mu)+\eta),\]
while
\[z(\lambda)<0<w_{-}(\lambda)<w_{+}(\lambda)\quad\text{if}\;\;\lambda\in(\Phi( \mu)-\eta,\Phi(\mu)).\]
Therefore, \(P(w,\lambda)\) has three simple positive roots if \(\lambda\in(\Phi(\mu),\Phi(\mu)+\eta)\) and two if \(\lambda\in(\Phi(\mu)-\eta,\Phi(\mu))\), as illustrated in the first picture of Figure 8, where we are plotting the polynomials \(P(w,\lambda)\) for \(\lambda=\Phi(\mu)\) (using a dashed line) and \(\lambda_{\pm}=\Phi(\mu)\pm\delta_{\pm}\) for some \(\delta_{+},\delta_{-}\in(0,\eta)\) (using continuous lines).
Obviously, the roots of \(Q(w)\) are
\[w_{\pm}:=\frac{\lambda}{2\varepsilon a}-1\pm\sqrt{\left(\frac{\lambda}{2\varepsilon a}-1\right)^{2}-1-\frac{bc}{ad}+\frac{b\mu}{\varepsilon ad}}=\frac{\lambda}{2\varepsilon a}-1\pm\frac{1}{2\varepsilon a}\sqrt{\lambda^{2}-4\varepsilon^{2}\frac{abc}{d}},\]
where the second equality uses \(\lambda=\Phi(\mu)=b\mu/d\).
Thus, if we further impose that
\[\frac{b\mu}{bc+ad}=\varepsilon^{*}<\varepsilon<\frac{\lambda}{2a},\]
with \(\varepsilon\) sufficiently close to \(\varepsilon^{*}\), then \(w_{+}>w_{-}>0\) and, hence, \(P(w,\lambda)\) has three simple positive roots if \(\lambda\in(\Phi(\mu),\Phi(\mu)+\eta)\) and two if \(\lambda\in(\Phi(\mu)-\eta,\Phi(\mu))\). Note that the assumption \(bc>ad\) is necessary and sufficient for \(\frac{b\mu}{bc+ad}<\frac{\lambda}{2a}\) to hold at \(\lambda=\Phi(\mu)\). Finally, since
\[\lambda-\varepsilon aw_{+}=\frac{\lambda}{2}+\varepsilon a-\frac{1}{2}\sqrt{ \lambda^{2}-4\varepsilon^{2}\frac{abc}{d}}>\varepsilon a>0\qquad\text{if}\;\; \lambda=\Phi(\mu),\]
by (6.9) and (6.3), it becomes apparent that, if \(bc>ad\) and \(\varepsilon>\varepsilon^{*}\) is sufficiently close to \(\varepsilon^{*}\), then (6.1) has three coexistence states if \(\lambda\in(\Phi(\mu),\Phi(\mu)+\eta)\) and only two if \(\lambda\in(\Phi(\mu)-\eta,\Phi(\mu))\). This phenomenology has been illustrated in Figure 8, whose right picture shows a paradigmatic \(S\)-shaped component \(\mathscr{C}_{\varepsilon}^{+}\) for \(\varepsilon>\varepsilon^{*}\), \(\varepsilon\sim\varepsilon^{*}\), when \(bc>ad\).
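This transition from one to three positive roots across \(\lambda=\Phi(\mu)\) can also be checked numerically. A minimal sketch follows, with parameter values that are illustrative assumptions of ours (chosen so that \(bc>ad\) and \(\varepsilon^{*}<\varepsilon<\lambda/(2a)\)):

```python
import numpy as np

# Numerical check of the S-shaped scenario: b*c > a*d and eps above eps*.
# Parameter values are illustrative assumptions, not taken from the paper.
a, b, c, d, mu = 1.0, 2.0, 2.0, 1.0, 1.0      # b*c = 4 > a*d = 1
Phi = b * mu / d                               # Phi(mu) = 2
eps_star = b * mu / (b * c + a * d)            # eps* = 0.4
eps = 0.45                                     # eps* < eps < Phi(mu)/(2a) = 1

def positive_roots(lam):
    """Real positive roots of the cubic P(w, lam) of (6.4)."""
    coeffs = [1.0,
              2.0 - lam / (eps * a),
              1.0 + b * c / (a * d) + (b * mu - 2 * d * lam) / (eps * a * d),
              (b * mu - d * lam) / (eps * a * d)]
    r = np.roots(coeffs)
    return sorted(x.real for x in r if abs(x.imag) < 1e-9 and x.real > 0)

eta = 0.005
print(len(positive_roots(Phi + eta)))   # 3 positive roots just above Phi(mu)
print(len(positive_roots(Phi - eta)))   # 2 positive roots just below Phi(mu)
```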
According to (6.4), the coefficients of \(P(w,\lambda)\) are decreasing with respect to \(\lambda\). Thus, in the region \(w\geq 0\), the larger \(\lambda>\Phi(\mu)\) is, the lower the graphs of the polynomials \(P(w,\lambda)\) lie (see the first picture of Figure 8). Therefore, there exists \(\lambda^{*}>\Phi(\mu)\) such that \(z(\lambda^{*})=w_{-}(\lambda^{*})\), which corresponds to the subcritical turning point of the \(S\)-shaped component \(\mathscr{C}_{\varepsilon}^{+}\).
|
2310.17873 | Periodic jumps in binary lattices with a static force | We investigate the dynamics of a particle in a binary lattice with staggered
on-site energies. An additional static force is introduced which further
adjusts the on-site energies. The binary lattice appears to be unrelated to the
semiclassical Rabi model, which describes a periodically driven two-level
system. However, in a certain parity subspace, the Floquet Hamiltonian of the
semiclassical Rabi model can be exactly mapped to that of the binary lattice.
These connections provide a different perspective for analyzing lattice
systems. At resonance, namely that the mismatch of on-site energies between
adjacent sites is nearly multiple of the strength of the static force, the
level anticrossing occurs. This phenomenon is closely related to the
Bloch-Siegert shift in the semiclassical Rabi model. At the $n$th order
resonance, an initially localized particle exhibits periodic jumps between site
$0$ and site $(2n+1)$, rather than continuous hopping between adjacent sites.
The binary lattice with a static force serves as a bridge linking condensed
matter physics and quantum optics, due to its connection with the semiclassical
Rabi model. | Liwei Duan | 2023-10-27T03:28:49Z | http://arxiv.org/abs/2310.17873v2 | # Periodic jumps in binary lattices with a static force
###### Abstract
We investigate the dynamics of a particle in a binary lattice with staggered on-site energies. An additional static force is introduced which further adjusts the on-site energies. The binary lattice appears to be unrelated to the semiclassical Rabi model, which describes a periodically driven two-level system. However, in a certain parity subspace, the Floquet Hamiltonian of the semiclassical Rabi model can be exactly mapped to that of the binary lattice. These connections provide a different perspective for analyzing lattice systems. At resonance, namely that the mismatch of on-site energies between adjacent sites is nearly multiple of the strength of the static force, the level anticrossing occurs. This phenomenon is closely related to the Bloch-Siegert shift in the semiclassical Rabi model. At the \(n\)th order resonance, an initially localized particle exhibits periodic jumps between site \(0\) and site \((2n+1)\), rather than continuous hopping between adjacent sites. The binary lattice with a static force serves as a bridge linking condensed matter physics and quantum optics, due to its connection with the semiclassical Rabi model.
## I Introduction
The propagation of a particle in periodic potentials is a fundamental problem in quantum mechanics and condensed matter physics. Solutions to the Schrödinger equation for such systems satisfy Bloch's theorem Bloch (1954), which yields the periodic Bloch band and delocalized eigenstates. The introduction of an additional static force can profoundly influence the behavior of the particle, providing a versatile platform for studying various dynamical phenomena, such as the Bloch oscillation Bloch (1954), Bloch-Zener oscillation Bloch (1954) and Rabi oscillation between two Bloch bands Bloch (1954).
Previous studies on the influence of the static force mainly concentrated on an exactly solvable single-band approximation, which captures some essential physics in real systems Bloch (1956). When a static force is present, the continuous Bloch band transforms into equally spaced discrete energy levels, forming the well-known Wannier-Stark ladder Bloch (1954). Meanwhile, the eigenstates become more localized as the strength of the static force increases Bloch (1956). The wavepacket exhibits a periodic oscillation, known as the Bloch oscillation, rather than the expected unbounded acceleration towards infinity Bloch (1954). Bloch oscillations are rarely observable in conventional bulk solids due to the much longer Bloch period compared to the electron scattering time caused by lattice defects Bloch (1954); Bloch (1955); Bloch (1955). However, they have been experimentally observed in various artificial physical systems, such as the semiconductor superlattice Bloch (1954); Bloch (1955); Bloch (1955), ultracold atoms in an optical potential Bloch (1956), waveguide arrays and photonic crystals Bloch (1957); Bloch (1958); Bloch (1959), acoustic-cavity structures Bloch (1959); Bloch (1959) and even a Bose liquid without built-in periodicity Bloch (1959). Recently, a form of energy Bloch oscillations was proposed for a periodically driven quantum system characterized by evenly spaced adiabatic energy levels Bloch (1959). In this case, the system's energy oscillates, instead of exhibiting a typical real-space oscillation.
Under specific conditions, such as a strong external field, the tunneling between Bloch bands becomes non-negligible Bloch (1956); Bloch (1955), exceeding the capabilities of the single-band approximation. A binary lattice, described by the period-doubled tight-binding model, possesses two Bloch bands and serves as one of the simplest platforms to investigate the interband tunneling effect Bloch (1958). The competition between the Bloch oscillation and the interband tunneling leads to the Bloch-Zener oscillation Bloch (1954); Bloch (1954); Bloch (1955); Bloch (1955), which has also been observed in the waveguide-based superlattice Bloch (1955). The Bloch-Zener oscillation paves the way to perform quantum walks Bloch (1954); Bloch (1955) and to generate widely tunable matter-wave beam splitters and Mach-Zehnder interferometers Bloch (1955).
Recently, in quantum optical systems, the concept of a Fock-state lattice has emerged, in which a lattice-like structure arises by identifying the different Fock states as the lattice sites Bloch (1954); Bloch (1955); Bloch (1955). As a paradigmatic model in quantum optics, the quantum Rabi model describes the simplest interaction between a two-level atom and a quantized light field. It has been mapped into a Fock-state lattice to explore a different type of topological phases arising from quantized light Bloch (1955); Bloch (1955) and amplitude-modulated Bloch oscillations Bloch (1955). The semiclassical Rabi model, on the other hand, describes a two-level atom driven by a periodic classical light field Bloch (1955); Bloch (1955). It cannot be mapped into a Fock-state lattice due to the classical field. Nevertheless, the time-dependent semiclassical Rabi model can be transformed into a time-independent one with an infinite-dimensional Hilbert space according to Floquet's theory Floquet (1954); Floquet (1955); Floquet (1955). The Floquet states can be regarded as the lattice sites, which provides an opportunity to create a latticelike structure. The latticelike structure formed by the Floquet states may, in a sense, be reminiscent of the Floquet topological systems Floquet (1956); Floquet (1957); Floquet (1958), which enhance the flexibility of the Hamiltonian and broaden the general classification of topological phases by introducing periodicity in the
time domain. Nonetheless, significant distinctions exist. We illustrate that a basic two-level system exhibits a latticelike structure under periodic driving, whereas the systems they studied constitute a lattice even in the absence of driving.
In this paper, we investigate the correspondence between the binary lattice subjected to a static external force and the semiclassical Rabi model. Our primary focus is on the periodic jumps within the binary lattice, a phenomenon closely associated with resonance phenomena and the Bloch-Siegert shift in the semiclassical Rabi model. The paper is structured as follows. In Sec. II, we introduce the Hamiltonian of the binary lattice with a static force. In Sec. III, we provide a brief overview of the Floquet Hamiltonian of the semiclassical Rabi model and introduce a parity operator that divides the entire Hilbert space into two distinct subspaces with even and odd parities, respectively. We then demonstrate the exact equivalence between the Floquet Hamiltonian of the semiclassical Rabi model and the Hamiltonian of the binary lattice. The development of various approaches and the discovery of numerous phenomena in the semiclassical Rabi model can be readily extended to those in the binary lattice. In Sec. IV, we present the level anticrossing at the resonance, as well as the periodic jumps between different sites. Finally, a brief summary is given in Sec. V.
## II Binary lattice with a static force
In this paper, we consider a tight-binding model that describes a binary lattice subjected to a static force as follows:
\[\hat{H} = -V\sum_{n=-\infty}^{+\infty}\left(\left|n\right\rangle\left\langle n+1\right|+\left|n+1\right\rangle\left\langle n\right|\right)+\sum_{n=-\infty}^{+\infty}\left(Fn+\frac{\epsilon}{2}(-1)^{n}\right)\left|n\right\rangle\left\langle n\right|, \tag{1}\]
where \(\left|n\right\rangle\) is the Wannier state localized at site \(n\). \(V\) and \(\epsilon\) denote the hopping rate and the on-site energy mismatch between nearest-neighbor sites, respectively, which together give rise to two Bloch bands [3]. \(F\) corresponds to the external static force. Alternatively, Hamiltonian (1) can be written in matrix form as follows:
\[\hat{H}=\left(\begin{array}{cccccc}\ddots&\ddots&\ddots&&&\\ &-V&-\frac{\epsilon}{2}-F&-V&&\\ &&-V&\frac{\epsilon}{2}&-V&\\ &&&-V&-\frac{\epsilon}{2}+F&-V&\\ &&&&\ddots&\ddots&\ddots\end{array}\right). \tag{2}\]
The corresponding level structure is shown in Fig. 1(a). In the absence of the energy mismatch \(\epsilon\), Eq. (1) reduces to the famous Wannier-Stark Hamiltonian, whose eigenenergies take the form of the Wannier-Stark ladder [2; 6].
For clarity, we can introduce three operators \(\hat{E}_{0}\) and \(\hat{E}_{\pm}\), as follows [3; 38; 2]:
\[\hat{E}_{0} = \sum_{n=-\infty}^{+\infty}n\left|n\right\rangle\left\langle n \right|, \tag{3a}\] \[\hat{E}_{+} = \sum_{n=-\infty}^{+\infty}\left|n+1\right\rangle\left\langle n \right|,\] (3b) \[\hat{E}_{-} = \sum_{n=-\infty}^{+\infty}\left|n\right\rangle\left\langle n+1 \right|, \tag{3c}\]
which correspond to the generators of the Euclidean algebra, satisfying the following commutation relations [33]:
\[\left[\hat{E}_{0},\hat{E}_{\pm}\right] = \pm\hat{E}_{\pm}, \tag{4a}\] \[\left[\hat{E}_{+},\hat{E}_{-}\right] = 0. \tag{4b}\]
It is important to note that the Wannier state \(\left|n\right\rangle\) is the eigenstate of \(\hat{E}_{0}\) with eigenvalue \(n\). Additionally, \(\hat{E}_{\pm}\) act as raising and lowering operators, respectively, as indicated by
\[\hat{E}_{\pm}\left|n\right\rangle=\left|n\pm 1\right\rangle. \tag{5}\]
In terms of \(\hat{E}_{0}\) and \(\hat{E}_{\pm}\), Hamiltonian (II) can be rewritten as
\[\hat{H}=-V\left(\hat{E}_{+}+\hat{E}_{-}\right)+F\hat{E}_{0}+\frac{\epsilon}{2} (-1)^{\hat{E}_{0}}. \tag{6}\]
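For readers who wish to experiment with the spectrum, the following Python sketch builds a finite truncation of the Hamiltonian (2) on the sites \(n=-N,\dots,N\); the truncation size and parameter values are our own choices for illustration, and bulk properties are insensitive to \(N\) once the relevant eigenstates are well localized away from the edges:

```python
import numpy as np

def binary_lattice_H(N, V, F, eps):
    """Truncated Hamiltonian (2) on the sites n = -N, ..., N."""
    n = np.arange(-N, N + 1)
    alt = np.where(n % 2 == 0, 1.0, -1.0)          # (-1)^n
    H = np.diag(F * n + 0.5 * eps * alt)           # on-site energies
    hop = -V * np.ones(2 * N)                      # nearest-neighbor hopping
    return H + np.diag(hop, 1) + np.diag(hop, -1)

# Example: spectrum for the parameter values used in Fig. 1(a).
E = np.linalg.eigvalsh(binary_lattice_H(N=40, V=0.2, F=1.0, eps=0.3))
```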
Figure 1: Level structures for (a) the binary lattice and (b) the odd parity chain of the semiclassical Rabi model. Each lattice site is represented by the basis state along the horizontal axis, with the blue line indicating the respective on-site energy. The black arrows denote the hopping between adjacent sites. The other parameters are \(F=\omega=1\) and \(\epsilon=\Omega=0.3\).
## III Relations between the binary lattice and the semiclassical Rabi model
The semiclassical Rabi model, serving as a prototype in quantum optics, has consistently attracted attention since its inception [30; 31; 34]. Its Hamiltonian can be expressed as
\[\hat{H}(t)=\frac{\Omega}{2}\hat{\sigma}_{z}-2\lambda\hat{\sigma}_{x}\cos\omega t, \tag{7}\]
where \(\hat{\sigma}_{x,y,z}\) represent the Pauli matrices which are employed to describe the two-level system. \(\Omega\) denotes the energy difference of the two-level system, \(\omega\) is the frequency of the classical light field, and \(\lambda\) stands for the coupling strength between them.
According to Floquet's theory [32; 33], the time-dependent Hamiltonian can be replaced by a time-independent counterpart with an infinite-dimensional Hilbert space as follows:
\[\hat{\mathcal{H}}_{F}=\frac{\Omega}{2}\hat{\sigma}_{z}+\omega\hat{E}_{0}- \lambda\hat{\sigma}_{x}\left(\hat{E}_{+}+\hat{E}_{-}\right). \tag{8}\]
\(\hat{E}_{0}\) and \(\hat{E}_{\pm}\) are given by Eq. (3), with \(n\) now corresponding to the Fourier exponent.
The Floquet Hamiltonian (8) is of infinite dimensions, whose exact analytical solutions have remained elusive up to now. Nevertheless, its dimensions can be reduced by exploiting its symmetry. We begin by introducing a parity operator defined as
\[\hat{\Pi} = \exp\left[\mathrm{i}\pi\left(\hat{\sigma}_{+}\hat{\sigma}_{-}+ \hat{E}_{0}\right)\right]\] \[= -\hat{\sigma}_{z}(-1)^{\hat{E}_{0}},\]
with \(\hat{\sigma}_{\pm}=\left(\hat{\sigma}_{x}\pm\mathrm{i}\hat{\sigma}_{y}\right)/2\). It can be easily demonstrated that \(\hat{\Pi}\hat{\mathcal{H}}_{F}\hat{\Pi}^{\dagger}=\hat{\mathcal{H}}_{F}\), which indicates that the Floquet Hamiltonian \(\hat{\mathcal{H}}_{F}\) admits the parity symmetry. The parity operator \(\hat{\Pi}\) possesses eigenvalues \(\Pi=\pm 1\), which separate the whole Hilbert space into two independent subspaces characterized by even and odd parities respectively. These are commonly referred to as the parity chains [39; 40], illustrated as follows:
\[\cdots\leftrightarrow\left|+,-1\right\rangle\leftrightarrow\left|-, 0\right\rangle\leftrightarrow\left|+,1\right\rangle\leftrightarrow\ldots\left( \Pi=+1\right), \tag{10}\] \[\cdots\leftrightarrow\left|-,-1\right\rangle\leftrightarrow\left|+,0\right\rangle\leftrightarrow\left|-,1\right\rangle\leftrightarrow\ldots\left( \Pi=-1\right), \tag{11}\]
where the basis state is \(\left|s,n\right\rangle=\left|s\right\rangle\left|n\right\rangle\) with \(\hat{\sigma}_{z}\left|s\right\rangle=s\left|s\right\rangle\) (\(s=\pm\)) and \(\left|n\right\rangle\) the Floquet states.
In the basis of \(\left\{\left|s,n\right\rangle\right\}\), the matrix elements of the Floquet Hamiltonian are given by
\[\left\langle s,n\right|\hat{\mathcal{H}}_{F}\left|s^{\prime},n^{\prime}\right\rangle = \left(s\frac{\Omega}{2}+\omega n\right)\delta_{s,s^{\prime}}\delta_{n,n^{\prime}}-\lambda\delta_{s,-s^{\prime}}\delta_{n,n^{\prime}\pm 1}. \tag{12}\]
In the odd parity subspace (\(\Pi=-1\)), the matrix form of the Floquet Hamiltonian can be written as
\[\hat{\mathcal{H}}_{-}=\left(\begin{array}{cccccccc}\ddots&\ddots&\ddots&&&& \\ &-\lambda&-\frac{\Omega}{2}-\omega&-\lambda&&&\\ &&-\lambda&\frac{\Omega}{2}&-\lambda&&\\ &&&-\lambda&-\frac{\Omega}{2}+\omega&-\lambda&\\ &&&&\ddots&\ddots\\ \end{array}\right). \tag{13}\]
A transformation of \(\Omega\) to \(-\Omega\) results in the Floquet Hamiltonian matrix for the even parity subspace (\(\Pi=1\)). Obviously, \(\hat{\mathcal{H}}_{-}\) [Eq. (13)] is exactly the same as \(\hat{H}\) [Eq. (2)], as long as we choose \(\omega=F\), \(\Omega=\epsilon\) and \(\lambda=V\). Inspired by the Fock-state lattice [26], one can interpret the diagonal elements of the Floquet Hamiltonian as on-site energies of the lattice. Meanwhile, the off-diagonal elements represent the hopping rates between these sites, as illustrated in Fig. 1 (b). An alternate approach is presented in Appendix A, which utilizes the Fulton-Gouterman transformation to establish the equivalence of the Hamiltonians between the binary lattice and the semiclassical Rabi model. It is important to note that the time evolution of two models does not exhibit a simple and straightforward correspondence as that of the Hamiltonians, as discussed in Appendix B. Nevertheless, the quasienergy of the semiclassical Rabi model and the eigenenergy of the binary lattice are comparable; so are the corresponding eigenstates. From them, some dynamical behaviors are predictable, such as the periodic jump.
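As a sanity check, the equivalence can be verified numerically on a finite truncation. The sketch below (reusing `binary_lattice_H` from the earlier sketch; all parameter values are illustrative) assembles the matrix (13) directly from the odd parity chain and compares it entry by entry with Eq. (2) under the identification \(\omega=F\), \(\Omega=\epsilon\), \(\lambda=V\):

```python
import numpy as np

def floquet_odd_parity(N, lam, omega, Omega):
    """Truncated matrix (13) on the odd parity chain
    ... |-, -1> <-> |+, 0> <-> |-, 1> ..., i.e. spin s = (-1)^n at index n."""
    n = np.arange(-N, N + 1)
    s = np.where(n % 2 == 0, 1.0, -1.0)        # spin component along the chain
    H = np.diag(0.5 * Omega * s + omega * n)   # diagonal of (13)
    hop = -lam * np.ones(2 * N)
    return H + np.diag(hop, 1) + np.diag(hop, -1)

# omega = F, Omega = eps, lam = V reproduces the binary-lattice matrix (2).
assert np.allclose(floquet_odd_parity(40, 0.2, 1.0, 0.3),
                   binary_lattice_H(40, 0.2, 1.0, 0.3))
```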
## IV Results and discussions
As demonstrated in Sec. III, the Hamiltonian matrix of the binary lattice with a static force is equivalent to that of the semiclassical Rabi model in the odd parity subspace. Consequently, analytical and numerical solutions developed for the semiclassical Rabi model can be readily extended to those for the binary lattice, and vice versa.
One of the most studied phenomena in the semiclassical Rabi model is the Bloch-Siegert shift [32], namely the shift of the resonance away from the bare condition. In the binary lattice, the zeroth order resonance occurs at \(\epsilon\approx F\): the on-site energies of sites \(0\) and \(1\), \(\epsilon/2\) and \(-\epsilon/2+F\), become degenerate at \(\epsilon=F\), and the hopping between them,
denoted as \(V\), breaks this degeneracy. According to degenerate perturbation theory, we obtain a \(2\times 2\) effective Hamiltonian matrix as follows:
\[\hat{H}_{2}=\left(\begin{array}{cc}\frac{\epsilon}{2}&-V\\ -V&-\frac{\epsilon}{2}+F\end{array}\right), \tag{14}\]
whose eigenenergies and eigenstates are
\[e_{\pm} = \frac{F\pm\Delta}{2}, \tag{15a}\] \[\left|\phi_{+}\right\rangle = \cos\frac{\theta}{2}\left|1\right\rangle-\sin\frac{\theta}{2} \left|0\right\rangle,\] (15b) \[\left|\phi_{-}\right\rangle = \cos\frac{\theta}{2}\left|0\right\rangle+\sin\frac{\theta}{2} \left|1\right\rangle, \tag{15c}\]
with \(\Delta=\sqrt{\left(\epsilon-F\right)^{2}+4V^{2}}\) and \(\theta=\arcsin\frac{2V}{\Delta}\). Clearly, the gap between the two eigenstates is given by \(\Delta\). It reaches its minimum at the resonance \(\epsilon=F\), which also determines the level anticrossing point. Near resonance, the eigenstates tend to be equally distributed between \(\left|0\right\rangle\) and \(\left|1\right\rangle\), while away from resonance they tend to localize on either \(\left|0\right\rangle\) or \(\left|1\right\rangle\). Near resonance, if the particle is initially localized at \(\left|0\right\rangle\), it will oscillate between \(\left|0\right\rangle\) and \(\left|1\right\rangle\) with a period of \(2\pi/\Delta\). Finally, the probability that the particle transfers to \(\left|1\right\rangle\) is given by
\[P_{0\to 1}=\frac{4V^{2}}{\Delta^{2}}\sin^{2}\left(\frac{\Delta t}{2} \right). \tag{16}\]
The amplitude of the oscillation is largest at resonance when \(F=\epsilon\).
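The perturbative gap \(\Delta\) can be compared with a numerically exact diagonalization, which also exposes the shift of the anticrossing away from \(\epsilon=F\). A minimal sketch follows, reusing `binary_lattice_H` from Sec. II; the truncation size and scan grid are our own choices:

```python
import numpy as np

F, V, N = 1.0, 0.2, 60

def exact_gap(eps):
    """Splitting of the anticrossing pair, i.e. the two levels nearest F/2."""
    E = np.linalg.eigvalsh(binary_lattice_H(N, V, F, eps))
    pair = E[np.argsort(np.abs(E - F / 2))[:2]]
    return abs(pair[1] - pair[0])

eps_grid = np.linspace(0.90, 1.02, 241)
gaps = [exact_gap(e) for e in eps_grid]
i = int(np.argmin(gaps))
# Should land near eps/F ~ 0.958 with a gap ~ 0.396, as quoted in the text,
# whereas the 2x2 result (15a) would put the minimum Delta = 2V at eps = F.
print(eps_grid[i], gaps[i])
```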
Figure 2 displays the eigenenergies of \(\hat{H}_{2}\) [Eq. (14)], together with the corresponding numerically exact results for \(\hat{H}\) [Eq. (2)]. Although the existence of the level anticrossing at \(\epsilon/F=1\) can be confirmed from \(\hat{H}_{2}\), it fails to predict the Bloch-Siegert shift, as depicted in the inset. For \(V/F=0.2\), the numerically exact results indicate that the resonance, or level anticrossing, occurs at \(\epsilon/F\approx 0.9579\), with a corresponding energy gap of \(\Delta_{\min}/F\approx 0.3958\).
Here we concentrate on the dynamics at resonance. Without loss of generality, we assume the initial state to be \(\left|\psi(0)\right\rangle=\left|0\right\rangle\). The probability of finding the particle at site \(n\) is given by \(P_{n}(t)=\left|\left\langle n|\psi(t)\right\rangle\right|^{2}\), as shown in Fig. 3. The dynamical behavior aligns with our earlier perturbative analysis: specifically, it exhibits periodic oscillations between sites \(0\) and \(1\) with a period of \(T=2\pi/\Delta_{\min}\).
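The jump dynamics can be reproduced by propagating the initial state through the spectral decomposition of the truncated Hamiltonian. A sketch (again reusing `binary_lattice_H`; the truncation and time grid are illustrative, while \(\epsilon/F\) and \(\Delta_{\min}\) are the values quoted above):

```python
import numpy as np

F, V, N = 1.0, 0.2, 60
eps, Delta_min = 0.9579, 0.3958        # zeroth order resonance from the text
E, U = np.linalg.eigh(binary_lattice_H(N, V, F, eps))

psi0 = np.zeros(2 * N + 1)
psi0[N] = 1.0                           # particle initially at site n = 0
t = np.linspace(0.0, 2 * np.pi / Delta_min, 400)

# |psi(t)> = U exp(-i E t) U^dagger |psi(0)>
amps = U @ (np.exp(-1j * np.outer(E, t)) * (U.conj().T @ psi0)[:, None])
P = np.abs(amps) ** 2                   # P[N + n, k] = P_n(t_k)
print(P[N + 1].max())                   # site-1 population peaks close to one
```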
To study the higher order resonance, we introduce the inverse participation ratio (IPR) [44], which is defined as
\[\text{IPR}=\frac{\sum_{n=-\infty}^{+\infty}\left|\left\langle n|\phi\right\rangle\right|^{4}}{\left(\sum_{n=-\infty}^{+\infty}\left|\left\langle n|\phi\right\rangle\right|^{2}\right)^{2}}, \tag{17}\]
where \(|\phi\rangle\) represents an arbitrary eigenstate of \(\hat{H}\) [Eq. (2)]. As demonstrated in Ref. [3], different eigenstates can be transformed into each other by translation and inversion operators, which do not influence the IPR. The IPR as a function of \(V\) and \(\epsilon\) is shown in Fig. 4. In general, the IPR tends to decrease with an increase in the hopping rate \(V\), suggesting that the eigenstates tend to become delocalized. On the contrary, the IPR tends to increase with an increase in the on-site energy mismatch \(\epsilon\), indicating that the eigenstates tend to become localized. Therefore, when \(V\) is small and \(\epsilon\) is large, the eigenstates tend to become localized with IPR\(\to 1\), as shown by the yellow region in the lower-right corner of Fig. 4. However, particular attention should be paid to the vicinity of the resonances \(\epsilon\approx(2n+1)F\). There the eigenstates tend to be a superposition of two nearly degenerate states, which leads to IPR\(\to 1/2\) at resonance. Writing the energy mismatch required to attain the \(n\)th order resonance as \(\epsilon=(2n+1)F-\delta\), the shift \(\delta\) corresponds to the Bloch-Siegert shift in the semiclassical Rabi model. Shirley determined the Bloch-Siegert shift in the semiclassical Rabi model by Salwen's perturbation theory [32], which can also be employed to describe the current model,
\[\delta=\left\{\begin{array}{cc}\frac{V^{2}}{F},&\text{for }n=0,\\ \frac{2n+1}{n(n+1)}\frac{V^{2}}{F},&\text{for }n\geq 1.\end{array}\right. \tag{18}\]
The dashed lines in Fig. 4 correspond to the resonance condition obtained by employing the Bloch-Siegert shift derived by Shirley, which is consistent with the numerical results, especially for \(V/F\ll 1\).
Figure 2: Eigenenergies as a function of \(\epsilon/F\) at \(V/F=0.2\). Blue solid lines represent the numerically exact results, while the red dotted lines correspond to the perturbative analytical results from Eq. (15a). The black dashed lines represent the on-site energies of \(\left|0\right\rangle\) and \(\left|1\right\rangle\). The inset provides a detailed view of the upper branch near the level anticrossing.
Figure 3: Dynamics of the probability distribution \(P_{n}(t)\) at the zeroth order resonance with \(\epsilon/F=0.9579\) and \(V/F=0.2\). The particle is initially located at site \(0\).
Figure 4: IPR as a function of \(V\) and \(\epsilon\). The dashed lines correspond to the resonant condition determined using the Bloch-Siegert shift derived by Shirley [32].
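The dip of the IPR toward \(1/2\) at resonance is easy to reproduce. The sketch below (reusing `binary_lattice_H`; the choice \(V/F=0.3\) and the edge-discarding heuristic are our own assumptions) evaluates the average bulk IPR at the Shirley-type estimate (18) of the first order resonance and slightly away from it:

```python
import numpy as np

def mean_bulk_ipr(V, eps, F=1.0, N=60):
    """Average IPR of the eigenstates, discarding the edge-dominated
    quarters of the sorted spectrum of the truncated lattice."""
    _, U = np.linalg.eigh(binary_lattice_H(N, V, F, eps))
    ipr = np.sum(np.abs(U) ** 4, axis=0)   # one value per normalized column
    q = len(ipr) // 4
    return ipr[q:-q].mean()

V, F, n = 0.3, 1.0, 1
delta = (2 * n + 1) / (n * (n + 1)) * V ** 2 / F   # Eq. (18), n >= 1
eps_res = (2 * n + 1) * F - delta

print(mean_bulk_ipr(V, eps_res))          # dips toward 1/2 near resonance
print(mean_bulk_ipr(V, eps_res + 0.5))    # closer to 1 away from resonance
```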
Numerical calculation indicates that the second order resonance occurs at \(\epsilon/F\approx 4.11467\) for \(V/F=1\) and the corresponding energy gap is \(\Delta_{\text{min}}/F\approx 0.03208\). The dynamics of the probability distribution \(P_{n}(t)\) is shown in Fig. 5. Instead of continuous transfer between adjacent sites, the dynamics shows a periodic jump between site 0 and site 5 with a period of \(T=2\pi/\Delta_{\text{min}}\). At the \(n\)th order resonance, we expect that the periodic jump between site 0 and site \((2n+1)\) will occur.
Figure 5: Dynamics of the probability distribution \(P_{n}(t)\) at the second order resonance with \(\epsilon/F=4.11467\) and \(V/F=1\). The particle is initially located at site 0.
## V Conclusions
In this paper, we conducted both analytical and numerical investigations of a binary lattice subjected to a static external force. We began by establishing the connections between the binary lattice and the semiclassical Rabi model - a periodically driven two-level system: the Floquet Hamiltonian of the semiclassical Rabi model within a specific parity subspace is precisely equivalent to the Hamiltonian of the binary lattice subjected to a static force. Consequently, solutions derived for the semiclassical Rabi model can be readily extended to the binary lattice and vice versa.
Here we concentrated on the resonance and level anticrossing phenomena in the binary lattice subjected to a static force, which are closely related to the Bloch-Siegert shift observed in the semiclassical Rabi model. At the \(n\)th order resonance [\(\epsilon\approx(2n+1)F\)], the eigenstates tend to be a superposition of the Wannier states \(|0\rangle\) and \(|2n+1\rangle\), while becoming localized on one of the Wannier states away from the resonance. This phenomenon can be confirmed through the IPR. When a particle initially resides at site 0, it exhibits periodic jumps between site 0 and site \((2n+1)\), rather than continuous hopping between adjacent sites. The period of the jumps is determined by the energy gap.
The correspondence between the binary lattice subjected to a static force and the semiclassical Rabi model provides insights into bridging condensed matter physics and quantum optics.
Appendix A Equivalence of the Hamiltonians between the binary lattice and the semiclassical Rabi model
In the basis state of \(|\pm x\rangle=\left(|+\rangle\pm|-\rangle\right)/\sqrt{2}\) which satisfy \(\hat{\sigma}_{x}\,|\pm x\rangle=\pm\,|\pm x\rangle\), the Floquet Hamiltonian (8) of the semiclassical Rabi model can be rewritten in a matrix form as follows:
\[\hat{\mathcal{H}}_{F}=\left(\begin{array}{cc}\omega\hat{E}_{0}-\lambda \left(\hat{E}_{+}+\hat{E}_{-}\right)&-\frac{\Omega}{2}\\ -\frac{\Omega}{2}&\omega\hat{E}_{0}+\lambda\left(\hat{E}_{+}+\hat{E}_{-} \right)\end{array}\right). \tag{19}\]
Furthermore, we can introduce the Fulton-Gouterman transformation [45; 46],
\[\hat{U}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ (-1)^{\hat{E}_{0}}&-(-1)^{\hat{E}_{0}}\end{array}\right), \tag{20}\]
with which Eq. (19) can be transformed into a diagonal form, namely, \(\hat{U}^{\dagger}\hat{\mathcal{H}}_{F}\hat{U}=\text{diag}\left(\hat{H}_{+},\hat{H}_{-}\right)\). \(\hat{H}_{+}\) and \(\hat{H}_{-}\) correspond to even \((\Pi=1)\) and odd \((\Pi=-1)\) parities, respectively. They are defined as
\[\hat{H}_{\pm} = \omega\hat{E}_{0}-\lambda\left(\hat{E}_{+}+\hat{E}_{-}\right)\mp\frac{\Omega}{2}(-1)^{\hat{E}_{0}} = \sum_{n=-\infty}^{+\infty}\left[\left(\omega n\mp\frac{\Omega}{2}(-1)^{n}\right)\left|n\right\rangle\left\langle n\right|-\lambda\left(\left|n\right\rangle\left\langle n+1\right|+\left|n+1\right\rangle\left\langle n\right|\right)\right]. \tag{21}\]
It is obvious that the Hamiltonian \(\hat{H}_{-}\) of (21), i.e., the one acting in the odd parity subspace, is equivalent to Eqs. (1) and (6), which is just the Hamiltonian of the binary lattice subjected to a static force.
Appendix B Differences in the time evolution between the binary lattice and the semiclassical Rabi model
For the binary lattice subjected to a static force described by Hamiltonian (1), we can assume that one of the eigenstates is denoted as
\[\left|\phi_{0}^{(L)}\right\rangle=\sum_{n=-\infty}^{+\infty}c_{n}\left|n \right\rangle, \tag{22}\]
with the corresponding eigenenergy \(e_{0}^{(L)}\). It is straightforward to confirm that
\[\left|\phi_{m}^{(L)}\right\rangle=\hat{E}_{+}^{2m}\left|\phi_{0}^{(L)}\right\rangle =\sum_{n=-\infty}^{+\infty}c_{n}\left|n+2m\right\rangle \tag{23}\]
are also eigenstates with eigenenergies \(e_{m}^{(L)}=e_{0}^{(L)}+2mF\) (\(m=0,\pm 1,\pm 2,\dots\)) [3], which form an equally spaced energy ladder. Given that the initial state is \(\left|\phi_{m}^{(L)}\right\rangle\), the time evolution is governed by the time-dependent wave function
\[\left|\psi_{m}^{(L)}(t)\right\rangle=\mathrm{e}^{-\mathrm{i}e_{m}^{(L)}t} \left|\phi_{m}^{(L)}\right\rangle, \tag{24}\]
which is obviously dependent on \(m\).
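The ladder structure \(e_{m}^{(L)}=e_{0}^{(L)}+2mF\) of Eq. (23) can be checked numerically: translating a bulk eigenvector by two sites must reproduce an eigenvector with the energy raised by \(2F\). A sketch, reusing `binary_lattice_H` from the main text (the wrap-around of `np.roll` is harmless as long as the chosen eigenstate decays before the edges):

```python
import numpy as np

F, V, eps, N = 1.0, 0.2, 0.3, 80
H = binary_lattice_H(N, V, F, eps)
E, U = np.linalg.eigh(H)

k = len(E) // 2                 # a bulk (Wannier-Stark localized) eigenstate
phi = U[:, k]
phi_shift = np.roll(phi, 2)     # the action of E_+^2 on the truncated chain

residual = H @ phi_shift - (E[k] + 2 * F) * phi_shift
print(np.max(np.abs(residual)))  # negligibly small for a well-localized state
```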
In the semiclassical Rabi model with Floquet Hamiltonian (8), we can assume that one of the eigenstates is denoted as
\[\left|\phi_{0}^{(R)}\right\rangle=\sum_{s=\pm}\sum_{n=-\infty}^{+\infty}c_{s, n}\left|s,n\right\rangle, \tag{25}\]
with the corresponding quasienergy \(e_{0}^{(R)}\). Similar to that in the binary lattice, we can also obtain a set of eigenstates written as
\[\left|\phi_{m}^{(R)}\right\rangle=\hat{E}_{+}^{2m}\left|\phi_{0}^{(R)}\right\rangle =\sum_{s=\pm}\sum_{n=-\infty}^{+\infty}c_{s,n}\left|s,n+2m\right\rangle, \tag{26}\]
with quasienergies \(e_{m}^{(R)}=e_{0}^{(R)}+2m\omega\). According to Floquet's theory, we need to introduce
\[\left|\phi_{m}^{(R)}\right\rangle=\sum_{s=\pm}\sum_{n=-\infty}^{+\infty}c_{s, n}\left|s,n+2m\right\rangle\rightarrow\left|\phi_{m}^{(R)}(t)\right\rangle= \sum_{s=\pm}\sum_{n=-\infty}^{+\infty}c_{s,n}\mathrm{e}^{\mathrm{i}(n+2m) \omega t}\left|s\right\rangle. \tag{27}\]
The time evolution corresponding to \(\left|\phi_{m}^{(R)}\right\rangle\) is given by
\[\left|\psi_{m}^{(R)}(t)\right\rangle = \mathrm{e}^{-\mathrm{i}e_{m}^{(R)}t}\left|\phi_{m}^{(R)}(t)\right\rangle\] \[= \mathrm{e}^{-\mathrm{i}\left(e_{0}^{(R)}+2m\omega\right)t}\sum_{ s=\pm}\sum_{n=-\infty}^{+\infty}\mathrm{e}^{\mathrm{i}(n+2m)\omega t}c_{s,n} \left|s\right\rangle\] \[= \mathrm{e}^{-\mathrm{i}e_{0}^{(R)}t}\sum_{s=\pm}\sum_{n=-\infty} ^{+\infty}\mathrm{e}^{\mathrm{i}n\omega t}c_{s,n}\left|s\right\rangle\] \[= \left|\psi_{0}^{(R)}(t)\right\rangle,\]
which does not depend on \(m\).
The difference in the time evolution is easy to understand. Despite the infinite-dimensional nature of the Floquet Hamiltonian in the semiclassical Rabi model, the original Hamiltonian is fundamentally two-dimensional, in stark contrast to that of the binary lattice. For the basis state \(\left|s,n\right\rangle=\left|s\right\rangle\left|n\right\rangle\), the spin component \(\left|s\right\rangle\) represents the physical state, while the Floquet state \(\left|n\right\rangle\) serves a purely auxiliary role. The inclusion of the Floquet state \(\left|n\right\rangle\) is essential for constructing the Floquet Hamiltonian, determining the corresponding quasienergies, and identifying the eigenstates. Nevertheless, it is invisible in the time evolution. Therefore, there exist differences in the time evolution between the binary lattice and the semiclassical Rabi model. Dynamical phenomena observed in the binary lattice with a static force, like Bloch-Zener oscillations, are challenging to detect in the semiclassical Rabi model and vice versa.
###### Acknowledgements.
This research was supported by the National Natural Science Foundation of China (NSFC) under Grant No.
12305032 and Zhejiang Provincial Natural Science Foundation of China under Grant No. LQ23A050003.
|
2304.13552 | Finite State Automata Design using 1T1R ReRAM Crossbar | Data movement costs constitute a significant bottleneck in modern machine
learning (ML) systems. When combined with the computational complexity of
algorithms, such as neural networks, designing hardware accelerators with low
energy footprint remains challenging. Finite state automata (FSA) constitute a
type of computation model used as a low-complexity learning unit in ML systems.
The implementation of FSA consists of a number of memory states. However, FSA
can be in one of the states at a given time. It switches to another state based
on the present state and input to the FSA. Due to its natural synergy with
memory, it is a promising candidate for in-memory computing for reduced data
movement costs. This work focuses on a novel FSA implementation using resistive
RAM (ReRAM) for state storage in series with a CMOS transistor for biasing
controls. We propose using multi-level ReRAM technology capable of
transitioning between states depending on bias pulse amplitude and duration. We
use an asynchronous control circuit for writing each ReRAM-transistor cell for
the on-demand switching of the FSA. We investigate the impact of the
device-to-device and cycle-to-cycle variations on the cell and show that FSA
transitions can be seamlessly achieved without degradation of performance.
Through extensive experimental evaluation, we demonstrate the implementation of
FSA on 1T1R ReRAM crossbar. | Simranjeet Singh, Omar Ghazal, Chandan Kumar Jha, Vikas Rana, Rolf Drechsler, Rishad Shafik, Alex Yakovlev, Sachin Patkar, Farhad Merchant | 2023-04-26T13:21:17Z | http://arxiv.org/abs/2304.13552v2 | # Finite State Automata Design using 1T1R ReRAM Crossbar
###### Abstract
Data movement costs constitute a significant bottleneck in modern machine learning (ML) systems. When combined with the computational complexity of algorithms, such as neural networks, designing hardware accelerators with low energy footprint remains challenging. Finite state automata (FSA) constitute a type of computation model used as a low-complexity learning unit in ML systems. The implementation of FSA consists of a number of memory states. However, FSA can be in one of the states at a given time. It switches to another state based on the present state and input to the FSA. Due to its natural synergy with memory, it is a promising candidate for in-memory computing for reduced data movement costs. This work focuses on a novel FSA implementation using resistive RAM (ReRAM) for state storage in series with a CMOS transistor for biasing controls. We propose using multi-level ReRAM technology capable of transitioning between states depending on bias pulse amplitude and duration. We use an asynchronous control circuit for writing each ReRAM-transistor cell for the on-demand switching of the FSA. We investigate the impact of the device-to-device and cycle-to-cycle variations on the cell and show that FSA transitions can be seamlessly achieved without degradation of performance. Through extensive experimental evaluation, we demonstrate the implementation of FSA on 1T1R ReRAM crossbar.
FSA, Machine Learning, ReRAM, Memristors, In-Memory Computing
## I Introduction
In-memory computing (IMC) using memristive devices has become popular for alleviating the von Neumann bottleneck by storing and processing data in memory, especially for machine learning (ML) applications [1, 2, 3, 4, 5]. Memristive devices connected in a crossbar structure allow programs to run in parallel, making IMC comparable with conventional computing in terms of energy efficiency and performance [6]. Memristive devices, such as resistive random access memory (ReRAM) [7], can be configured as multi-level cells [8], where the device has multiple intermediate states between the low resistive state (LRS) and high resistive state (HRS). Still, modern ML workloads require massive storage resources and parallelism to accelerate. However, finite state automata (FSA) capture the real-world constraint of finite memory to implement learning applications [9]. The multi-level behavior of ReRAM can be used to design FSA in memory.
An FSA is an abstract machine that is in exactly one of a finite number of states at any given time. The FSA changes its state from one to another, called a transition, when a specific event occurs [10]. FSAs have a significant edge in applications that require low-latency or real-time processing, such as automated verification [11], autonomous vehicles [12], and the Tsetlin machine [13]. The energy efficiency, high density, and IMC properties of ReRAM devices make them suitable for FSA implementation [14, 15]. Multiple states of ReRAM, between LRS and HRS, can be mapped to the states of an FSA. An FSA cell can be represented as a single ReRAM device with a CMOS transistor in series (1T1R), as shown in Fig. 1(a). The material stack of the memristive cell is shown in Fig. 1(b), along with the I-V characteristics of the 1T1R cell in Fig. 1(c). Fig. 1(c) shows the multiple states in SET (switching to LRS) and RESET (switching to HRS) operations, which have been utilized in this study to implement the FSA on ReRAM devices [16].
However, transitioning the FSA from one state to another while accurately detecting the current FSA state under device variations remains challenging. In this paper, we propose an architecture to implement the FSA on the ReRAM crossbar for IMC. We show in the proposed architecture how a single 1T1R cell can be used to design a six-state FSA. Next, we evaluate the proposed architecture in terms of energy efficiency and performance. Moreover, we assess the architecture under device variations such as device-to-device (D2D) and cycle-to-cycle (C2C) [17]. To summarize, the main contributions are:
* The integration of 1T1R ReRAM technology into FSA design by utilizing the multi-level behavior of ReRAM. The gradual RESET method has been utilized to achieve the multi-level behavior.
* Investigation of the impact of D2D and C2C variations on state transitions and detection.
* Extensive evaluation of the efficiency of the proposed architecture in terms of energy efficiency and latency.
The remainder of the paper is organized as follows: Section II presents the proposed architecture to design FSA on ReRAM. Section III presents experimental results and validation for the design. Finally, Section IV concludes the paper.
Fig. 1: 1T1R cell for FSA (a) cell structure, (b) device material stack, and (c) multi-level characteristic of a device.
## II Proposed Architecture
This section discusses the architecture to implement the FSA on the 1T1R ReRAM crossbar, depicted in Fig. 2. At the core of the proposed architecture are 1T1R cells, referred to as finite automaton (FA) cells, connected in a crossbar structure.
### _FA using 1T1R cell_
The FA cell is constructed using one memristor and one NMOS transistor in series. The structure of a single FA cell is shown in Fig. 1. The multi-level characteristics of the FA, which have been mapped to different states, are shown in the I-V characteristics displayed in Fig. 1(c). In this study, the FA has seven states (\(s=7\)), from \(S_{0}\) to \(S_{6}\), ranging from LRS to HRS. \(S_{0}\) has a minimum resistance of around \(7.8\)K\(\Omega\), and \(S_{6}\) has a maximum resistance of around \(1.5\)M\(\Omega\). All other states are mapped onto intermediate values between \(S_{0}\) and \(S_{6}\). Each FA in the crossbar represents six states (excluding \(S_{0}\)), and they can be independently programmed. However, multiple FAs can be combined to run a complex application that needs more than six states. For this work, we limit our study to the operation of independent FAs in the crossbar. The state transitions of an FA are examined next.
### _State transitions in FA_
An FA has a finite number of states, seven in this study, and it changes from one state to the next state (\(q_{i+1}\)) based on the input (\(x_{i}\)) and the current state (\(q_{i}\)), similar to a Mealy machine. The pulse generation module in Fig. 2 can generate different pulse widths at a fixed voltage amplitude. The control circuit selects the appropriate signal for the next state based on the present state and input. The parameters to switch from \(S_{0}\) to any possible state are given in Table I. Next, the analog demultiplexer (DeMUX) and bit-line encoder select an FA in the crossbar by applying an ON voltage to the NMOS transistor and the required pulse signal at the row of the crossbar.
For the functional correctness of the FSA transition, it is important to identify the present state correctly. The FSA can jump from the present state to any other possible state in the FA, and the same state is expected to yield the same read current regardless of the transition path. However, the gradual RESET method limits state transitions to the forward direction only (\(S_{1}\to S_{6}\)). In order to reach a state lower than the current one (backward direction), the FA needs to switch to \(S_{0}\) (an intermediate state) before switching to the desired state. It is also expected that states can be correctly identified after switching in either the forward or backward direction. Therefore, an intermediate state is added to every state transition in the FA, which provides three main advantages: (a) switching to any state in the FA, (b) state retention while looping in the same state, and (c) reduced complexity of the control circuitry.
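To make the via-\(S_{0}\) routing concrete, the sketch below encodes Table I as a pulse lookup and emits the pulse sequence for an arbitrary transition. This is a minimal Python illustration; names such as `pulses_for_transition` and the tuple encoding of pulses are ours, not from the paper.

```python
# Pulse parameters taken from Table I; every transition is routed through
# the intermediate state S0, as described above.
PULSE_WIDTH_NS = {1: 5, 2: 10, 3: 15, 4: 30, 5: 60, 6: 150}  # RESET at V_fixed = 1.8 V
SET_TO_S0 = (-2.0, 10)  # (voltage in V, width in ns): 10 ns at -2 V switches to S0 (LRS)

def pulses_for_transition(target: int) -> list:
    """Pulse sequence moving an FA cell from any present state to `target`.

    Gradual RESET only moves the state forward (S1 -> S6), so the cell is
    first SET back to S0 and then RESET with the pulse width of the target.
    """
    assert 1 <= target <= 6, "FA states are S1..S6 (S0 is the intermediate state)"
    return [SET_TO_S0, (1.8, PULSE_WIDTH_NS[target])]

print(pulses_for_transition(2))  # [(-2.0, 10), (1.8, 10)] for, e.g., S5 -> S2
```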
The state transition graph of a single FA is shown in Fig. 3(a). It can be used for learning applications such as the Krinsky automaton [18]. The mapping of a Krinsky learning automaton is shown in Fig. 3(b). On an unfavorable response (\(\beta=1\)), the automaton gradually moves toward the boundary states separating the two actions, behaving like binary states. On a favorable response (\(\beta=0\)), \(S_{i}\) switches to \(S_{1}\) and \(S_{4}\) for \((1\leq i\leq 3)\) and \((4\leq i\leq 6)\), respectively. The proposed architecture is highly flexible regarding its control circuit and switching characteristics. It provides the facility to transition from the current state to any next state via \(S_{0}\) with an adaptive STG control unit. The proposed approach can accommodate FSAs with more than six states by utilizing multiple 1T1R cells arranged to represent different states and encoded into binary form, offering flexibility for varying numbers of states.
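As an illustration of how the FA can serve as a learning unit, the following sketch implements the Krinsky update rule as we read it from Fig. 3(b); the function name and the exact penalty rule (one step toward the \(S_{3}\)/\(S_{4}\) boundary) are our interpretation.

```python
def krinsky_next(state: int, beta: int) -> int:
    """One Krinsky learning-automaton update on the six FA states.

    States S1-S3 select action 1 and S4-S6 select action 2. A favourable
    response (beta = 0) jumps to S1 or S4; an unfavourable response
    (beta = 1) moves one step toward the boundary between the two actions.
    """
    assert 1 <= state <= 6 and beta in (0, 1)
    if beta == 0:
        return 1 if state <= 3 else 4
    return state + 1 if state <= 3 else state - 1

# Three consecutive penalties walk S1 across the action boundary to S4:
s = 1
for _ in range(3):
    s = krinsky_next(s, beta=1)
print(s)  # 4
```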
### _Peripherals to control FSA_
Various peripherals around the crossbar are required to implement the FSA on the ReRAM crossbar, as shown in Fig. 2. The control unit decides the transitions in the FA, which are functions of \(x_{i}\) and \(q_{i}\).
_Pulse generation module_ generates the voltage pulses with the required duration for transitioning from one state to another according to Table I. Since state \(S_{0}\) is an intermediate state, the small gap between \(S_{0}\) and \(S_{1}\) will not pose an issue in state estimation.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{\(V_{\text{fixed}}=1.8\)V, \(\text{V}_{\text{SET}}=-2V\), \(\text{V}_{\text{READ}}=0.1V\)} \\ \hline State & \(\text{P}_{\text{width}}\) (\(ns\)) & \(I_{\text{d}}\) (\(\mu\)A) & Resistance (K\(\Omega\)) \\ \hline S0 & 10ns at -2V & 12.8 & 7.8 \\ S1 & 5ns & 12.6 & 8.0 \\ S2 & 10ns & 1.6 & 95.2 \\ S3 & 15ns & 0.56 & 196.1 \\ S4 & 30ns & 0.3 & 342.5 \\ S5 & 60ns & 0.2 & 588.2 \\ S6 & 150ns & 0.07 & 1492.5 \\ \hline \end{tabular}
\end{table} TABLE I: State transition of 1T1R cell
Fig. 3: State transitions in a FA cell (a) and (b) shows the use of FSA as Krinsky learning automaton [18].
Fig. 2: Architecture to train the FSA using 1T1R cell (\(FA_{mn}\)), where ‘\(m\)’ and ‘\(n\)’ represent rows and columns in a crossbar, respectively, and ‘\(p\)’ is the number of ADC bits, which is given as \(\lceil\log_{2}(s)\rceil\), ‘\(s\)’ is the number of states in a FA cell.
_Multiplexer (MUX), demultiplexer (DeMUX), and bit-line encoder_ are the selection peripherals. For a given \((m\times n)\) crossbar, a \((1\times m)\) DeMUX is attached to select a row of the crossbar and apply a transition pulse. The bit-line encoder enables the transistor of the selected FA. At the column side, an \((n\times 1)\) MUX is connected to read the state of an FA.
_Current sense amplifier (CSA) and analog-to-digital converter (ADC)_ are the sensing peripherals. The CSA converts the current to an amplified voltage, which is then used by the ADC to detect the current state of the FA. A common CSA and ADC are used in the proposed architecture, where one FA can be read in each cycle. A '\(p\)'-bit ADC is required to correctly detect the '\(s\)' states of an FA.
_Control unit_ contains the algorithms to switch the states of the FA in sequence. The control unit manipulates the select lines of the MUX and DeMUX to select an FA and generates the required pulses for transitions via the pulse generation module. It takes input from the digital interface (\(x_{i}\)) and the ADC (\(q_{i}\)) and calculates the required control signal to switch the state to \(q_{i+1}\); that is, \(q_{i+1}=q_{i+1}(q_{i},x_{i})\), where \(0\leq|x_{i}|\leq 1\) and \(S_{1}\leq q_{i}\leq S_{6}\).
## III Experimental Results
This section evaluates the proposed methodology in terms of energy efficiency and latency. First, we study the switching characteristics of an FA and the state transitions from one state to another. Next, we look at the impact of D2D and C2C variations on state transitions.
### _1T1R cell characterization_
The 1T1R cell used in this study is designed using a Pt/Ti/TiO\({}_{x}\)/HfO\({}_{2}\)/Pt material stack memristive device in series with a 45nm transistor. The material stack used in the memristive device adheres to the characteristics of the experimental devices. Fig. 1 shows the configuration of the 1T1R cell along with the material stack of the device and its I-V characteristics. Table II shows the parameters used for the ReRAM device model. The device has multi-level characteristics, and the gradual RESET method has been used to achieve this behavior. In the gradual RESET method, the device is first initialized to the LRS by applying a SET voltage pulse of a specific duration; RESET pulses of increasing width (Table I) then move the device gradually towards higher-resistance states, as shown in Fig. 4.
Fig. 4: Multi-states behavior using gradual RESET method.
Fig. 5: State switching from \(S_{0}\) to \(S_{6}\). For the desired state, the device is first switched to \(S_{0}\) and then directly switched to the required state by applying the appropriate pulse.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Symbol & Value & Symbol & Value \\ \hline \(\mathrm{l_{cell}}\) & 3 \(nm\) & \(\mathrm{l_{disc}}\) & 4 \(nm\) \\ \(\mathrm{r_{disc}}\) & 20 \(nm\) & \(\mathrm{N_{plug}}\) & \(20\times 10^{26}\)\(m^{-3}\) \\ \(a\) & 0.25 \(nm\) & \(\mu_{n}\) & \(1\times 10^{-6}\)\(m^{2}/Vs\) \\ \(\epsilon\) & 17 \(\epsilon_{0}\) & \(\mathrm{N_{disc,min}}\) & \(0.008\times 10^{26}\)\(m^{-3}\) \\ \(\epsilon_{\phi B}\) & 5.5 \(\epsilon_{0}\) & \(\mathrm{N_{disc,max}}\) & \(20\times 10^{26}\)\(m^{-3}\) \\ \(e\phi_{Bn0}\) & 0.3 \(eV\) & \(e\phi_{Bn}\) & 0.1 \(eV\) \\ \(\Delta W_{A}\) & 0.7 \(eV\) & \(\alpha\) & 0.00392 \(K^{-1}\) \\ \(\mathrm{R_{series}}\) & 650 \(\Omega\) & \(\mathrm{R_{0}}\) & 719.244 \(\Omega\) \\ \(\mathrm{R_{th,line}}\) & 90.47 \(kK/W\) & \(\mathrm{R_{th0}}\) & \(1.572\times 10^{7}\)\(K/W\) \\ \hline \end{tabular}
\end{table} TABLE II: Model parameters
To reduce the complexity of the control circuit and enable accurate state detection, the intermediate state is used in both forward and backward switching. Fig. 5 shows the switching of the states through the intermediate state. Table I also lists each state's current and resistance, indicating enough margin between adjacent states' current/resistance values for accurate state detection.
### _Impact of variations_
The D2D and C2C variations in ReRAM devices can affect the switching behavior. To simulate D2D variations, random values for device parameters such as the radius and length of the disc and the minimum and maximum oxygen vacancy concentrations in the disc are drawn from an experimentally verified Gaussian distribution [17]. These variations are then independently applied to the devices in the crossbar. C2C variations are simulated by changing the variable parameters within a single cycle. It has been observed that states \(S_{0}\) to \(S_{3}\) (low states) are more strongly affected by the variations than the high states (\(S_{4}\) to \(S_{6}\)): around a \(\pm\)50% change in the read current for low states versus \(\pm\)20% for high states. However, for low states, the margin between states is more than five-fold, which enables accurate state detection even under the larger impact of variations. Moreover, switching through the intermediate state prevents error accumulation over time and reduces the impact of variations.
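The following sketch mimics this simulation methodology: one independent parameter set is drawn per crossbar device from Gaussian distributions around the Table II nominals. The relative spread below is an illustrative stand-in, not the experimentally verified value of [17].

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Nominal disc geometry / oxygen-vacancy parameters (Table II).
NOMINAL = {"r_disc": 20e-9, "l_disc": 4e-9,
           "N_disc_min": 0.008e26, "N_disc_max": 20e26}
REL_SIGMA = 0.05  # assumed relative standard deviation (placeholder)

def sample_crossbar(n_devices: int) -> list:
    """Draw one independent parameter set per device (D2D variation)."""
    return [{k: rng.normal(v, REL_SIGMA * v) for k, v in NOMINAL.items()}
            for _ in range(n_devices)]

devices = sample_crossbar(8 * 8)  # e.g. an 8x8 FA crossbar
print({k: f"{v:.3e}" for k, v in devices[0].items()})
```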
### _Control circuitry_
An asynchronous digital controller has been designed using Workcraft [19] to coordinate data flow between the different components of the architecture. Faster operation and lower power consumption are among the advantages of asynchronous circuits over globally clocked circuits. The control unit's signal transition graphs (STG) [20] for state detection (read cycle) and state transitions (write cycle) are shown in Fig. 6. When there is a request to read the state of an FA, the control unit receives a data reading strobe (_DR+_) from the digital interface environment, and a reading cycle begins. It activates the MUX and bit-line encoder to select a device (MEN+). Next, it starts reading the data of the \(n_{th}\) FA (DN+) after receiving an acknowledgment from the MUX (MACK+). Lastly, the data is read, and the next read cycle is prepared by resetting DN-, MEN-, and MACK-. The final ACK signal is sent out as an acknowledgment to the digital interface.
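The read cycle can be viewed as a strictly ordered handshake. The toy monitor below encodes the event order described above; treating the three reset events (DN-, MEN-, MACK-) as one step is our simplification, since their relative order is not fixed in the prose.

```python
# Expected event order of the read cycle; "read" stands for the actual
# data acquisition and "reset" for the DN-/MEN-/MACK- group.
READ_CYCLE = ("DR+", "MEN+", "MACK+", "DN+", "read", "reset", "ACK")

class ReadCycleMonitor:
    """Flags any event that arrives out of the handshake order."""
    def __init__(self):
        self.step = 0

    def fire(self, event: str) -> None:
        expected = READ_CYCLE[self.step]
        if event != expected:
            raise RuntimeError(f"got {event!r}, expected {expected!r}")
        self.step = (self.step + 1) % len(READ_CYCLE)  # wrap for the next cycle

mon = ReadCycleMonitor()
for e in READ_CYCLE:
    mon.fire(e)  # a well-formed cycle passes silently
```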
The control circuit for state transitions provides the facility to move the FA from any present state to any other possible state. This increases the flexibility of the proposed architecture to run any FSA application. However, every state transition in the FA is performed via \(S_{0}\) to maintain functional correctness and reduce controller complexity. The control circuitry handles this by generating an acknowledgment signal that covers both the transition to \(S_{0}\) and to the desired state (\(S_{n}\)). The transition cycle is initiated whenever there is a request on the DW+ signal from the environment. The first step of a state transition is switching to \(S_{0}\). The FA is selected by enabling the row multiplexer and bit-line encoder (MEN+). The FA changes from the \(n_{th}\) state to \(S_{0}\) and disables the row MUX before sending the acknowledgment for the final transition. At this point in the cycle, the FA is in \(S_{0}\) and ready to be switched to the desired state (\(S_{n}\)). Similar to the \(S_{0}\) switching, the final transition starts by enabling the peripherals. Before the transition cycle finishes, the signal DN- initiates ACK+, which is delivered to the digital interface, resetting DW-, MEN-, and MACK-. The final acknowledgment to the digital interface is the ACK- signal, which indicates a successful state transition.
### _Energy and latency analysis_
Each state transition in the FA consumes a different amount of energy, as given in Table III. On average, every FA transition consumes 7.5pJ. An FA has to switch to the intermediate state before the desired transition, which increases energy consumption. However, the intermediate state makes the state switching robust against D2D and C2C variations. Since the proposed architecture uses different pulse durations for state transitions, and state \(S_{6}\) requires a 150ns pulse, the pulse generation module allocates a 150ns period for every pulse, with varying widths. Latency can be improved further by increasing the voltage amplitude of the applied pulse, which in turn increases energy consumption. So, there is a trade-off between energy consumption and latency.
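A quick arithmetic check of the quoted 7.5 pJ average, using the per-transition energies of Table III:

```python
# Per-transition energies (pJ) from Table III.
energy_pj = {"S0->S1": 1.74, "S1->S2": 8.2, "S2->S3": 8.3,
             "S3->S4": 8.5, "S4->S5": 8.8, "S5->S6": 9.25}
avg = sum(energy_pj.values()) / len(energy_pj)
print(f"average transition energy: {avg:.2f} pJ")  # ~7.47 pJ, i.e. the reported 7.5 pJ
```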
## IV Conclusions
In this work, for the first time, we proposed the architecture to implement the FSA using a 1T1R ReRAM crossbar. This paper offers insights into the scope of FSA utilizing ReRAM and CMOS technology. We use the multi-level characteristics of ReRAM, achieved using the gradual RESET method, to implement FSA on the crossbar. We studied the impact of variation on state transitions. Finally, we evaluated the proposed framework in terms of latency and energy consumption. The results are encouraging and demonstrate the potential for using ReRAM-based FSA designs. We will explore the prototyping of the proposed designs and test the architecture with learning automaton applications in the future.
## Acknowledgments
This work was supported in part by the Federal Ministry of Education and Research (BMBF, Germany) in the project NEUROTEC II under Project 16ME0398K, Project 16ME0399, German Research Foundation (DFG) within the Project PLiM (DR 287/35-1, DR 287/35-2) and through Dr. Suhas Pai Donation Fund at IIT Bombay.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline State & Intermediate & Energy & State & Intermediate & Energy \\ Switching & State & (\(pJ\)) & Switching & State & (\(pJ\)) \\ \hline \(S_{0}\to S_{1}\) & – & 1.74 & \(S_{1}\to S_{2}\) & \(S_{0}\) & 8.2 \\ \(S_{2}\to S_{3}\) & \(S_{0}\) & 8.3 & \(S_{3}\to S_{4}\) & \(S_{0}\) & 8.5 \\ \(S_{4}\to S_{5}\) & \(S_{0}\) & 8.8 & \(S_{5}\to S_{6}\) & \(S_{0}\) & 9.25 \\ \hline \multicolumn{4}{|c|}{**Average energy**} & \multicolumn{4}{c|}{**7.5pJ**} \\ \hline \end{tabular}
\end{table} TABLE III: 1T1R cell energy consumption
Fig. 6: STG of the control unit, (a) state transition cycle, and (b) reading the present state in FA. |
2310.18598 | Domain Generalisation via Risk Distribution Matching | We propose a novel approach for domain generalisation (DG) leveraging risk
distributions to characterise domains, thereby achieving domain invariance. In
our findings, risk distributions effectively highlight differences between
training domains and reveal their inherent complexities. In testing, we may
observe similar, or potentially intensifying in magnitude, divergences between
risk distributions. Hence, we propose a compelling proposition: Minimising the
divergences between risk distributions across training domains leads to robust
invariance for DG. The key rationale behind this concept is that a model,
trained on domain-invariant or stable features, may consistently produce
similar risk distributions across various domains. Building upon this idea, we
propose Risk Distribution Matching (RDM). Using the maximum mean discrepancy
(MMD) distance, RDM aims to minimise the variance of risk distributions across
training domains. However, when the number of domains increases, the direct
optimisation of variance leads to linear growth in MMD computations, resulting
in inefficiency. Instead, we propose an approximation that requires only one
MMD computation, by aligning just two distributions: that of the worst-case
domain and the aggregated distribution from all domains. Notably, this method
empirically outperforms optimising distributional variance while being
computationally more efficient. Unlike conventional DG matching algorithms, RDM
stands out for its enhanced efficacy by concentrating on scalar risk
distributions, sidestepping the pitfalls of high-dimensional challenges seen in
feature or gradient matching. Our extensive experiments on standard benchmark
datasets demonstrate that RDM shows superior generalisation capability over
state-of-the-art DG methods. | Toan Nguyen, Kien Do, Bao Duong, Thin Nguyen | 2023-10-28T05:23:55Z | http://arxiv.org/abs/2310.18598v1 | # Domain Generalisation via Risk Distribution Matching
###### Abstract
We propose a novel approach for domain generalisation (DG) leveraging risk distributions to characterise domains, thereby achieving domain invariance. In our findings, risk distributions effectively highlight differences between training domains and reveal their inherent complexities. In testing, we may observe similar, or potentially intensifying in magnitude, divergences between risk distributions. Hence, we propose a compelling proposition: Minimising the divergences between risk distributions across training domains leads to robust invariance for DG. The key rationale behind this concept is that a model, trained on domain-invariant or stable features, may consistently produce similar risk distributions across various domains. Building upon this idea, we propose **R**isk **D**istribution **M**atching (RDM). Using the maximum mean discrepancy (MMD) distance, RDM aims to minimise the variance of risk distributions across training domains. However, when the number of domains increases, the direct optimisation of variance leads to linear growth in MMD computations, resulting in inefficiency. Instead, we propose an approximation that requires only one MMD computation, by aligning just two distributions: that of the worst-case domain and the aggregated distribution from all domains. Notably, this method empirically outperforms optimising distributional variance while being computationally more efficient. Unlike conventional DG matching algorithms, RDM stands out for its enhanced efficacy by concentrating on scalar risk distributions, sidestepping the pitfalls of high-dimensional challenges seen in feature or gradient matching. Our extensive experiments on standard benchmark datasets demonstrate that RDM shows superior generalisation capability over state-of-the-art DG methods.
## 1 Introduction
In recent years, deep learning (DL) models have witnessed remarkable achievements and demonstrated super-human performance on training distributions [27]. Nonetheless, this success is accompanied by a caveat - deep models are vulnerable to distributional shifts and exhibit catastrophic failures to unseen _out-of-domain_ data [12, 34]. Such limitations hinder the widespread deployment of DL systems in real-world applications, where _domain difference_ can be induced by several factors, such as spurious correlations [2] or variations in location or time [49].
In light of these challenges, domain generalisation (DG) aims to produce models capable of generalising to _unseen_ target domains by leveraging data from diverse sets of training domains or environments [38]. An effective approach involves exploring and establishing _domain invariance_[32], with the expectation that these invariances will similarly apply to related, yet distinct, test domains. To this end, prevailing research focuses on characterising domains through sample representation [31, 38]. The objective is to seek for domain-invariant features by aligning the distributions of hidden representations across various domains. CORAL [54] trains a non-linear transformation that can align the second-order statistics of representations across different layers within deep networks. More, CausIRL [11]
Figure 1: Risk distributions derived from training with ERM for the “Art” and “Photo” domains on the validation set of PACS dataset. Beyond low-risk samples, which may resemble training data, the “Photo” domain generally exhibits a larger distribution of risk values compared to “Art”, hinting at an inherent complexity in learning “Photo” samples. The figure indicates our motivation that _risk distributions_ can effectively highlight differences between domains.
aims to match representation distributions in which the spurious factors have been intervened upon. While these methods show promise, they can face multiple challenges with the curse of dimensionality [6, 22]. The sparsity of high-dimensional representation spaces can lead to unreliable estimates of statistical properties, which in turn affects the quality of distribution matching techniques. Also, high-dimensional representations may contain many irrelevant or redundant dimensions, which can introduce noise into the true underlying similarities or differences between distributions. As dimensionality rises, computational complexity intensifies, reducing the efficacy of these methods [39]. Such challenges are similarly present in DG methods that utilise gradients for domain alignment [46, 50].
In this paper, we propose to utilise _scalar risk distributions_ as a means to characterise domains, leading to successfully exploring and enforcing domain invariance. Our research reveals that risk distributions can be a reliable indicator of _domain variation_ as they effectively highlight differences between training domains. In Figure 1, we present visual evidence through histograms, contrasting the risk distributions between the "Art" and "Photo" domains on the validation set of PACS dataset [30], derived from training with Empirical Risk Minimisation (ERM) [57]. The "Photo" domain generally exhibits a larger distribution of scalar risks than that of "Art". This suggests an inherent complexity in learning "Photo" samples, or possibly a more limited training dataset compared to "Art". During the testing phase, similar divergences between risk distributions may emerge, potentially intensifying in magnitude. Hence, we propose a compelling proposition: by _minimising the divergences between risk distributions across training domains_, we can achieve robust invariance for DG. The underlying rationale for this concept is that a model, when learning domain-invariant and stable features, tends to produce consistent risk distributions across domains.
Building upon this idea, we propose a novel matching approach for DG, namely **R**isk **D**istribution **M**atching (RDM). RDM's objective is to minimise the _variance of risk distributions_ across all training domains. Inspired by [38], we redefine the distributional variance metric to focus specifically on risk distributions and propose to compute it via the maximum mean discrepancy (MMD) distance [19]. However, when the number of training domains increases, directly optimising the variance induces a linear growth in MMD computations, reducing efficiency. Instead, we propose an approximation that requires only _one MMD computation_ via aligning just two distributions: that of the _worst-case_ (or worst-performing) domain and the aggregated distribution from all domains. Empirically, this approach outperforms optimising distributional variance while significantly reducing computational complexity. Unlike prevailing matching algorithms, RDM can address the high-dimensional challenges and further improve efficacy by exclusively focusing on scalar risk distributions. Notably, our empirical studies show that RDM even exhibits enhanced generalisation while being more convenient to optimise. We summarise our contributions below:
* We propose RDM, a novel and efficient matching method for DG, based on our two hypotheses: i) risk distribution disparities offer insightful cues into domain variation; ii) reducing these divergences fosters a generalisable and invariant feature-learning predictor.
* We re-conceptualise the distributional variance metric to exclusively focus on risk distributions, with an objective to minimise it. We further provide an approximate version that aligns only the risk distribution of the worst-case domain with the aggregate from all domains, improving both performance and efficiency.
* Through extensive experiments on standard benchmark datasets, we empirically show that RDM consistently outperforms state-of-the-art DG methods, showcasing its remarkable generalisation capability.
## 2 Related Work
Domain Generalisation (DG)DG aims to develop models that can generalise well on unseen target domains by leveraging knowledge from multiple source domains. Typical DG methods include domain alignment [7, 32, 38], meta learning [4, 29], data augmentation [61, 63], disentangled representation learning [44, 55], robust optimisation [8, 48] and causality-based methods [41, 25, 13]. Our proposed method RDM is related to domain alignment, striving for _domain invariance_ to enhance OOD generalisation. Existing research focuses on characterising domains through sample representations and aligning their distributions across domains to achieve domain-invariant features [38, 1]. CORAL [54] matches mean and variance of representation distributions, while MMD-AAE [31] and FedKA [56] consider matching all moments via the maximum mean discrepancy (MMD) distance [19]. Other methods promote domain invariance by minimising contrastive loss [10] between representations sharing the same labels [37, 35]. Many studies bypass the representation focus, instead characterising domains via gradients and achieving invariance by reducing inter-domain gradient variance [46, 50, 60].
Despite their potential, aligning these high-dimensional distributions may be affected by data sparsity, diversity, and high computational demands [6, 22]. Unlike these methods, RDM offers enhanced efficacy by focusing on _scalar risk distributions_, overcoming the high-dimensional challenges. Further, RDM adopts a novel strategy by efficiently aligning only two distributions: that of the worst-case domain
with the aggregate from all domains. From our experiments, RDM generally exhibits better generalisation performance while being more convenient to optimise compared to competing matching techniques. To the best of our knowledge, the incarnation of risk distributions for domain matching in RDM is novel and sensible.
Distribution matchingDistribution matching has been an important topic with a wide range of applications in machine learning such as DG [31, 54], domain adaptation [9, 59], generative modelling [28, 33]. Early methods, like the MMD distance [19], leverage kernel-based approaches to quantify the distance between distributions, laying the foundation for many subsequent DG techniques [31, 38]. Further advancements have explored optimal transport methods, like the Wasserstein distance [3, 36], which provides a geometrically intuitive means to compare distributions. Other metrics, such as the Kullback-Leibler [26] or Jensen-Shannon [14, 16] divergences, can serve to measure the divergence between distributions and may require additional parameters for estimating the density ratio between the two distributions [53]. In this paper, we utilise the MMD distance to align risk distributions. Its inherent advantages include an analytical measure of the divergence between distributions without relying on distribution densities, and its non-parametric nature [19]. Alternative DG methods augment data by utilising distribution matching and style transfer to generate semantic-preserving samples [62, 63]. Our method differs as we emphasise _domain invariance via aligning risk distributions_, rather than augmenting representation distributions.
Invariance and Causality in DGCausal methods in DG assume that the causal mechanism of the target given causal input features is invariant while non-causal features may change across domains [2, 45, 25]. Based on this assumption, methods establish domain invariance to recover the causal mechanism, thereby improving generalisation. ICP [45] has shown that the causal predictor has an invariant distribution of residuals in regression models, however, is not suitable for deep learning. EQRM [13] and REx [25] leverage the invariance in the _average_ risks over samples across domains. In contrast to above methods, we consider matching _entire risk distributions_ over samples across domains, which, as our experiments demonstrate, is more powerful and enhances generalisation capability.
## 3 Preliminaries
Domain generalisation (DG) involves training a classifier \(f\) on data composed of multiple training domains (also called environments) so that \(f\) can perform well on unseen domains at test time. Mathematically, let \(\mathcal{D}=\left\{\mathcal{D}_{1},...,\mathcal{D}_{m}\right\}\) denote the training set consisting of \(m\) different domains/environments, and let \(\mathcal{D}_{e}:=\left\{\left(x_{e}^{i},y_{e}^{i}\right)\right\}_{i=1}^{n_{e}}\) denote the training data belonging to domain \(e\) (\(1\leq e\leq m\)). Given a loss function \(\ell\), the risk of a particular domain sample \(\left(x_{e}^{i},y_{e}^{i}\right)\) is denoted by \(R_{e}^{i}:=\ell\left(f\left(x_{e}^{i}\right),y_{e}^{i}\right)\), and the expected risk \(\overline{R}_{e}\) of domain \(e\) is defined as:
\[\overline{R}_{e}:=\mathbb{E}_{\left(x_{e},y_{e}\right)\sim\mathcal{D}_{e}} \left[\ell\left(f\left(x_{e}\right),y_{e}\right)\right]=\mathbb{E}_{\mathcal{ D}_{e}}\left[R_{e}\right] \tag{1}\]
A common approach to train \(f\) is Empirical Risk Minimisation (ERM) [57] which minimises the expected risks across all training domains. Its loss function, denoted by \(\mathcal{L}_{\text{ERM}}\), is computed as follows:
\[\mathcal{L}_{\text{ERM}} =\mathbb{E}_{e\sim\mathcal{E}}\mathbb{E}_{\left(x_{e},y_{e} \right)\sim\mathcal{D}_{e}}\left[\ell\left(f\left(x_{e}\right),y_{e}\right)\right] \tag{2}\] \[=\mathbb{E}_{e\sim\mathcal{E}}\left[\overline{R}_{e}\right] \tag{3}\]
where \(\mathcal{E}:=\left\{1,...,m\right\}\) denotes the set of all domains.
## 4 Risk Distribution Matching
A model \(f\) trained via ERM often struggles with generalisation to new test domains. This is because it tends to capture domain-specific features [41, 2], such as domain styles, to achieve low risks in training domains, rather than focusing on domain-invariant or semantic features. To overcome this issue, we present a novel training objective that bolsters generalisation through _domain invariance_. Our goal requires utilising a unique domain representative that both characterises each domain and provides valuable insights into domain variation. Specifically, we propose to leverage the _distribution of risks over all samples within a domain_ (or shortly _risk distribution_) as this representative. Unlike other domain representatives, like latent representation or gradient distributions [31, 50], the risk distribution sidesteps high-dimensional challenges like data sparsity and high computational demands [39, 6]. In essence, a model capturing stable, domain-invariant features may consistently yield similar risk distributions across all domains. In pursuit of invariant models, we propose _Risk Distribution Matching_ (RDM), a novel approach for DG that reduces the divergences between training risk distributions via _minimising the distributional variance across them_.
Let \(\mathcal{T}_{e}\) be the probability distribution over the risks of all samples in domain \(e\) (i.e., \(\left\{R_{e}^{i}\right\}_{i=1}^{n_{e}}\)). We refer to \(\mathcal{T}_{e}\) as the _risk distribution_ of domain \(e\), the representative that effectively captures the core characteristics of the domain. We denote \(\mathbb{V}_{\mathbb{R}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m}\right\}\right)\) the distributional variance across the risk distributions \(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m}\right\}\) in the real number space. We achieve our objective by minimising the following loss function:
\[\mathcal{L}_{\text{final}}:=\ \mathcal{L}_{\text{ERM}}+\lambda\mathbb{V}_{ \mathbb{R}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m}\right\}\right) \tag{4}\]
where \(\lambda\geq 0\) is a coefficient balancing the reduction of the total training risk against the enforcement of invariance across domains. \(\lambda\) is set to 1 unless specified otherwise.
To compute \(\mathbb{V}_{\mathbb{R}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m}\right\}\right)\), we require a suitable representation for the _implicit_ risk distribution \(\mathcal{T}_{e}\) of domain \(e\). Leveraging kernel mean embedding [51], we express \(\mathcal{T}_{e}\) as its embedding, \(\mu_{\mathcal{T}_{e}}\), within a reproducing kernel Hilbert space (RKHS) \(\mathcal{H}\) using a feature map \(\phi:\mathbb{R}\rightarrow\mathcal{H}\) below:
\[\mu_{\mathcal{T}_{e}}\coloneqq \mathbb{E}_{R_{e}\sim\mathcal{T}_{e}}\left[\phi\left(R_{e} \right)\right] \tag{5}\] \[= \mathbb{E}_{R_{e}\sim\mathcal{T}_{e}}\left[k\left(R_{e},\cdot \right)\right] \tag{6}\]
where a kernel function \(k\left(\cdot,\cdot\right):\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) is introduced to bypass the explicit specification of \(\phi\). Assuming the condition \(\left(\mathbb{E}_{R_{e}\sim\mathcal{T}_{e}}\left(k\left(R_{e},R_{e}\right) \right)<\infty\right)\) holds, the mean map \(\mu_{\mathcal{T}_{e}}\) remains an element of \(\mathcal{H}\)[19, 31]. It is noteworthy that for a _characteristic_ kernel \(k\), the representation \(\mu_{\mathcal{T}_{e}}\) within \(\mathcal{H}\) is unique [19, 38]. Consequently, two distinct risk distributions \(\mathcal{T}_{u}\) and \(\mathcal{T}_{v}\) for any domains \(u,v\) respectively have different kernel mean embeddings in \(\mathcal{H}\). In this work, we use the RBF kernel, a well-known characteristic kernel defined as \(k\left(x,x^{\prime}\right):=\text{exp}\Big{(}-\frac{1}{2\sigma}\left\|x-x^{ \prime}\right\|^{2}\Big{)}\), where \(\sigma>0\) is the bandwidth parameter.
With the unique representation of \(\mathcal{T}_{e}\) established, our objective becomes computing the distributional variance between risk distributions within \(\mathcal{H}\), represented by \(\mathbb{V}_{\mathcal{H}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m} \right\}\right)\). Inspired by [38], we redefine the variance metric to focus specifically on risk distributions across multiple domains below:
\[\mathbb{V}_{\mathcal{H}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m} \right\}\right):=\ \frac{1}{m}\sum_{e=1}^{m}\left\|\mu_{\mathcal{T}_{e}}-\mu_{\mathcal{T}} \right\|_{\mathcal{H}}^{2} \tag{7}\]
where \(\mathcal{T}=\frac{1}{m}\sum_{e=1}^{m}\mathcal{T}_{e}\) denotes the probability distribution over the risks of all samples in the entire training set, or equivalently, the set of all \(m\) domains. Meanwhile, \(\mu_{\mathcal{T}_{e}}\) and \(\mu_{\mathcal{T}}\) represent the mean embeddings of \(\mathcal{T}_{e}\) and \(\mathcal{T}\), respectively, and are computed as in Eq. 5. Incorporating \(\mathbb{V}_{\mathcal{H}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m} \right\}\right)\) into our loss function from Eq. 4, we get:
\[\mathcal{L}_{\text{final}}:=\ \mathcal{L}_{\text{ERM}}+\lambda\mathbb{V}_{ \mathcal{H}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m}\right\}\right) \tag{8}\]
Minimising \(\mathbb{V}_{\mathcal{H}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m} \right\}\right)\) in Eq. 8 facilitates our objective of equalising risk distributions across all domains, as proven by the theorem below.
**Theorem 1**.: _[_38_]_ _Given the distributional variance \(\mathbb{V}_{\mathcal{H}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m} \right\}\right)\) is calculated with a characteristic kernel \(k\), \(\mathbb{V}_{\mathcal{H}}\left(\left\{\mathcal{T}_{1},...,\mathcal{T}_{m} \right\}\right)=0\) if and only if \(\mathcal{T}_{1}=...=\mathcal{T}_{m}\left(=\mathcal{T}\right)\)._
Proof.: Please refer to our appendix for the proof.
In the next part, we present how to compute the distributional variance using the Maximum Mean Discrepancy (MMD) distance [19], relying only on risk samples. Then, we propose an efficient approximation of optimising the distributional variance, yielding improved empirical performance.
### Maximum Mean Discrepancy
For domain \(e\), the squared norm, \(\left\|\mu_{\mathcal{T}_{e}}-\mu_{\mathcal{T}}\right\|_{\mathcal{H}}^{2}\), defined in Eq. 7, is identified as the squared MMD distance [18] between distributions \(\mathcal{T}_{e}\) and \(\mathcal{T}\). It is expressed as follows:
\[\text{MMD}^{2}\left(\mathcal{T}_{e},\mathcal{T}\right) =\left\|\mu_{\mathcal{T}_{e}}-\mu_{\mathcal{T}}\right\|_{\mathcal{H }}^{2} \tag{9}\] \[=\left\|\mathbb{E}_{R_{e}\sim\mathcal{T}_{e}}\left[\phi\left(R_{e }\right)\right]-\mathbb{E}_{R_{f}\sim\mathcal{T}}\left[\phi\left(R_{f}\right) \right]\right\|_{\mathcal{H}}^{2}\] (10) \[=\mathbb{E}_{R_{e},R_{e}^{\prime}\sim\mathcal{T}_{e}}\left\langle \phi\left(R_{e}\right),\phi\left(R_{e}^{{}^{\prime}}\right)\right\rangle\] \[\quad-2\mathbb{E}_{R_{e}\sim\mathcal{T}_{e};R_{f}\sim\mathcal{T}} \left\langle\phi\left(R_{e}\right),\phi\left(R_{f}\right)\right\rangle\] (11) \[\quad+\mathbb{E}_{R_{f},R_{f}^{\prime}\sim\mathcal{T}}\left\langle \phi\left(R_{f}\right),\phi\left(R_{f}^{{}^{\prime}}\right)\right\rangle\]
where \(\left\langle\cdot,\cdot\right\rangle\) denote the inner product operation in \(\mathcal{H}\). Through the kernel trick, we can compute these inner products via the kernel function \(k\) without an explicit form of \(\phi\) below:
\[\text{MMD}^{2}\left(\mathcal{T}_{e},\mathcal{T}\right)=\mathbb{E}_{R_{e},R_{e}^{\prime}\sim\mathcal{T}_{e}}k\left(R_{e},R_{e}^{\prime}\right)-2\,\mathbb{E}_{R_{e}\sim\mathcal{T}_{e};R_{f}\sim\mathcal{T}}k\left(R_{e},R_{f}\right)+\mathbb{E}_{R_{f},R_{f}^{\prime}\sim\mathcal{T}}k\left(R_{f},R_{f}^{\prime}\right) \tag{12}\]
We reformulate our loss function in Eq. 8 to incorporate MMD as follows:
\[\mathcal{L}_{\text{final}} :=\ \mathcal{L}_{\text{ERM}}+\frac{\lambda}{m}\sum_{e=1}^{m}\text{ MMD}^{2}\left(\mathcal{T}_{e},\mathcal{T}\right) \tag{13}\] \[=\ \mathcal{L}_{\text{ERM}}+\lambda\mathcal{L}_{\text{RDM}} \tag{14}\]
The loss function \(\mathcal{L}_{\text{RDM}}\) involves minimising \(\text{MMD}^{2}\left(\mathcal{T}_{e},\mathcal{T}\right)\) for every domain \(e\). Ideally, the distributional variance reaches its lowest value at \(0\) if \(\text{MMD}\left(\mathcal{T}_{e},\mathcal{T}\right)=0\), equivalent to \(\left(\mathcal{T}_{e}=\mathcal{T}\right)\)[18, 19], across \(e\) domains. The objective also entails aligning each individual risk distribution, \(\mathcal{T}_{e}\), with the aggregated distribution spanning all domains, \(\mathcal{T}\). With the characteristic RBF kernel, it can be viewed as _matching an infinite number of moments_ across all risk distributions.
We emphasise our choice of MMD owing to its benefits for effective risk distribution matching: i) MMD is an
important member of the Integral Probability Metric family [40] that offers an analytical solution facilitated through RKHS, and ii) MMD enjoys the property of quantifying the dissimilarity between two implicit distributions via their finite samples in a non-parametric manner.
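For concreteness, the NumPy sketch below estimates the squared MMD of Eq. 12 from finite risk samples with the RBF kernel, and uses it to evaluate the distributional variance of Eqs. 7 and 13. It is a biased V-statistic estimator; the official implementation in the linked repository may differ in details such as bandwidth selection.

```python
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """k(x, x') = exp(-(x - x')^2 / (2 * sigma)) on scalar risks."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * sigma))

def mmd2(risks_e: np.ndarray, risks_pooled: np.ndarray, sigma: float = 1.0) -> float:
    """Empirical (biased) estimate of the squared MMD in Eq. 12."""
    return (rbf_kernel(risks_e, risks_e, sigma).mean()
            - 2.0 * rbf_kernel(risks_e, risks_pooled, sigma).mean()
            + rbf_kernel(risks_pooled, risks_pooled, sigma).mean())

def distributional_variance(domain_risks: list, sigma: float = 1.0) -> float:
    """Eqs. 7/13: average squared MMD between each T_e and the pooled T."""
    pooled = np.concatenate(domain_risks)
    return float(np.mean([mmd2(r, pooled, sigma) for r in domain_risks]))

# Identically distributed risks give a near-zero variance, per Theorem 1:
rng = np.random.default_rng(0)
same = [rng.exponential(1.0, 500) for _ in range(3)]
print(round(distributional_variance(same), 4))
```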
### Further improvement of Rdm
We find that effective alignment of risk distributions across \(m\) domains can be achieved by matching the risk distribution of the _worst-case_ (or worst-performing) domain, denoted as \(w\), with the combined risk distribution of all domains, offering an approximation to the optimisation of risk distributional variance seen in Eq. 13. This approximate version significantly reduces the number of MMD distance computations in \(\mathcal{L}_{\text{RDM}}\) from \(O\left(m\right)\) to \(O\left(1\right)\), and further improves generalisation, as we demonstrate with empirical evidence in Section 5.
Denote by \(\left(w=\underset{e\in\mathcal{E}}{\text{argmax}}\ \overline{R}_{e}\right)\) the worst-case domain, i.e., the domain that has the largest expected risk in \(\mathcal{E}\). The approximate RDM loss, \(\hat{\mathcal{L}}_{\text{RDM}}\), is computed as follows:
\[\hat{\mathcal{L}}_{\text{RDM}} =\text{MMD}^{2}\left(\mathcal{T}_{w},\mathcal{T}\right) \tag{15}\] \[\approx\mathcal{L}_{\text{RDM}} \tag{16}\]
In our experiments, we observed only a small gap between \(\hat{\mathcal{L}}_{\text{RDM}}\) and \(\mathcal{L}_{\text{RDM}}\), while optimising \(\hat{\mathcal{L}}_{\text{RDM}}\) proving to be more computationally efficient. The key insight emerges from \(\overline{R}_{e}\), the first moment (or mean) of \(\mathcal{T}_{e}\). Often, the average risk can serve as a measure of domain uniqueness or divergence [25, 46]. Specifically, a domain with notably distinct mean risk is more likely to diverge greatly from other risk distributions. Under such circumstances, \(\hat{\mathcal{L}}_{\text{RDM}}\) will be an upper-bound of \(\mathcal{L}_{\text{RDM}}\), as shown by: \(\mathcal{L}_{\text{RDM}}=\frac{1}{m}\sum_{e=1}^{m}\text{MMD}^{2}\left( \mathcal{T}_{e},\mathcal{T}\right)\ \leq\ \frac{1}{m}\sum_{e=1}^{m}\text{MMD}^{2}\left( \mathcal{T}_{w},\mathcal{T}\right)=\text{MMD}^{2}\left(\mathcal{T}_{w}, \mathcal{T}\right)=\hat{\mathcal{L}}_{\text{RDM}}\). By optimising \(\hat{\mathcal{L}}_{\text{RDM}}\), we can also potentially decrease \(\mathcal{L}_{\text{RDM}}\), thus aligning risk distributions across domains effectively. More, \(\hat{\mathcal{L}}_{\text{RDM}}\) drives the model to prioritise the worst-case domain's optimisation. This approach enhances the model's robustness to extreme training scenarios, which further improves generalisation as proven in [25, 48]. These insights shed light on the superior performance of optimising \(\hat{\mathcal{L}}_{\text{RDM}}\) over \(\mathcal{L}_{\text{RDM}}\). Therefore, we opted to use \(\hat{\mathcal{L}}_{\text{RDM}}\), simplifying our model's training and further bolstering its OOD performance.
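A minimal PyTorch sketch of one evaluation of the resulting objective (Eq. 4 with \(\hat{\mathcal{L}}_{\text{RDM}}\) of Eq. 15); `rdm_loss` and its argument layout are our own scaffolding, and the reference implementation in the linked repository should be consulted for the exact details.

```python
import torch
import torch.nn.functional as F

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD (Eq. 12) between two 1-D tensors of scalar risks."""
    k = lambda a, b: torch.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * sigma))
    return k(x, x).mean() - 2.0 * k(x, y).mean() + k(y, y).mean()

def rdm_loss(model, domain_batches, lam: float = 1.0) -> torch.Tensor:
    """ERM term plus the worst-case-domain matching penalty (Eqs. 14-15).

    `domain_batches` is a list of (inputs, labels) pairs, one per domain.
    """
    risks = [F.cross_entropy(model(x), y, reduction="none")
             for x, y in domain_batches]                      # per-sample risks R_e^i
    erm = torch.stack([r.mean() for r in risks]).mean()       # Eq. 3
    worst = max(range(len(risks)), key=lambda e: risks[e].mean().item())
    pooled = torch.cat(risks)                                 # samples of T
    return erm + lam * rbf_mmd2(risks[worst], pooled)         # Eq. 15 penalty
```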
## 5 Experiments
We evaluate and analyse RDM using a synthetic ColoredMNIST dataset [2] and multiple benchmarks from the DomainBed suite [20]. Each of our claims is backed by empirical evidence in this section. Our source code to reproduce results is available at: [https://github.com/nktoan/risk-distribution-matching](https://github.com/nktoan/risk-distribution-matching)
### Synthetic Dataset: ColoredMNIST
We evaluate all baselines on a synthetic binary classification task, namely ColoredMNIST [2]. This dataset involves categorising digits (0-9) into two labels: "zero" for 0 to 4 range and "one" for 5 to 9 range, with each digit colored either red or green. The dataset is designed to assess the generalisation and robustness of baseline models against the influence of spurious color features. The dataset contains two training domains, where the chance of red digits being classified as "zero" is \(80\)% and \(90\)%, respectively, while this probability decreases to only \(10\)% during testing. The goal is to train a predictor invariant to "digit color" features, capturing only "digit shape" features.
Following [13], we employ a two-hidden-layer MLP with 390 hidden units for all baselines. Each algorithm is optimised with the Adam optimiser [23] at a learning rate of \(0.0001\) and a dropout rate of \(0.2\), and trained for \(600\) iterations with a batch size of 25,000. We repeat the experiment ten times over different values of the penalty weight \(\lambda\). We find the magnitude of our matching penalty to be quite small, so RDM performs best with \(\lambda\) in the range \([1000,10000]\). We provide more details about experimental settings in the supplementary material.
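For reference, a sketch of that configuration in PyTorch; the input shape (two colour channels of \(14\times 14\) digits) follows the common ColoredMNIST encoding of [2], and the ReLU/Dropout placement is our assumption rather than a detail stated above.

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(2 * 14 * 14, 390), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(390, 390), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(390, 2),  # binary label: "zero" (digits 0-4) vs "one" (digits 5-9)
)
optimizer = optim.Adam(model.parameters(), lr=1e-4)
```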
We compare RDM with ERM and three different types of algorithms: robust optimisation (GroupDRO [48], IGA [24]), causal methods learning invariance (IRM [2], VREx [25], EQRM [13]) and representation distribution matching (MMD [31], CORAL [54]). All algorithms are run using two distinct network configurations: (i) initialising the network randomly via Xavier method [17]; (ii) pre-training the network with ERM for \(400\) iterations prior to performing the algorithms. Table 1 shows that our pro
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Algorithm} & \multicolumn{2}{c}{Initialisation} \\ \cline{2-3} & Rand. & ERM \\ \hline \hline ERM & 27.9\(\pm\)1.5 & 27.9\(\pm\)1.5 \\ GroupDRO & 27.3\(\pm\)0.9 & 29.0\(\pm\)1.1 \\ IGA & 50.7\(\pm\)1.4 & 57.7\(\pm\)3.3 \\ IRM & 52.5\(\pm\)2.4 & 69.7\(\pm\)0.9 \\ VREx & 55.2\(\pm\)4.0 & 71.6\(\pm\)0.5 \\ EQRM & 53.4\(\pm\)1.7 & 71.4\(\pm\)0.4 \\ CORAL & 55.3\(\pm\)2.8 & 65.6\(\pm\)1.1 \\ MMD & 54.6\(\pm\)3.2 & 66.4\(\pm\)1.7 \\ \hline RDM (_ours_) & **56.3\(\pm\)1.5** & **72.4\(\pm\)1.0** \\ \hline Oracle & \multicolumn{2}{c}{72.1\(\pm\)0.7} \\ Optimum & \multicolumn{2}{c}{75.0} \\ \hline \hline \end{tabular}
\end{table}
Table 1: ColoredMNIST test accuracy where the best results are marked as bold. Results of other methods are referenced from [13].
posed method RDM surpasses all algorithms, irrespective of the network configuration. RDM exhibits improvements of \(1.0\)% and \(6.8\)% over CORAL, both without and with pre-trained ERM, respectively, underlining the effectiveness of aligning risk distributions instead of high-dimensional representations. VREx and EQRM, which pursue invariant predictors by equalising average training risks across domains, demonstrate suboptimal performance compared to our approach. This improvement arises from our consideration of the entire risk distributions and the matching of all moments across them, which inherently foster stronger invariance for DG. Notably, all methods experience enhanced performance with ERM initialisation. RDM even excels beyond oracle performance (ERM trained on grayscale digits with 50% red and 50% green) and converges towards optimality.
Figure 2 demonstrates histograms with their KDE curves [42] depicting the risk distributions of ERM and RDM across four domains. The figure confirms our hypothesis that the disparities among risk distributions could serve as a valuable signal of _domain variation_. ERM's histogram shows a clear difference between environments with \(90\)% and \(80\)% chance of red digits labelled "zero" and those with only \(50\)% or \(10\)%. More, ERM tends to overfit to training domains, which negatively impacts its generalisation to test domains. Remarkably, RDM effectively minimises the divergences between risk distributions across all domains, including _test domains with lower risks_. This also aligns with our motivation: an invariant or stable feature-learning predictor, by displaying similar risk distributions across domains, inherently boosts generalisation.
### DomainBed
Dataset and Protocol. Following previous works [13, 20], we extensively evaluate all methods on five well-known DG benchmarks: VLCS [15], PACS [30], OfficeHome [58], TerraIncognita [5], and DomainNet [43]. For a fair comparison, we reuse the training and evaluation protocol in DomainBed [20], including the dataset splits, training iterations, and model selection criteria. Our evaluation employs the leave-one-domain-out approach: each model is trained on all domains except one and then tested on the excluded domain. The final model is chosen based on its combined accuracy across all training-domain validation sets.
Implementation Details. We use ResNet-50 [21] pre-trained on ImageNet [47] as the default backbone. The model is optimised via the Adam optimiser for \(5,000\) iterations on every dataset. We follow [13, 25] to pre-train baselines with ERM for certain iterations before performing the algorithms. Importantly, we find that achieving accurate risk distribution matching using distribution samples requires larger batch sizes - details of which are examined in our ablation studies. For most datasets, the optimal batch size lies between \([70,100]\). However, for huge datasets like TerraIncognita and DomainNet, it is between \([30,60]\). Although computational resources limit us from testing larger batch sizes, these ranges consistently achieve strong performance on benchmarks. The matching coefficient \(\lambda\) in our method is set in \([0.1,10.0]\). Additional hyper-parameters like learning rate, dropout rate, or weight decay adhere to the preset ranges detailed in [13]. We provide more implementation details in the supplementary material. We repeat our experiments ten times with varied seed values and hyper-parameters and report the average results.
Experimental Results. In Table 2, we show the average out-of-domain (OOD) accuracies of state-of-the-art DG methods on five benchmarks. Due to space constraints, domain-specific accuracies are detailed in the supplementary material. We compare RDM with ERM and various types of algorithms: distributional robustness (GroupDRO),
Figure 2: Histograms with their KDE curves depicting the risk distributions of ERM and RDM across four domains on ColoredMNIST. Vertical ticks denote the mean values of all distributions.
causal methods learning invariance (IRM, VREx, EQRM), gradient matching (Fish [50], Fishr [46]), representation distribution matching (MMD, CORAL) and other variants (Mixup [61], MLDG [29]). To ensure fairness in our evaluations, we have used the same training data volume across all baselines, although further employing augmentations can enhance models' performance.
On average, RDM surpasses other baselines across all benchmarks, notably achieving a \(1.5\)% average improvement over ERM. The significant improvement of RDM on DomainNet, a large-scale dataset with 586,575 images across 6 domains, is worth mentioning. This suggests that characterising domains with risk distributions to achieve invariance effectively enhances OOD performance. Compared to distributional robustness methods, RDM notably outperforms GroupDRO with improvements of \(2.8\)% on PACS and a substantial \(10.1\)% on DomainNet. RDM consistently improves over causality-based methods that rely on the average risk for domain invariance. This superiority is attributable to our novel adoption of risk distributions, achieving enhanced invariance for DG. Our remarkable improvement over MMD suggests that aligning _risk distributions_ via the MMD distance is more effective and easier to optimise than aligning representation distributions. While RDM typically outperforms CORAL and Fish in OOD scenarios, it only remains competitive or sometimes underperforms on certain datasets like OfficeHome. This decrease in performance may stem from the dataset's inherent tendency to overfit within our risk distribution alignment objective. OfficeHome has on average only about 240 samples per class, significantly fewer than the other datasets with at least 1,400. This reduced sample size may not provide sufficiently diverse risk distributions to capture stable class features, resulting in overfitting on the training set. Despite these limitations, our OfficeHome results still outperform several well-known baselines such as MLDG, VREx, and ERM. For a detailed discussion on this challenge, please refer to our supplementary material.
### Analysis
In this section, we provide empirical evidence backing our claims in Section 4. In Figure 3(a), we highlight the small gap between aligning the risk distribution of the worst-case domain with that of all domains combined (RDM with \(\hat{\mathcal{L}}_{\text{RDM}}\)) and directly optimising the distributional variance (RDM with \(\mathcal{L}_{\text{RDM}}\)). Notably, \(\hat{\mathcal{L}}_{\text{RDM}}\) consistently represents an upper bound of \(\mathcal{L}_{\text{RDM}}\), which is sensible since the worst-case domain often exhibits the most distinct risk distribution. This suggests that optimising \(\hat{\mathcal{L}}_{\text{RDM}}\) also helps reduce the distributional variance \(\mathcal{L}_{\text{RDM}}\), bringing the risk distributions across domains closer.
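The upper-bound relation can also be checked numerically on risk samples. The snippet below sketches one plausible reading of the two objectives, in which the distributional variance \(\mathcal{L}_{\text{RDM}}\) is taken as the average MMD between each domain's risk distribution and the pooled one, while \(\hat{\mathcal{L}}_{\text{RDM}}\) keeps only the worst-case domain; under this reading the bound is immediate, since a maximum dominates a mean. Both the concrete definitions and the names here are our assumptions for illustration.

```python
import numpy as np

def mmd(x, y, sigma=1.0):
    """Gaussian-kernel squared MMD between two 1-D risk samples (assumed kernel)."""
    k = lambda a, b: np.exp(-np.subtract.outer(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def rdm_objectives(domain_risks):
    """Assumed forms: L_RDM = mean_d MMD(risks_d, pooled); L_hat = max_d MMD(risks_d, pooled)."""
    pooled = np.concatenate(domain_risks)
    per_domain = [mmd(r, pooled) for r in domain_risks]
    return float(np.mean(per_domain)), float(max(per_domain))  # L_hat >= L_RDM by construction

# toy check on synthetic per-sample risks with one shifted (worst-case) domain
rng = np.random.default_rng(0)
risks = [rng.gamma(2.0, 0.5, size=200) + shift for shift in (0.0, 0.1, 0.8)]
print(rdm_objectives(risks))
```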
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Algorithm & VLCS & PACS & OfficeHome & Terralnocognita & DomainNet & Avg \\ \hline \hline ERM & 77.5\(\pm\)0.4 & 85.5\(\pm\)0.2 & 66.5\(\pm\)0.3 & 46.1\(\pm\)1.8 & 40.9\(\pm\)0.1 & 63.3 \\ Mixup & 77.4\(\pm\)0.6 & 84.6\(\pm\)0.6 & 68.1\(\pm\)0.3 & **47.9\(\pm\)0.8** & 39.2\(\pm\)0.1 & 63.4 \\ MLDG & 77.2\(\pm\)0.4 & 84.9\(\pm\)1.0 & 66.8\(\pm\)0.6 & 47.7\(\pm\)0.9 & 41.2\(\pm\)0.1 & 63.6 \\ GroupDRO & 76.7\(\pm\)0.6 & 84.4\(\pm\)0.8 & 66.0\(\pm\)0.7 & 43.2\(\pm\)1.1 & 33.3\(\pm\)0.2 & 60.9 \\ IRM & 78.5\(\pm\)0.5 & 83.5\(\pm\)0.8 & 64.3\(\pm\)2.2 & 47.6\(\pm\)0.8 & 33.9\(\pm\)2.8 & 61.6 \\ VREx & 78.3\(\pm\)0.2 & 84.9\(\pm\)0.6 & 66.4\(\pm\)0.6 & 46.4\(\pm\)0.6 & 33.6\(\pm\)2.9 & 61.9 \\ EQRM & 77.8\(\pm\)0.6 & 86.5\(\pm\)0.2 & 67.5\(\pm\)0.1 & 47.8\(\pm\)0.6 & 41.0\(\pm\)0.3 & 64.1 \\ Fish & 77.8\(\pm\)0.3 & 85.5\(\pm\)0.3 & 68.6\(\pm\)0.4 & 45.1\(\pm\)1.3 & 42.7\(\pm\)0.2 & 64.0 \\ Fishr & 77.8\(\pm\)0.1 & 85.5\(\pm\)0.4 & 67.8\(\pm\)0.1 & 47.4\(\pm\)1.6 & 41.7\(\pm\)0.0 & 64.0 \\ CORAL & **78.8\(\pm\)0.6** & 86.2\(\pm\)0.3 & **68.7\(\pm\)0.3** & 47.6\(\pm\)1.0 & 41.5\(\pm\)0.1 & 64.6 \\ MMD & 77.5\(\pm\)0.9 & 84.6\(\pm\)0.5 & 66.3\(\pm\)0.1 & 42.2\(\pm\)1.6 & 23.4\(\pm\)9.5 & 63.3 \\ \hline RDM (_ours_) & 78.4\(\pm\)0.4 & **87.2\(\pm\)0.7** & 67.3\(\pm\)0.4 & 47.5\(\pm\)1.0 & **43.4\(\pm\)0.3** & **64.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: DomainBed test accuracy, with the best results marked in bold. Results of other methods are taken from [13, 50]. Model selection: training-domain validation set.
Figure 3: (a) supports our claim about the approximation of the distributional variance, while (b) compares the OOD performance learning curves of RDM with other methods. These quantities are visualised every 15 iterations during training on PACS, with the OOD Sketch domain excluded. The visual analysis commences after RDM is pre-trained with ERM for \(100(\times 15)\) iterations, ensuring a fair comparison.
When the number of training domains grows, especially with large-scale datasets like DomainNet, emphasising the risk distribution of the worst-case domain not only proves to be a more efficient approach but also significantly enhances OOD performance. In our exploration of training resources for DomainNet, we compare Fish, CORAL, and two variants of our RDM method. For a fair evaluation, all experiments were conducted with identical GPU resources, settings, and hyper-parameters, such as batch size and training iterations. Results can be seen in Table 3. Full details on training resources for these methods on other datasets are available in the supplementary material due to space constraints.
Our RDM with the \(\hat{\mathcal{L}}_{\text{RDM}}\) objective is the fastest to train and achieves the highest accuracy of \(43.4\)% on DomainNet. While RDM demands more memory than Fish, due to the storage of MMD distance values, it can be trained in roughly an hour less and still delivers a \(0.7\)% performance boost. This gain over Fish, a leading gradient matching method on DomainNet, is significant. Among the two variants of RDM, the one using \(\hat{\mathcal{L}}_{\text{RDM}}\) is both the fastest and the most accurate, justifying our claims on the benefits of aligning the risk distribution of the worst-case domain.
Moreover, to further highlight the efficacy of risk distribution alignment for DG, we compare the OOD performance learning curves of RDM with competing baselines using representation (CORAL) and gradient (Fish) alignment, as depicted in Figure 3(b). RDM consistently outperforms both, demonstrating enhanced generalisation throughout the training process.
### Ablation studies
We explore the impact of the matching coefficient \(\lambda\) and training batch size on risk distribution matching, using primarily the PACS dataset for brevity. While other datasets exhibit similar trends, their detailed results are provided in the supplementary material.
**Matching coefficient \(\lambda\).** Figure 4(a) illustrates the performance of RDM on the PACS dataset for varying values of the matching coefficient \(\lambda\), spanning \(\{0.1,1.0,2.5,5.0,7.5,10.0\}\). Notably, as \(\lambda\) increases, RDM's accuracy consistently improves, underscoring the significance of our risk distribution matching module in fostering generalisation. In particular, with \(\lambda=5.0\), RDM demonstrates a notable \(1.6\)% average accuracy boost across all domains compared to \(\lambda=0.1\). Across most datasets, a \(\lambda\) value within \([0.1,10.0]\) appears sufficient to produce good results.
**Batch size.** We study the impact of batch size on RDM's performance. Our assumption is that achieving accurate risk distribution matching from data samples requires larger batch sizes. Figure 4(b) validates this, revealing enhanced generalisation results on PACS with increased batch sizes. For PACS, sizes between \([70,100]\) yield promising, potentially optimal outcomes, although computational limitations restricted our exploration of larger sizes.
## 6 Conclusion
We have demonstrated that RDM, a novel matching method for domain generalisation (DG), provides enhanced generalisation capability by aligning risk distributions across domains. RDM efficiently overcomes the high-dimensionality challenges of conventional DG matching methods. It is built on our observation that risk distributions can effectively represent the differences between training domains. By minimising these divergences, we can achieve an invariant and generalisable predictor. We further improve RDM by matching only the risk distribution of the worst-case domain with the aggregate from all domains, bypassing the need to directly compute the distributional variance. This approximate version not only offers computational efficiency but also delivers improved out-of-domain results. Our extensive experiments on several benchmarks reveal that RDM surpasses leading DG techniques. We hope our work can inspire further investigations into the benefits of risk distributions for DG.
Figure 4: Ablation studies on the effects of the matching coefficient \(\lambda\) and the training batch size on the PACS dataset.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Algorithm** & Training (s) & Mem (GiB) & Acc (\%) \\ \hline \hline Fish & 11,502 & **5.26** & 42.7 \\ \hline CORAL & 11,504 & 17.00 & 41.5 \\ \hline RDM with \(\mathcal{L}_{\text{RDM}}\) & 9,854 & 16.94 & 43.1 \\ \hline RDM with \(\hat{\mathcal{L}}_{\text{RDM}}\) & **7,749** & 16.23 & **43.4** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between Fish, CORAL, and two variants of our method in terms of the training time (seconds), memory usage per iteration (GiB) and accuracy (%) on DomainNet. |
# Experimental and numerical investigations on heat transfer in fused filament fabrication 3D-printed specimens

Nathalie Ramos, Christoph Mittermeier, Josef Kiendl
###### Abstract
A good understanding of the heat transfer in fused filament fabrication is crucial for an accurate stress prediction and subsequently for repetitive, high quality printing. This work focuses on two challenges that have been presented when it comes to the accuracy and efficiency in simulating the heat transfer in the fused filament fabrication process. With the prospect of choosing correct thermal boundary conditions expressing the natural convection between printed material and its environment, values for the convective heat transfer coefficient and ambient temperature were calibrated through numerical data fitting of experimental thermal measurements. Furthermore, modeling simplifications were proposed for an efficient numerical discretization of infill structures. Samples were printed with varying infill characteristics, such as varying air void size, infill densities and infill patterns. Thermal measurements were performed to investigate the role of these parameters on the heat transfer and based on these observations, possible modeling simplifications were studied in the numerical simulations.
**Keywords:** 3D printing, fused filament fabrication, heat transfer, convective boundary conditions, infill structures, air voids
## 1 Introduction
Fused filament fabrication (FFF), or fused deposition modeling, is one of the most widely applied additive manufacturing methods. It is a three-dimensional (3D) printing method in which a thermoplastic material is extruded through a nozzle to construct a layer-by-layer structure [1]. Once lauded for its potential in prototype manufacturing, it has now found its way into a plethora of fields and applications, such as biomedical, electrical and aerospace engineering [2]. Advantages of FFF printing are the relatively low costs, wide applicability and availability, large variety of suitable materials and the ease of use [3, 4]. Some of these advantages have caused a surge in the use of FFF in four-dimensional (4D) printing, where the fourth dimension represents the change of shape over time of the smart materials [5, 6]. By combining FFF printing and the use of shape memory polymers (SMP), structures can be printed that can maintain a temporary shape and that can return to their original shape after being exposed to an external stimulus, such as heat [7]. The thermally sensitive nature of printed SMP parts enables potential applications such as fasteners in active assembly/disassembly, smart actuators and deployable structures for aerospace applications [3, 8, 9].
However, deposition of the filament at high temperatures, followed by quick cooling to enforce solidification results in significant thermal gradients and subsequently, residual stresses [1]. The presence of residual stresses can lead to warping during printing which can result in a failed print [10]. This is a common problem for Acrylonitrile Butadiene Styrene (ABS). Residual stresses can also cause distortion of parts and a loss of strength [11]. In case of printed parts with strict geometrical tolerances or structural requirements, this effect can lead to a loss of functionality.
During FFF printing the material is extruded at a temperature which is typically higher than the glass transition temperature. It is either deposited on the building platform or cooled existing layers, causing the material to be (partially) stretched as it bonds, to cool down and to finally solidify [12]. The induced pre-strain will be released as soon as the material is heated above the glass transition temperature and a 'new' permanent shape emerges. An example of such programming can be seen in figure 1 where two rectangular specimens are printed flat, but with a different orientation of the printed filament. After reheating the samples above the glass transition temperature, they either bend upwards, or twist upwards depending on the printing orientation. Such approach has been used in 4D printing to design structures with self-folding, self-bending, self-twisting and shape-shifting mechanisms [12, 13, 14, 15].
Whether the objective is to repetitively print parts of high quality, or to print SMP parts with a 4D effect, understanding the heat transfer is crucial for an accurate stress and bond strength prediction [16]. Several efforts have been made to predict thermal gradients and the development of the residual stresses in printed parts by performing thermo-mechanical finite element simulations. Such simulations often make use of sequential element activation to represent the deposition sequence that takes place during FFF printing. Zhang
and Chou [17] used such a model to perform a parametric study to predict part distortions in ABS. The analysis was thermo-mechanically coupled and the influence of various process parameters on the residual stresses was studied. Zhou et al.[18] presented a numerical model in which the heat transfer in ABS material was analyzed during the FDM process. In their model temperature-dependent material properties were used for the specific heat capacity and thermal conductivity. Cattenone et al. [1] performed thermo-mechanical simulations in which the FFF process was simulated to predict part distortions in ABS. Special attention was paid to the constitutive modeling of the polymer, and the influence of numerical considerations on the predicted residual stresses, such as time step size and meshing strategies. Yin et al. [16] performed similar heat transfer simulations but with a different objective. They used such models to predict the inter-facial bonding strength between printed filaments.
This work focuses on two challenges that have been presented when it comes to the accuracy and efficiency in simulating the heat transfer in the FFF process. The first challenge that arises is the correct choice of thermal boundary conditions, particularly the convective heat transfer coefficient. The exact magnitude of this parameter is not always explicitly stated, its empirical determination can be quite cumbersome, and often the focus is on forced convection. Pereira et al. [19] focused on forced convection in their investigation of the effect of surface roughness on the convective heat transfer on the surfaces of FFF printed ABS cylindrical specimens. Zhou et al. [20] also calculated a value of the heat transfer coefficient based on forced convection over a rectangular body model. Costa et al. [21] did focus on natural convection between printed filament and the ambient air, but the determined values for the heat transfer coefficient still varied over a fairly large range (5-60 W/m\({}^{2}\cdot\)K), and this range was not experimentally validated. Lepoivre et al. [22] determined the heat transfer coefficient by an empirical correlation for an external free convection flow. The magnitudes of the parameters used in said correlation were not listed or further elaborated.
In this work the heat transfer coefficient is determined experimentally. Experimental thermal measurements are numerically simulated and a value for the heat transfer coefficient that describes natural convection is determined through data fitting.
The second challenge is related to the computational effort which is required for an accurate heat transfer simulation. In previous work, the printed material is often modeled as a continuum in which the finite element size is determined by the filament cross-sectional dimensions. This is a computationally expensive exercise as the cross-section of a filament is much smaller than the global dimensions of printed geometries. As a result, the question arises how the modeling can be simplified to speed up simulations. The first question that must be answered is whether the assumption of discretizing the material as a continuum without accounting for the inherent air voids is an accurate one. If this is the case, methods to simplify the discretization of the characteristic mesostructure can be investigated and the computational effort
required in heat transfer simulations can be reduced. Thus, in this work samples are printed with varying infill characteristics, such as varying air void size, infill densities and infill patterns. Thermal measurements are performed to investigate the role of these parameters on the heat transfer. Based on the observations made in these experiments, possible modeling simplifications are studied in the numerical simulations.
The remainder of this paper is set up as follows. The materials and methods used in the thermal measurements and numerical simulations on samples of varying infill geometries are presented in section 2. The results of the experimental thermal measurements are subsequently presented in section 3. Section 4 is dedicated to the numerical simulations. A value for the heat transfer coefficient is determined by numerically fitting the data obtained in the experiments and modeling simplifications of complex infill geometries are investigated. Finally the conclusions are presented in section 5 and ideas for future research are proposed.
## 2 Materials and Methods
### Experimental set-up
All specimens used in this work were printed with a Prusa i3 MK3 printer. The material used was polylactic acid (PLA), and the thermal properties as provided by the manufacturer Filamentum are listed in table 1. A JADE CW infrared camera from Cedip Infrared Systems was used for the thermal measurements. The experimental set-up used in this paper is shown in figure 2. The measuring procedure was as follows. First, the printing bed was heated up to a nominal temperature of 60\({}^{\circ}\)C. A printed specimen was then placed on the heated printing bed for a sufficiently long time to allow it to reach a steady state. The heating of the specimens was recorded with the thermal camera and the temperature profiles were recorded with a sampling frequency of 1 Hz. The temperature evolution in time was recorded for five points on the top surface of the specimens (figure 2). All measurements were performed three times and they were spaced at least 4 hours apart to ensure full cooling prior to reheating. Even though the temperature of the printing bed was set to 60\({}^{\circ}\)C, the temperature measured on the surface of the bed with the thermal camera was equal to 56\({}^{\circ}\)C. The temperature of the ambient air in the room was 25\({}^{\circ}\)C.

Figure 1: Shape change after heating FFF printed specimens above T\({}_{g}\)
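The averaged curves and envelopes reported in section 3 can be reproduced from the recorded frames with a few lines of post-processing. The sketch below assumes the thermal recordings are available as arrays of frames sampled at 1 Hz, with three repetitions and five monitored pixel locations per specimen; the data layout and all names are our assumptions, not tied to the camera's software.

```python
import numpy as np

def curves_and_envelope(repetitions, points):
    """repetitions: list of (n_frames, height, width) temperature arrays, one per
    measurement repetition (1 Hz sampling); points: five (row, col) pixel
    locations on the specimen's top surface. Returns the mean, min and max
    curves over the 3 x 5 = 15 recorded histories."""
    histories = np.stack([rep[:, r, c]
                          for rep in repetitions
                          for (r, c) in points])  # shape (15, n_frames)
    return histories.mean(axis=0), histories.min(axis=0), histories.max(axis=0)

# toy usage with synthetic 40-minute recordings (2400 frames at 1 Hz)
rng = np.random.default_rng(1)
fake = [25 + 15 * (1 - np.exp(-np.arange(2400) / 600))[:, None, None]
        + rng.normal(0, 0.2, (2400, 64, 64)) for _ in range(3)]
pts = [(10, 10), (10, 54), (32, 32), (54, 10), (54, 54)]
mean_T, low_T, high_T = curves_and_envelope(fake, pts)
```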
### Geometries and infill structure
To facilitate the study of the two objectives of this paper, different specimens with varying global and infill geometries were printed and used during the thermal measurements. A reference specimen was printed and used in the fitting process of the convective thermal boundary conditions. All other printed specimens had a different infill geometry (air content, infill pattern) to investigate the influence of different infills on the heat flow through the specimen. There are two different ways to vary the air content in FFF printed specimens:
1. By variation of the extrusion factor
2. By variation of the infill density
Both approaches are shown schematically in figure 3. Varying the extrusion factor in a densely packed setting of filaments influences the size of the air voids between those filaments. The infill density controls the gap size between the printed filaments.
The Prusa slicer recommends a default value for the extrusion factor. This
\begin{table}
\begin{tabular}{l l} \hline \hline
**Property** & **Value** \\ \hline Density \(\rho\) & 1240 kg/m\({}^{3}\) \\ Specific heat capacity \(c_{p}\) & 1800 J/kg\(\cdot\)K \\ Conductivity \(K_{0}\) & 0.13 W/m\(\cdot\)K \\ \hline \hline \end{tabular}
\end{table}
Table 1: Thermal properties PLA
Figure 2: Experimental set-up
will be referred to as the default extrusion factor of 1.0 (EF=1.0). One sample was printed with the default extrusion factor, and a second sample was printed with an extrusion factor which was 10% higher than the default value. The dimensions of these samples were 30x30x2 mm and both specimens were printed with an infill density of 100%. These specimens were heated for five minutes. For the numerical discretization of the geometries with varying extrusion factor, a model as shown in figure 4 was used. The diamond-shaped air voids were assumed based on a CT-scan of one of the printed samples (figure 4). The air void size is governed by the parameter \(a\), which was used to express the area of the air voids \(A_{air}\) as a fraction of the total area \(A_{tot}\):
\[\begin{split} A_{tot}&=w\cdot h\\ A_{air}&=\frac{1}{2}\cdot aw\cdot ah\cdot 4=2\cdot a^{2 }wh\\ A_{f}&=A_{tot}-A_{air}=(1-2a^{2})\cdot wh\\ v_{fr;\;a}&=\frac{A_{air}}{A_{tot}}=2a^{2}\end{split} \tag{1}\]
where \(w\) and \(h\) are the width and the height of a printed filament respectively, as can be seen in figure 4. The width of the filament equaled 0.45 mm and the layer height equaled 0.2 mm for all samples used in this work. By calculating the real volume fractions of air and filament in the printed samples, eq. 1 was solved for \(a\). The real volume fractions were calculated as follows (eq. 2):
1. The mass (\(m\)) and total volume (\(V_{tot}\)) of each printed sample were measured. The total volume was calculated by measuring the length, width and thickness of the printed specimens with a vernier caliper.
2. From the mass, the (real) extruded volume \(V_{extr}\) was calculated. The density of PLA was used (table 1).
3. The volume fractions of the filament \(v_{fr;\;f}\) and air \(v_{fr;\;a}\) were then calculated.
\[\begin{split} V_{extr}&=\frac{m}{\rho}\\ v_{fr;\;f}&=\frac{V_{extr}}{V_{tot}}\\ v_{fr;\;a}&=1-v_{fr;\;f}\end{split} \tag{2}\]
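The chain of eqs. (1)-(2) is straightforward to reproduce numerically. The short script below, a sketch with function names of our choosing, recovers the volume fractions and the void parameter \(a\) from the measured mass and total volume; running it with the measured values reproduces the entries of table 3.

```python
RHO_PLA = 1240.0  # kg/m^3, from table 1

def void_parameter(mass_g, v_tot_cm3, rho=RHO_PLA):
    """Solve eqs. (1)-(2): volume fractions and the diamond-void parameter a."""
    v_extr = (mass_g / 1000.0) / rho * 1e6  # extruded volume in cm^3
    v_fr_f = v_extr / v_tot_cm3             # filament volume fraction
    v_fr_a = 1.0 - v_fr_f                   # air volume fraction
    a = (v_fr_a / 2.0) ** 0.5               # from v_fr;a = 2 a^2 in eq. (1)
    return v_extr, v_fr_f, v_fr_a, a

# measured mass and total volume for EF = 1.0 and EF = 1.1 (table 3)
for ef, m, v in ((1.0, 2.10, 1.78), (1.1, 2.32, 1.89)):
    v_extr, v_f, v_a, a = void_parameter(m, v)
    print(f"EF={ef}: V_extr={v_extr:.2f} cm^3, v_f={v_f:.2f}, v_a={v_a:.2f}, a={a:.2f}")
```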
The samples for which the infill density and pattern were varied were printed with an extrusion factor of 1.0 and an infill density \(\leq\)100%. These samples were printed in the shape of a block with dimensions of 30x30x20 mm. Two infill patterns were chosen: a rectilinear and a gyroid infill pattern (figure 5). The total heating time for these specimens was 40 minutes, which was sufficient to reach a steady state.
An overview of all the FFF printed samples with their dimensions, infill characteristics and heating time is given in table 2. The additional parameters required to describe the geometry of the cross-section of the printed specimens with varying air void size are listed in table 3.
### Thermal analysis
The heat transfer can be divided into various heat exchange modes [21]:
1. _Convection_ with the environment,
2. _Radiation_ with the environment and between adjacent filaments,
3. _Conduction_ with the printing bed and between adjacent filaments.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Sample** & \(1\times\mathbf{w}\times\mathbf{h}\) [mm] & **Infill pattern** & **Extrusion factor [\(\cdot\)]** & **Infill density [\%]** & **Heating time [min]** \\ \hline S1 & 30x30x20 & rectilinear & 1.0 & 100 & 40 \\ S2 & 30x30x20 & rectilinear & 1.0 & 50 & 40 \\ S3 & 30x30x20 & rectilinear & 1.0 & 25 & 40 \\ S4 & 30x30x20 & gyroid & 1.0 & 50 & 40 \\ S5 & 30x30x20 & gyroid & 1.0 & 25 & 40 \\ S6 & 30x30x2 & rectilinear & 1.0 & 100 & 5 \\ S7 & 30x30x2 & rectilinear & 1.1 & 100 & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the printed samples and their characteristics
Figure 4: Discretization of air voids in the printed samples
Figure 3: Two different ways of varying the air content
Generally, conduction is well described by utilizing conductivity parameters found either in the literature or provided by filament manufacturers. Filament cooling due to radiative heat exchange between the filaments is negligible [21]. Radiation with the environment can have an influence on the filament temperature when the value of the heat transfer coefficient, the parameter which expresses convection, is relatively low (5 W/m\({}^{2}\cdot\)K) [21]. However, in most practical applications this value is much higher, which means that overall filament cooling becomes convection-controlled [21]. Thus, in this work heat transfer by radiation was neglected and the focus was on determining the heat transfer coefficient and the temperature of the ambient air that describe the convection with the environment.
The temperature field \(T(\mathbf{x},t)\) is described by the heat equation:
\[\rho c_{p}\frac{\partial T(\mathbf{x},t)}{\partial t}=\nabla\cdot(K_{0}\nabla T (\mathbf{x},t))+q \tag{3}\]
where \(c_{p}\) [J/kg\(\cdot\)K] is the specific heat capacity, \(\rho\) [kg/m\({}^{3}\)] is the material density, \(K_{0}\) [W/m\(\cdot\)K] is the conductivity of the material, and \(q\) [W/m\({}^{3}\)] is the internal heat source. The thermal properties of PLA shown in table 1 were used here. The initial temperature of the specimens equaled the room temperature \(T_{a}\), thus the initial condition is expressed as:
\[T(\mathbf{x},0)=T_{a}\quad\mathbf{x}\in\Omega \tag{4}\]
where \(\Omega\) represents the domain of the printed specimen. Distinction is made between the boundary conditions at the interface between the heated printing bed and the sample \(\Gamma_{b}\), and at the free surfaces of the sample \(\Gamma_{f}\). The
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**EF** & **m** [g] & **V\({}_{\mathbf{tot}}\)** [cm\({}^{3}\)] & **V\({}_{\mathbf{extr}}\)** [cm\({}^{3}\)] & **v\({}_{\mathbf{fr};\;\mathbf{f}}\)** [-] & **v\({}_{\mathbf{fr};\;\mathbf{a}}\)** [-] & **a** [-] \\ \hline
1.0 & 2.10 & 1.78 & 1.69 & 0.95 & 0.05 & 0.16 \\
1.1 & 2.32 & 1.89 & 1.87 & 0.99 & 0.01 & 0.07 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Geometrical paramaters of the cross-section of samples printed with different extrusion factors
Figure 5: Examples of FFF printed specimens with rectilinear (left) and gyroid (right) infill patterns (both at 25% infill density)
temperature of the heated printing bed is applied as a Dirichlet boundary condition:
\[T(\mathbf{x},t)=T_{b}\quad\mathbf{x}\in\Gamma_{b} \tag{5}\]
At the free surfaces, it is assumed that the heat exchange between the sample and the environment is governed by convection. The Neumann boundary conditions are expressed as:
\[\begin{split} K_{0}\frac{\partial T(\mathbf{x},t)}{\partial \mathbf{n}}+q_{c}&=0\quad\mathbf{x}\in\Gamma_{f}\\ q_{c}&=h(T(\mathbf{x},t)-T_{c})\end{split} \tag{6}\]
where \(h\) [W/m\({}^{2}\cdot\)K] is the heat transfer coefficient, and \(T_{c}\) is the temperature of the ambient air. These two parameters were calibrated by fitting the numerical simulations to the experimental data acquired in the thermal measurements. Due to the nature of the thermal measurements, it was not entirely clear how the temperature was distributed in the vicinity of the outer surface of the sample. Since the printed specimens were relatively small, the heated printing bed might have influenced the temperature of the air surrounding the samples. This effect was taken into account during the calibration of the heat transfer coefficient. The process was structured in such a way that three situations were taken into account for the temperature of the ambient air:
1. Case 1 It was assumed that the printing bed did not have a significant influence on the temperature of the ambient air. The temperature of the ambient air equaled room temperature \(T_{a}\).
2. Case 2 The temperature of the ambient air was higher than the room temperature due to the heated printing bed. The same elevated temperature was assumed for all of the free surfaces of the heated specimen.
3. Case 3 In this case, it was still assumed that the free surfaces on the sides of the specimen, \(\Gamma_{f;\ s}\), were exposed to air whose temperature was higher than room temperature. However, since the top surface of the specimen, \(\Gamma_{f;\ t}\), was more distant from the printing bed, a different temperature was assumed for this surface.
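As a concrete, low-fidelity companion to eqs. (3)-(6), the following one-dimensional finite-difference sketch heats a 20 mm PLA column from a bed held at 56\({}^{\circ}\)C, with a convective condition at the top surface. It is not the Ansys model used below: it neglects the lateral convection from the side surfaces (and hence under-predicts the 3D response), and the explicit discretization choices are ours; it only illustrates how \(h\) and \(T_{c}\) enter the problem.

```python
import numpy as np

# PLA properties (table 1) and example convective parameters
RHO, CP, K = 1240.0, 1800.0, 0.13                 # kg/m^3, J/(kg K), W/(m K)
H_CONV, T_C, T_BED, T_0 = 25.0, 27.0, 56.0, 25.0  # W/(m^2 K) and deg C

def heat_column(thickness=0.020, n=20, dt=1.0, t_end=2400.0):
    """Explicit 1-D conduction through the sample thickness.
    Node 0 sits on the bed (Dirichlet, eq. 5); the top node uses a half-cell
    energy balance with convection q_c = h (T - T_c) (eq. 6)."""
    dx = thickness / n
    fo = K / (RHO * CP) * dt / dx**2  # Fourier number, must stay < 0.5
    assert fo < 0.5, "explicit scheme unstable; reduce dt"
    T = np.full(n + 1, T_0)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        Tn[0] = T_BED
        Tn[1:-1] = T[1:-1] + fo * (T[2:] - 2 * T[1:-1] + T[:-2])
        Tn[-1] = T[-1] + dt / (RHO * CP * dx / 2) * (
            K * (T[-2] - T[-1]) / dx - H_CONV * (T[-1] - T_C))
        T = Tn
    return T

print(f"top-surface temperature after 40 min: {heat_column()[-1]:.1f} C")
```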
### Numerical set-up
All finite element (FE) analyses simulating the heat transfer in the printed specimens were set up in Ansys (Mechanical APDL 19.2). The analysis type was a transient thermal analysis. The initial and boundary conditions are listed in equations 4-6. The FE samples were discretized with 8-node thermal finite elements (SOLID70). The default element size coincided with the width of the extruded filament and layer height of the printed specimens. This is the case unless it is explicitly stated that a different mesh discretization was used. Convergence analyses were performed prior to all simulations presented
in this paper to ensure that the chosen mesh sizes were appropriate. As the air content in the printed samples was included in some simulations, distinction was made between the material properties of the air and of the PLA. Both materials were discretized with the same element type. All analyses that were performed in this work were assumed to be physically linear. This assumption is deemed valid as the maximum temperature that the specimens were heated up to is lower than the glass transition temperature of PLA. Thus, the constant thermal properties as given in table 1 were used in the numerical simulations. In the simulations, the measured value of 56\({}^{\circ}\)C was used for the temperature of the heated printing bed, unless it is explicitly stated otherwise.
The overall experimental and numerical methodology is summarized in figure 6. On the left side of the chart it is shown that the influence of the infill density (and pattern) is measured experimentally in specimens S1 - S5. Specimens S6 and S7 are used for the measurement of the influence of the air voids. Prior to numerical validation of these two factors, the convective thermal parameters \(h\) and \(T_{c}\) are numerically calibrated by using the experimental measurements on specimen S1, and numerically validated by comparing numerical and experimental results on specimens S2 and S3.
## 3 Experimental results
### Varying infill density
Specimens S1, S2 and S3 were heated to capture the influence of the infill density on the heat flow. Figure 7 shows the measured average temperature and envelope for each specimen as well as a comparison of the measured average temperature between the specimens with different infill density. The envelope data consists of three measurement repetitions on five data points for each measurement (15 data points total). It can be seen that the infill density affects the steady-state temperature and the heating rate. The significant increase in
Figure 6: Summary of experimental and numerical methodology used in this paper
air content for the samples with lower infill density results in less mass to be heated, which explains the faster heating rate.
### Varying extrusion factor
Figure 8 shows the results of the thermal measurements on the specimens S6 and S7 with varying air void size. Both the average values and the envelopes are plotted. Additionally, a comparison is made between the results of samples S6 and S7 to see what the influence is of the air void content. It can be seen that there is no significant difference between the temperatures measured on the samples printed with different extrusion factors. Varying the extrusion factor and thus the air void size does not seem to influence the heat transfer in the printed samples.
Figure 7: Experimental results of thermal measurements samples with varying infill density
### Varying infill pattern
To see whether the type of infill pattern influences the heat transfer, a comparison is made between samples printed with a rectilinear infill pattern and samples with a gyroid infill pattern. The comparison is done at an infill density of 50% (S2 and S4) and 25% (S3 and S5). The experimental results of the thermal measurements are shown in figure 9. At both infill densities, the differences between the measured temperatures of the rectilinear and gyroid specimen are negligible. Both the heating rate and steady-state temperature are similar for the samples with a rectilinear and gyroid infill pattern.
Figure 8: Results thermal measurements on specimens printed with 100% infill density and varying extrusion factors
## 4 Numerical results & discussion
### Determining the convective parameters
The three cases as described in section 2.3 were used in the calibration of the convective parameters, \(h\) and \(T_{c}\). The numerical results presented below were fitted to the experimental results from the thermal measurements on sample S1. The experimental thermal measurements of samples S2 and S3 were used for validation of the determined parameters.
#### 4.1.1 Case 1
The first situation considered in the fitting process was the one in which the ambient temperature was assumed to be equal to the room temperature of 25\({}^{\circ}\)C. This value was also used as the initial temperature of the FE sample in the numerical simulations. The heat transfer coefficient in the simulations was varied between 10 and 30 W/m\({}^{2}\cdot\)K. The results are plotted in figure 10 for \(h\)=10, 20, 30 W/m\({}^{2}\cdot\)K. It is obvious that none of the simulations show agreement with the measurements. In each of the simulations, the converged temperature is significantly lower than the steady-state temperature found in the measurement (within the time frame of 40 minutes). Increasing the heat transfer coefficient above 30 W/m\({}^{2}\cdot\)K leads to an even lower steady-state temperature and a faster heating rate. For \(h\)=10 W/m\({}^{2}\cdot\)K, the steady-state is not reached within the considered time span, so using a smaller value for the heat transfer coefficient will delay this even further.
#### 4.1.2 Case 2
The assumption of the printed specimen being subjected to room temperature convection in the vicinity of its free surfaces seems to be incorrect. The simulations were therefore repeated for the same values of \(h\) as in figure 10,
Figure 9: Influence of infill pattern on heat transfer (experimental results)
but in combination with higher values for the ambient temperature. In figure 11 the results are plotted for the simulations in which the ambient temperature was varied between 25\({}^{\circ}\)C and 32\({}^{\circ}\)C, with heat transfer coefficients of 20 W/m\({}^{2}\cdot\)K and 30 W/m\({}^{2}\cdot\)K. The same convective parameters were applied at all free surfaces. It can be seen that the correct steady-state temperature is reached for \(h\)=20 W/m\({}^{2}\cdot\)K and \(T_{c}\)=32\({}^{\circ}\)C. However, the temperature at the start of the simulation increases much faster than in the experimental measurements. This problem is not solved by choosing a different value for \(h\) than those displayed in figure 11.
#### 4.1.3 Case 3
In an attempt to tackle the overestimated heating rate at the start of the simulations, the ambient air temperature at the top surface \(T_{c;\;t}\), was varied
Figure 11: Numerical simulations compared to experiments (case 2)
Figure 10: Numerical simulations compared to experiments (case 1)
between 25\({}^{\circ}\)C and 32\({}^{\circ}\)C. The temperature at the side surfaces of the specimens \(T_{c;\;s}\), was chosen to be higher than room temperature, again under the assumption that the printing bed heated up the air above it too. This temperature was varied between 32\({}^{\circ}\)C and the bed temperature. Heat transfer coefficients of 20, 25 and 30 W/m\({}^{2}\cdot\)K were prescribed at all free surfaces. In figure 12 the results are plotted for the case in which the temperature at the side surfaces is equal to the bed temperature and the temperature at the top surface equals 27\({}^{\circ}\)C. It can be seen that these values for the air temperature combined with a heat transfer coefficient of 25 W/m\({}^{2}\cdot\)K provide a good match with the experimental measurements.
Typically, the exact temperature on the surface of the printing bed is not measured but assumed to be equal to the value prescribed in the printing settings. To account for this, the data fitting process was repeated for \(T_{b}=T_{c;\;s}\)=60\({}^{\circ}\)C. These results are shown in figure 12. It can be seen that \(h\)=30 W/m\({}^{2}\cdot\)K combined with \(T_{c;\;s}\)=60\({}^{\circ}\)C and \(T_{c;\;t}\)= 27\({}^{\circ}\)C also provides a good fit with the experimental results. In the remainder of this paper, the following convective parameters are used: \(h\)=25 W/m\({}^{2}\cdot\)K, \(T_{c;\;s}\)=56\({}^{\circ}\)C, \(T_{c;\;t}\)=27\({}^{\circ}\)C.
#### 4.1.4 Validation of \(h\) and \(T_{c}\)
The accuracy of the calibrated convective parameters was validated by simulating the thermal measurements performed on the specimens printed with 50% and 25% infill density. The results of the numerical simulations are plotted against the experiments in figure 13 for both of the specimens. Good agreement is found between the simulations and experiments.
Figure 12: Fitting of numerical simulations to experiments (case 3)
### Modeling simplifications of the infill structure
#### 4.2.1 Air voids
In the simulations of the samples with an infill density of 100% and an extrusion factor of 1.0 presented in section 4.1, the FE meshes were made completely dense, ignoring the air voids that were present between the printed filaments. At the same time, the experiments performed on the samples with 100% infill density and a varying extrusion factor (section 3.2) implied that the role of the air voids was negligible. In this section, we investigate whether a simplified material discretization as used in the aforementioned simulations is accurate, or if it is necessary to model the air voids as well.
The simulations presented in this section were performed with an FE mesh which did not include air voids, and with FE meshes which included air voids according to the extrusion factors that were used in the printed specimens (table 3). A sample of the detailed mesh including air voids is shown in figure 4. It consists of 13 hexahedral and prismatic elements for the cross-section of a single printed filament with surrounding air, compared to one brick element when air voids are not included in the discretization.
The numerical results are shown in figure 14. First, the experimental results for the specimens printed with different extrusion factors are compared with the simulations obtained with the meshes including air voids. It can be seen that there is good agreement between the experimental and numerical results. The difference between the steady-state temperatures found in the simulations and experiments is less than 2%. A comparison is also made between the numerical results for the meshes in which air voids of varying size are included and a mesh in which the material is discretized as a continuum without air voids (figure 15). No significant difference is found between the computed temperatures for the different meshes. This coincides with the observation found
Figure 13: Validation of determined convective boundary conditions: numerical simulations against experiments
in the experimental measurements, namely that the air void size has an insignificant influence on the heat flow. Lastly, the meshes without air voids are coarsened to a mesh where the element dimensions are twice as large as the filament cross-sectional dimensions (mesh 2), and to a mesh where the element dimensions are five times as large as the filament dimensions (mesh 3). As can be seen in figure 15, no significant difference is found between the results obtained with these three meshes.
#### 4.2.2 Infill density and pattern
From section 3.1 we know that varying the infill density has a significant influence on the heating rate and steady-state temperature (figure 7), whereas the
Figure 14: Influence of the air void size: numerical results and experimental validation
Figure 15: Influence of the air voids and mesh coarseness (simulations)
exact infill pattern does not seem to influence the aforementioned characteristics (figure 9). This means that it might be possible to accurately simulate the heat transfer within complex infill geometries by using much coarser, simplified FE meshes. To test this hypothesis, numerical models were set up which simulated the heat transfer in the samples printed with a 25% rectilinear infill. In the models the geometry was discretized with the correct infill density and type, but with meshes of varying coarseness. In the default mesh the element dimensions equaled the dimensions of the filament cross-section. Based on the default mesh, two other, coarser FE meshes were used. In mesh 2, the element dimensions were twice as large as the filament cross-sectional dimensions. In the coarsest mesh (mesh 3) the element dimensions were five times as large as those of the filament cross-section. Since the empty spaces between the printed material were modeled with air elements, the air elements were also coarsened accordingly. The three different meshes are shown in figure 16.
The results are plotted in figure 17. It can be seen that all of the aforementioned mesh discretizations yield a similar temperature profile. By using the simplified and coarse rectilinear mesh (mesh 3), the computational time was reduced significantly. The number of elements and the CPU time required to solve the numerical models with the various meshes are listed in table 4. When compared to the experimental results of the thermal measurements performed on the printed samples with rectilinear infill, there is also good agreement.
The next modeling simplification was to discretize a complex infill pattern with both a simplified pattern and a coarser mesh. This was done by comparing the experimental results of the specimen with a gyroid infill pattern with the coarsest rectilinear finite element model, both with a 25% infill density. These results are shown in figure 18. Again, there is good agreement between the experiments and the simulations. The results in figures 17 and 18 support the experimental findings, namely that the exact infill pattern does not influence the temperature profiles, as long as the correct infill density is taken into account.
## 5 Conclusions & Outlook
The overall objective of this work was to provide more clarity on two aspects which affect the accuracy and efficiency of heat transfer simulations of FFF printing. On the one hand, a closer look has been taken at the prescription of convective thermal boundary conditions. Values for the heat transfer coefficient and the ambient temperature, expressing the natural convection between FFF
\begin{table}
\begin{tabular}{l c c} \hline \hline & Elements [-] & CPU [min] \\ \hline Mesh 1 & 422500 & 96 \\ Mesh 2 & 54450 & 5 \\ Mesh 3 & 3380 & \(<\)1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the number of elements and CPU time for various FE meshes
printed material and its environment have been calibrated through numerical data fitting of experimental thermal measurements.
On the other hand, simplifications for modeling air-filled and complex infill structures have been investigated. It has been found that accurate heat transfer can be simulated in such structures, as long as the infill density is respected. Discretization of the exact infill geometry does not lead to a significant increase in accuracy of the heat transfer simulations when compared to the experimental data. In densely packed geometries, the printed material can be modeled as a continuum. Discretization of the air voids between the printed filaments is not necessary for an accurate heat transfer prediction.
The work performed in this paper was limited to the use of PLA and a Prusa i3 MK3 printer. For practical applications it would be of interest to expand the research to other materials such as ABS and to industrial 3D printers. It
Figure 16: FE meshes of varying coarseness for an infill density of 25% (blue: PLA elements, purple: air elements)
Figure 17: Comparison of various meshes for a rectilinear infill with 25% infill density
must also be noted that the influence of the surface roughness of the printed samples on the convective heat transfer was not taken into account. Lastly, the modeling simplifications proposed in this work have only been applied in thermal simulations. The next step is to model the extrusion process in FFF printing and to investigate the influence of the process parameters on the heat transfer and residual stresses.
Figure 18: Complex infill structure discretized with a simplified coarse mesh: gyroid infill vs. FE mesh 3 |
# Improved Bayesian Regret Bounds for Thompson Sampling in Reinforcement Learning

Ahmadreza Moradipari, Mohammad Pedramfar, Modjtaba Shokrian Zini, Vaneet Aggarwal
###### Abstract
In this paper, we prove the first Bayesian regret bounds for Thompson Sampling in reinforcement learning in a multitude of settings. We simplify the learning problem using a discrete set of surrogate environments, and present a refined analysis of the information ratio using posterior consistency. This leads to an upper bound of order \(\widetilde{O}(H\sqrt{d_{l_{1}}T})\) in the time inhomogeneous reinforcement learning problem, where \(H\) is the episode length and \(d_{l_{1}}\) is the Kolmogorov \(l_{1}-\)dimension of the space of environments. We then find concrete bounds of \(d_{l_{1}}\) in a variety of settings, such as tabular, linear and finite mixtures, and discuss how our results are either the first of their kind or improve the state-of-the-art.
## 1 Introduction
Reinforcement Learning (RL) is a sequential decision-making problem in which an agent interacts with an unknown environment typically modeled as a Markov Decision Process (MDP) Sutton and Barto (2018); Bertsekas and Tsitsiklis (1996). The goal of the agent is to maximize its expected cumulative reward. This problem has a variety of applications, including robotics, game playing, resource management, and medical treatments. The key challenge in RL is to balance the so-called exploration-exploitation trade-off efficiently: exploring unseen state-action pairs to gain more knowledge about the unknown environment or exploiting the current knowledge to maximize the expected cumulative reward. Two efficient approaches have been developed to control this trade-off: _optimism in the face of uncertainty_ (OFU) and _Thompson Sampling_ (TS) (or Posterior Sampling (PS)). OFU constructs a confidence set of statistically plausible MDPs that includes the true MDP with high probability and plays an optimistic policy according to the MDP with maximum gain from this set Auer et al. (2008); Tossou et al. (2019). TS samples a statistically plausible MDP from a posterior distribution and plays the optimistic policy of the sampled MDP Osband et al. (2013); Osband and Van Roy (2017). In this work, we focus on the latter, and by combining an information theoretical approach first introduced by Russo and Van Roy (2016) with analysis based on posterior consistency tools, we prove state-of-the-art Bayesian regret bounds in a variety of settings.
In this paper, we start by defining the Bayesian RL problem, where transition and reward functions are Bayesian and time inhomogeneous. The Bayesian RL problem we consider is more comprehensive than in previous works, as we allow for both Bayesian transition and Bayesian rewards, and
do not make any assumption on their individual prior. To simplify the learning problem, we utilize the notion of surrogate environments, which is a discretization of the environment space, whose learning task and TS regret serve as a proxy for those of the main problem. The construction of the surrogate environments was first introduced by Hao and Lattimore (2022) with an incorrect proof, which is fixed in our work by defining the surrogate environments through an optimization. Of main importance is the size of this new environment space. The Bayesian regret decomposes into the product of two terms, one being the cumulative mutual information of the environment and history traversed by the policy. By the well-known entropy estimation of the mutual information, this significant factor in the regret is connected to the \(l_{1}-\)dimensions (\(d_{l_{1}}\)) of the transition and reward function spaces, which can be more succinctly interpreted as the \(l_{1}-\)dimension \(d_{l_{1}}\) of the environment space. The latter is in turn estimated by the size of the space of surrogate environments.
The information ratio, representing a trade-off of exploration/exploitation, is the other significant term in the decomposition of the TS Bayesian regret. In an improvement over Hao and Lattimore (2022), our novel analysis of this ratio, based on posterior consistency tools, shows that this trade-off is bounded by \(H^{3/2}\), where \(H\) is the episode length. This bound is general and independent of the dimension of the transition/reward function space at each step, which is a key factor behind the advantage of our regret bound, such as the \(\sqrt{SA}\) advantage in the tabular case compared to Hao and Lattimore (2022), or the lack of any restriction on the prior (e.g., Dirichlet prior) compared to Osband and Van Roy (2017). Following a further refined approach, we finally estimate the TS Bayesian regret to be \(\widetilde{O}(\lambda\sqrt{d_{l_{1}}T})\) for large enough \(T\) in the time inhomogeneous setting. Here, a new term 'value diameter' \(\lambda\), which is the average difference of the optimal value functions at different states, is used in bounding the information ratio, where instead of \(H^{3/2}\), we have the smaller term \(\lambda H^{1/2}\). Bounding the information ratio with \(\lambda\) is a conceptual contribution of our work, which shows that the ratio is bounded by a _value-dependent_ term, which is in nature different from \(H\) but always \(\leq H+1\). Further, there exists another bound for \(\lambda\); in environments where states are reachable from one another in \(D\) steps, we have \(\lambda\leq D+1\). In 'well-connected' MDPs, one could have \(D\ll H\), implying an improvement over the \(H^{3/2}\) information ratio bound.
Our generic bound is abstract in terms of \(d_{l_{1}}\), so we estimate it in more explicit terms for useful applications. Hao and Lattimore (2022) bounded \(d_{l_{1}}\) in the tabular and linear cases without formalizing this notion: for tabular MDPs, \(d_{l_{1}}\) was bounded by \(SAH\), while for linear MDPs with feature space dimension \(d_{f}\), they claimed the bound \(d_{f}H\), which we investigate. As detailed in Appendix F, we show a counterexample to their analysis, and we manage to find a correct estimate in this setting. We also introduce finite mixtures MDPs and are the first to prove a TS Bayesian regret of order \(\widetilde{O}(\lambda\sqrt{HmT})\), where \(m\) is the number of mixtures.
Lastly, we note that our regret bound of order \(\widetilde{O}(\lambda\sqrt{d_{l_{1}}T})\) is the first in the general nonlinear time inhomogeneous Bayesian RL setting for TS, and generalizing (Osband and Van Roy, 2017, Conj. 1), we conjecture it to be optimal if \(\lambda\) can be replaced by \(\widetilde{O}(\sqrt{H})\).
Related work.Since the introduction of information ratio by Russo and Van Roy (2014, 2016), a new line of research has emerged to provide tighter regret bounds for TS. The general approach involves factoring the Bayesian regret into two components: an information ratio that captures the trade-off between optimal action selection and information gain, and a cumulative information gain term that depends on the target environment and the history of previous observations. Then, both components are bounded separately using information theoretic tools.
In the bandit setting, this analysis has been used to bound Bayesian regret for TS Dong and Van Roy (2018); Bubeck and Sellke (2020), as well as that of a new algorithm called information-directed sampling (IDS) Russo and Van Roy (2014); Liu et al. (2018); Kirschner et al. (2021); Hao et al. (2021, 2022). This analysis has also been used in partial monitoring Lattimore and Szepesvari (2019); Lattimore and Gyorgy (2021) and RL with a specific Dirichlet prior and additional assumptions Lu and Van Roy (2019); Lu (2020). More recently, Hao and Lattimore (2022) studied the Bayesian regret of TS in RL without any prior assumptions for tabular MDP. This is the closest work to our paper and we discuss our generalization in detail in Section 5.
It is worth noting that there is another line of work that incorporates confidence regions into TS to achieve Bayesian regret bounds that can match the best possible frequentist regret bounds by UCB in both bandit settings Russo and Van Roy (2014) and RL Osband et al. (2013); Osband and Van Roy (2014); Osband et al. (2019). However, this technique results in a sub-optimal Bayesian regret, as
the best bound known for UCB itself is not optimal. The Bayesian tabular MDP case has also been studied with the additional Dirichlet prior assumption in Osband and Van Roy (2017), where they achieve a regret bound matching ours. In an independent approach, the first non-linear Bayesian RL model was considered by Fan and Ming (2021) with a regret bound of \(dH^{3/2}T^{1/2}\) where \(d\) is a notion of dimension of their model, but their results were limited to Gaussian process settings with linear kernels. Finally, Chakraborty et al. (2022) considered general non-linear Bayesian RL models and introduced an algorithm that obtains \(dH^{1+\alpha/2}T^{1-\alpha/2}\) where \(\alpha\) is a tuning parameter and \(d\) is the dimension of \(\mathcal{S}\times\mathcal{A}\times\mathcal{S}\).
For the frequentist setting, various algorithms with provable regret guarantees have been proposed for model-free tabular MDPs. These include UCBVI Azar et al. (2017), optimistic Q-learning Jin et al. (2018), RLSVI Russo (2019); Zanette et al. (2020), and UCB-Advantage Zhang et al. (2020). These algorithms were further generalized to linear or linear mixture MDPs, such as LSVI-UCB Jin et al. (2020), OPPO Cai et al. (2020), and UCRL-VTR Ayoub et al. (2020); Zhou et al. (2021). Slightly more related to our work, model-based frequentist bounds have also been shown for a variant of posterior sampling (PS) in the tabular setting Agrawal and Jia (2017). For the specific variant of _optimistic_ PSRL, the optimal bound in the tabular setting with a Dirichlet prior was shown in Tiapkin et al. (2022). To our knowledge, a frequentist bound for PS is still an open problem for general RL. Minimax regret bounds have also been studied for variants of TS, as in Dann et al. (2021). Most recently, Agarwal et al. (2022) presented VO\(Q\)L, an algorithm that achieves the optimal bound of \(\widetilde{O}(d\sqrt{HT})\) in the general model-free nonlinear setting, where \(d\) represents the generalized Eluder dimension of the value function space. Note that the notion of dimension used in our regret bounds is different from, and unrelated to, the Eluder dimension used in model-free estimations. For frequentist model-based methods, the optimal bound was achieved in the tabular setting by Azar et al. (2017). As another research direction, Duan et al. (2021) utilized reproducing kernel Hilbert spaces to estimate the value of an infinite-horizon Markov reward process (MRP) in RL, and Abedsoltan et al. (2023) addressed scalability challenges in kernel models.
While our work's emphasis is on theoretical guarantees for TS, we discuss here the experiments using this algorithm. Previous works on PSRL Russo and Van Roy (2014); Liu et al. (2018); Kirschner et al. (2021); Hao et al. (2022); Osband and Van Roy (2017) come with extensive experiments on TS (and/or its variants), and discussions on the computational efficiency of PSRL. In particular, experiments in Osband and Van Roy (2017) support the assertion that "PSRL dramatically outperforms existing algorithms based on OFU". In addition, PSRL with oracle access has been shown to be the most performant, especially when compared to recent OFU-based UCBVI/UCBVI-B, or even variants of PSRL such as Optimistic PSRL (Tiapkin et al., 2022, Fig. 1.3). However, an important limitation in experiments is the need for oracle access to an optimal policy, which cannot always be satisfied efficiently. Nevertheless, clever engineering can make TS work even in large-scale Deep RL. Indeed, for general RL settings, the recent work Sasso et al. (2023) shows how to implement TS in Deep RL on the Atari benchmark and concludes that "Posterior Sampling Deep RL (PSDRL) significantly outperforms previous state-of-the-art randomized value function approaches, its natural model-free counterparts, while being competitive with a state-of-the-art (model-based) reinforcement learning method in both sample efficiency and computational efficiency". In summary, experiments in the literature provide enough support for the empirical performance of TS.
## 2 Preliminaries
### Finite-horizon MDP
We follow the literature's conventions in our notation and terminology to avoid confusion when comparing results. The environment is a tuple \(\mathcal{E}=(\mathcal{S},\mu_{\mathcal{S}},\mathcal{A},\mu_{\mathcal{A}},H, \{P_{h}\}_{h=1}^{H},\{r_{h}\}_{h=1}^{H})\), where \(\mathcal{S}\) is the topological measurable state space, \(\mathcal{A}\) is the topological measurable action space, \(\mu_{\mathcal{S}}\) and \(\mu_{\mathcal{A}}\) are base probability measures on \(\mathcal{S}\) and \(\mathcal{A}\) respectively, \(H\) is the episode length, \(P_{h}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_{\mathcal{S},\mu_{\mathcal{ S}}}\) is the transition probability kernel, and \(r_{h}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_{[0,1],\mathrm{Lebesgue}}\) is the reward function, where we fix the convention \(r(s,a):=\mathbb{E}_{x}[r(x|s,a)]=\int_{0}^{1}xr(x|s,a)\,\mathrm{d}x\) as we mostly deal with its mean value. Notice that \(\Delta_{X,\mu}\) is the set of probability distributions over \(X\) that are absolutely continuous with respect to \(\mu\). We will use \(\Delta_{X}\) when the base measure is clear from the context. We assume \(\mathcal{S}\), \(\mathcal{A}\) are known and deterministic while the transition probability kernel and reward are unknown and random. Throughout the paper, the implicit dependence of \(P_{h}\) and \(r_{h}\) on \(\mathcal{E}\) should be clear from the context.
Let \(\Theta_{h}^{P}\) be the topological function space of \(P_{h}\) and \(\Theta^{P}=\Theta_{1}^{P}\times\cdots\times\Theta_{H}^{P}\) be the full function space. The space \(\Theta_{h}^{P}\) is assumed to be separable and equipped with prior probability measure \(\rho_{h}^{P}\) yielding the product prior probability measure \(\rho^{P}=\rho_{1}^{P}\otimes\cdots\otimes\rho_{H}^{P}\) for \(\Theta^{P}\). The exact same definition with similar notations \(\Theta_{h}^{R},\rho_{h}^{R},\rho^{R},\Theta^{R}\) applies for the reward function. Notice the explicit assumption of time inhomogeneity in these definitions, with all 'layers' \(h\) being independent. The two sets define the set of all environments parametrized by \(\Theta=\Theta_{1}\times\cdots\times\Theta_{H}\) where \(\Theta_{h}=\Theta_{h}^{P}\times\Theta_{h}^{R}\). Note that the prior is assumed to be known to the learner. This setting implies that an environment \(\mathcal{E}\) sampled according to the prior \(\rho=\rho^{P}\otimes\rho^{R}\) is essentially determined by its transition and reward functions pair \(\{(P_{h},r_{h})\}_{h=1}^{H}\). We simplify the notation to view \(\Theta\) as the set of all environments, i.e., saying \(\mathcal{E}\in\Theta\) should be viewed as \(\{(P_{h},r_{h})\}_{h=1}^{H}\in\Theta\). The space of all possible real-valued functions \(\{(P_{h},r_{h})\}_{h=1}^{H}\) has a natural vector space structure. Therefore it is meaningful to discuss the notion of the convex combination of environments. We assume that \(\Theta\) is a convex subspace of the space of all possible environments. This assumption is not restrictive, since we may replace any environment space with its convex hull. Note that we do not assume that the support of the prior is convex.
_Remark 1_.: The case of joint prior may be of interest, but to our knowledge all prior works also take \(\rho^{P},\rho^{R}\) to be independent.
Agent, policy and history. An agent starts at an initial state \(s_{1}^{\ell}\), which is fixed for all episodes \(\ell\). It observes a state \(s_{h}^{\ell}\) at layer \(h\) of episode \(\ell\), takes action \(a_{h}^{\ell}\), and receives reward \(r_{h}^{\ell}\). The environment changes to the next random state \(s_{h+1}^{\ell}\) with probability \(P_{h}(s_{h+1}^{\ell}|s_{h}^{\ell},a_{h}^{\ell})\). The agent stops acting at \(s_{H+1}\) and the environment is reset to its initial state.
We define \(\mathcal{H}_{\ell,h}\) as the history \((s_{1}^{\ell},a_{1}^{\ell},r_{1}^{\ell},\ldots,s_{h}^{\ell},a_{h}^{\ell},r_{h} ^{\ell})\). Denote by \(\mathcal{D}_{\ell}=(\mathcal{H}_{1,H},\ldots,\mathcal{H}_{\ell-1,H})\) the history up to episode \(\ell\), where \(\mathcal{D}_{1}:=\emptyset\). Finally, let \(\Omega_{h}=\prod_{i=1}^{h}(\mathcal{S}\times\mathcal{A}\times[0,1])\) be the set of all possible histories up to layer \(h\).
A policy \(\pi\) is represented by stochastic maps \((\pi_{1},\ldots,\pi_{H})\) where each \(\pi_{h}:\Omega_{h-1}\times\mathcal{S}\rightarrow\Delta_{\mathcal{A},\mu_{ \mathcal{A}}}\). Let \(\Pi_{S}\) denote the entire stationary policy class, stationary meaning a dependence only on the current state and layer and let \(\Pi\subseteq\Pi_{S}\).
Value and state occupancy functions. Define the value function \(V_{h,\pi}^{\mathcal{E}}\) as the value of the policy \(\pi\) interacting with \(\mathcal{E}\) at layer \(h\):
\[V_{h,\pi}^{\mathcal{E}}(s):=\mathbb{E}_{\pi}^{\mathcal{E}}\left[\sum_{h^{\prime }=h}^{H}r_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})\bigg{|}s_{h}=s\right]\,, \tag{1}\]
where \(\mathbb{E}_{\pi}^{\mathcal{E}}\) denotes the expectation over the trajectory under policy, transition, and reward functions \(\pi,P_{h},r_{h}\). The value function at step \(H+1\) is set to null, \(V_{H+1,\pi}^{\mathcal{E}}(\cdot):=0\). We assume there is a measurable function \(\pi_{\mathcal{E}}^{*}:\Theta\rightarrow\Pi\) such that \(V_{h,\pi_{\mathcal{E}}^{*}}^{\mathcal{E}}(s)=\max_{\pi\in\Pi}V_{h,\pi_{ \mathcal{E}}^{*}}^{\mathcal{E}}(s),\;\forall s\in\mathcal{S},h\in[H]\). The optimal policy \(\pi^{*}\) is a function of \(\mathcal{E}\), making it a random variable in the Bayesian setting. Lastly, let the _state-action occupancy probability measure_ be \(\mathbb{P}_{\pi}^{\mathcal{E}}(s_{h}=s,a_{h}=a)\), also known as the state occupancy measure under policy \(\pi\) and environment \(\mathcal{E}\). It follows from the definitions that this measure is absolutely continuous with respect to \(\mu_{\mathcal{S}\times\mathcal{A}}:=\mu_{\mathcal{S}}\times\mu_{\mathcal{A}}\). Let \(d_{h,\pi}^{\mathcal{E}}(s,a)\) denote the Radon-Nikodym derivative so that we have \(d_{h,\pi}^{\mathcal{E}}(s,a)\,\mathrm{d}\mu_{\mathcal{S}\times\mathcal{A}}= \mathrm{d}\mathbb{P}_{\pi}^{\mathcal{E}}(s_{h}=s,a_{h}=a)\). We will assume throughout the paper that this density \(d_{h,\pi}^{\mathcal{E}}(s,a)\) is measurable and upper bounded for all \(\pi,\mathcal{E},s,a,h\). The upper bound is a reasonable assumption, and it happens trivially in the tabular case (\(d_{h,\pi}^{\mathcal{E}}(s,a)\leq SA\)). This also happens, e.g., when one assumes that the maps \((\mathcal{E},s,a,s^{\prime},h)\mapsto P_{h}^{\mathcal{E}}(s^{\prime}|s,a)\) and \((\pi,s,a,h)\mapsto\pi_{h}(a|s)\) are continuous and \(\Theta\), \(\mathcal{S}\), \(\mathcal{A}\) and the set of all optimal policies (as a subset of \(\Pi\)) are compact.
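For intuition, in the tabular case the occupancy density can be computed exactly by a simple forward recursion over layers. The following is a minimal sketch under that assumption; the array shapes and the function name are our own illustrative choices, not notation from the analysis.

```python
import numpy as np

def occupancy(P, pi, s1):
    """State-action occupancy d_{h,pi}(s, a) for a tabular, time-inhomogeneous MDP.

    P:  (H, S, A, S) array of transition kernels P_h(s'|s, a)
    pi: (H, S, A) array of action probabilities pi_h(a|s)
    s1: index of the fixed initial state
    Returns an (H, S, A) array; each layer sums to 1.
    """
    H, S, A, _ = P.shape
    d = np.zeros((H, S, A))
    state = np.zeros(S)
    state[s1] = 1.0                                 # law of s_1 is a point mass
    for h in range(H):
        d[h] = state[:, None] * pi[h]               # P(s_h = s, a_h = a)
        state = np.einsum('sa,sax->x', d[h], P[h])  # push forward to s_{h+1}
    return d
```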
### Bayesian regret
We formulate the expected regret over \(L\) episodes and \(T=LH\) total steps in an environment \(\mathcal{E}\) as
\[\mathfrak{R}_{L}(\mathcal{E},\pi)=\mathbb{E}\left[\sum_{\ell=1}^{L}\left(V_{1, \pi_{\mathcal{E}}^{\mathcal{E}}}^{\mathcal{E}}(s_{1}^{\ell})-V_{1,\pi^{ \mathcal{E}}}^{\mathcal{E}}(s_{1}^{\ell})\right)\right]\,, \tag{2}\]
where the expectation is over the randomness of \(\pi=\{\pi^{\ell}\}_{\ell}\). The Bayesian regret is \(\mathfrak{B}\mathfrak{R}_{L}(\pi)=\mathbb{E}[\mathfrak{R}_{L}(\mathcal{E},\pi)]\). For Thompson Sampling (TS), the algorithm selects the optimal policy of a given
sample \(\mathcal{E}_{\ell}\) picked from the posterior \(\mathcal{E}_{\ell}\sim\mathbb{P}(\mathcal{E}\in\cdot|\mathcal{D}_{\ell})\):
\[\pi^{\ell}_{\text{TS}}=\text{argmax}_{\pi\in\Pi}V_{1,\pi}^{\mathcal{E}_{\ell}}( s_{1}^{\ell})\,. \tag{3}\]
Importantly, the law of TS aligns with the posterior, i.e., \(\mathbb{P}(\mathcal{E}|\mathcal{D}_{\ell})=\mathbb{P}(\pi^{\ell}_{\text{TS}}= \pi^{*}_{\mathcal{E}}|\mathcal{D}_{\ell})\).
_Remark 2_.: Note that \(\mathbb{P}(\pi^{\ell}_{\text{TS}}=\pi^{*}_{\mathcal{E}}|\mathcal{D}_{\ell})\) is a probability for a specific measure on the space of optimal policies. To ensure that \(\int_{\Pi^{*}}\mathbb{P}(\pi^{*}|\mathcal{D}_{\ell})\,\mathrm{d}\rho_{\Pi^{*}}=1\), we need an appropriate measure \(\rho_{\Pi^{*}}\) on \(\Pi^{*}\). Given the law of TS, the natural choice for this measure is the push-forward of the prior measure \(\rho\) under the map \(\Theta\to\Pi^{*}\) given by \(\mathcal{E}\mapsto\pi^{*}_{\mathcal{E}}\).
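For concreteness, the following is a minimal sketch of TS in a small tabular instance of this setting. The toy "true" environment, the Dirichlet posterior over transitions, and the empirical-mean treatment of rewards are all illustrative assumptions on our part, not ingredients of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H, L = 5, 2, 4, 200

# A hypothetical "true" time-inhomogeneous MDP, used only as a test bed.
P_true = rng.dirichlet(np.ones(S), size=(H, S, A))   # (H, S, A, S)
r_true = rng.uniform(size=(H, S, A))                 # mean rewards in [0, 1]

def plan(P, r):
    """Backward induction: the optimal deterministic policy of a sampled MDP."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r[h] + P[h] @ V                          # Q[s, a] = r + E[V(s')]
        pi[h], V = Q.argmax(axis=1), Q.max(axis=1)
    return pi

# Independent Dirichlet(1, ..., 1) posterior per (h, s, a); rewards are
# summarized by empirical means (a simplification of a full reward posterior).
counts = np.ones((H, S, A, S))
r_sum, n = np.zeros((H, S, A)), np.ones((H, S, A))
for ell in range(L):
    P_sample = np.array([[[rng.dirichlet(counts[h, s, a]) for a in range(A)]
                          for s in range(S)] for h in range(H)])
    pi = plan(P_sample, r_sum / n)                   # Eq. (3): optimal in the sample
    s = 0                                            # fixed initial state s_1
    for h in range(H):
        a = pi[h, s]
        s_next = rng.choice(S, p=P_true[h, s, a])
        r = rng.binomial(1, r_true[h, s, a])         # Bernoulli reward in [0, 1]
        counts[h, s, a, s_next] += 1                 # conjugate posterior update
        r_sum[h, s, a] += r
        n[h, s, a] += 1
        s = s_next
```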
### Notations
For Bayesian RL, conditional expressions involving a given history \(\mathcal{D}_{\ell}\) are widely used. We adopt the notation in Hao and Lattimore (2022) to refer to such conditionals; let \(\mathbb{P}_{\ell}(\cdot):=\mathbb{P}(\cdot|\mathcal{D}_{\ell})\), \(\mathbb{E}_{\ell}[\cdot]:=\mathbb{E}[\cdot|\mathcal{D}_{\ell}]\). We can rewrite the Bayesian regret as
\[\mathfrak{B}\mathfrak{R}_{L}(\pi)=\sum_{\ell=1}^{L}\mathbb{E}\left[\mathbb{E}_ {\ell}\left[V_{1,\pi^{\mathcal{E}}_{\ell}}^{\mathcal{E}}(s_{1}^{\ell})-V_{1,\pi }^{\mathcal{E}}(s_{1}^{\ell})\right]\right] \tag{4}\]
and define the conditional mutual information \(\mathbb{I}_{\ell}(X;Y):=D_{\text{KL}}(\mathbb{P}((X,Y)\in\cdot|\mathcal{D}_{ \ell})||\mathbb{P}(X\in\cdot|\mathcal{D}_{\ell})\otimes\mathbb{P}(Y\in\cdot| \mathcal{D}_{\ell}))\). For a random variable \(\chi\) and random policy \(\pi\), the following will be involved in the information ratio:
\[\mathbb{I}_{\ell}^{\mathcal{E}}(\chi;\mathcal{H}_{\ell,h}):=\mathbb{I}_{\ell} (\chi;\mathcal{H}_{\ell,h}|\pi)=\mathbb{E}_{\pi}[D_{\text{KL}}(\mathbb{P}_{ \ell}((\chi,\mathcal{H}_{\ell,h})\in\cdot|\pi)||\mathbb{P}_{\ell}(\chi\in \cdot|\pi)\otimes\mathbb{P}_{\ell}(\mathcal{H}_{\ell,h}\in\cdot|\pi))]\,, \tag{5}\]
Note that \(\mathbb{E}[\mathbb{I}_{\ell}(X;Y)]=\mathbb{I}(X;Y|\mathcal{D}_{\ell})\). To clarify, \(\mathbb{P}_{\ell}(\mathcal{H}_{\ell,h}\in\cdot|\pi)\) is the probability of \(\mathcal{H}_{\ell,h}\) being generated under \(\pi\) within some environment. Given that the histories under consideration are generated by the TS algorithm, they are always generated in the true environment \(\mathcal{E}\) under an optimal policy \(\pi^{*}_{\mathcal{E}^{\prime}}\). For \(\pi=\pi^{\ell}_{\text{TS}}\), this can be computed as \(\mathbb{P}_{\ell}(\mathcal{H}_{\ell,h}|\pi)=\int_{\mathcal{E}}P(\mathcal{H}_{ \ell,h}|\pi,\mathcal{E})\,\mathrm{d}\mathbb{P}_{\ell}(\mathcal{E})\), where \(P(\mathcal{H}_{\ell,h}|\pi,\mathcal{E})\) is an expression in terms of transition and reward functions of \(\mathcal{E}\) and \(\pi\).
Finally, we define \(\bar{\mathcal{E}}_{\ell}\) as the mean MDP where \(P_{h}^{\bar{\mathcal{E}}_{\ell}}(\cdot|s,a)=\mathbb{E}_{\ell}[P_{h}^{\mathcal{ E}}(\cdot|s,a)]\) is the mean of posterior measure, and similarly for \(r_{h}^{\bar{\mathcal{E}}_{\ell}}(\cdot|s,a)=\mathbb{E}_{\ell}[r_{h}^{\mathcal{ E}}(\cdot|s,a)]\). We note that under the independence assumption across layers, the same is given for the state-occupancy density \(d_{h,\pi}^{\bar{\mathcal{E}}_{\ell}}=\mathbb{E}_{\ell}[d_{h,\pi}^{\mathcal{E}}]\).
## 3 Bayesian RL problems
**Definition 1**.: A Bayesian RL in this paper refers to the time-inhomogeneous finite-horizon MDP with independent priors on transition and reward functions, as described in Section 2.1.
The Bayesian RL _problem_ is the task of finding an algorithm \(\pi\) with optimal Bayesian regret as defined in Eq. (4). Below we list the variations of this problem. A setting considered by most related works such as Osband and Van Roy (2017); Fan and Ming (2021) is the following:
**Definition 2**.: The **time (reward) homogeneous** Bayesian RL refers to the Bayesian RL setting where the prior \(\rho^{P}\) (\(\rho^{R}\)) is over the space \(\Theta^{P}\) (\(\Theta^{R}\)) containing the single transition (reward) function \(P\) (\(r\)) defining \(\mathcal{E}\), i.e., all layers have the same transition (reward) functions.
**Definition 3**.: The **tabular** Bayesian RL is a Bayesian RL where \(\mathcal{S},\mathcal{A}\) are finite sets.
**Definition 4** (Linear MDP Yang and Wang (2019); Jin et al. (2020)).: Let \(\phi^{P}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}^{d_{f}^{P}}\), \(\phi^{R}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}^{d_{f}^{R}}\) be feature maps with bounded norm \(\|\phi^{P}(s,a)\|_{2},\|\phi^{R}(s,a)\|_{2}\leq 1\). The **linear** Bayesian RL is a Bayesian RL where for any \(\mathcal{E}=\{(P_{h}^{\mathcal{E}},r_{h}^{\mathcal{E}})\}_{h=1}^{H}\in\Theta\), there exist vector-valued maps \(\psi_{h}^{P,\mathcal{E}}(s),\psi_{h}^{R,\mathcal{E}}(s)\) with bounded \(l_{2}-\)norm such that for any \((s,a)\in\mathcal{S}\times\mathcal{A}\),
\[P_{h}^{\mathcal{E}}(\cdot|s,a)=\langle\phi^{P}(s,a),\psi_{h}^{P,\mathcal{E}}( \cdot)\rangle\,,\,\,\,r_{h}^{\mathcal{E}}(\cdot|s,a)=\langle\phi^{R}(s,a),\psi _{h}^{R,\mathcal{E}}(\cdot)\rangle \tag{6}\]
A restricted version of the finite mixtures called linear mixture was first considered in Ayoub et al. (2020) in the frequentist setting. Here, we consider the general setting.
**Definition 5**.: The **finite mixtures** Bayesian RL is a Bayesian RL where for any \(h\in[H]\) there exists fixed conditional distributions \(\{Z_{h,i}^{P}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_{\mathcal{S}}\}_{i=1 }^{m_{h}^{P}}\) and \(\{Z_{h,i}^{R}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_{[0,1]}\}_{i=1}^{m_{h} ^{R}}\), such that for any environment \(\mathcal{E}\) given by \(\{(P_{h}^{\mathcal{E}},r_{h}^{\mathcal{E}})\}_{h=1}^{H}\), there exists parametrized probability distributions \(\mathbf{a}_{h}^{P,\mathcal{E}}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_{m_{h }^{P}},\mathbf{a}_{h}^{R,\mathcal{E}}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_ {m_{h}^{R}}\) such that
\[P_{h}^{\mathcal{E}}(\cdot|s,a)=\sum_{i=1}^{m_{h}^{P}}a_{h,i}^{P,\mathcal{E}}(s, a)Z_{h,i}^{P}(\cdot|s,a),\ r_{h}^{\mathcal{E}}(\cdot|s,a)=\sum_{i=1}^{m_{h}^{R}}a_{h,i}^{R, \mathcal{E}}(s,a)Z_{h,i}^{R}(\cdot|s,a) \tag{7}\]
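As a concrete illustration of Eq. (7), the sketch below assembles a finite-mixture transition kernel from fixed components in the tabular case; all shapes and names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, m = 6, 2, 3

# Fixed conditional distributions Z_i(.|s, a), shared by all environments.
Z = rng.dirichlet(np.ones(S), size=(m, S, A))        # (m, S, A, S)

def mixture_kernel(a_coef):
    """Eq. (7): P(.|s, a) = sum_i a_i(s, a) Z_i(.|s, a).

    a_coef: (S, A, m) array whose last axis lies in the simplex.
    """
    return np.einsum('sai,isax->sax', a_coef, Z)

# One environment is determined by one draw of the mixture coefficients.
a_coef = rng.dirichlet(np.ones(m), size=(S, A))      # (S, A, m)
P = mixture_kernel(a_coef)
assert np.allclose(P.sum(axis=-1), 1.0)              # each row is a distribution
```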
## 4 Surrogate learning
Next, we define the discretized surrogate learning problem, and bound the size of the surrogate environment space, a significant term in the regret. To do so, we need to first define the Kolmogorov dimension of a set of parametrized distributions, especially working out the case of the \(l_{1}-\)distance. In the definitions below, we implicitly assume any required minimal measurability assumptions on the involved sets.
**Definition 6**.: Given a set \(\mathcal{F}\) of \(\mathcal{O}-\)parametrized distributions \(P:\mathcal{O}\rightarrow\Delta(\mathcal{S})\) over a set \(\mathcal{S}\) where both \(\mathcal{O},\mathcal{S}\) are measurable. Let \(\mathcal{M}(\cdot,\cdot):\mathcal{F}\times\mathcal{F}\rightarrow\mathbb{R}^{\geq 0}\) be a _distance_, i.e., \(\mathcal{M}(P,Q)\geq 0\) with equality if and only if \(P=Q\). Then the right \(\varepsilon-\)covering number \(K_{\mathcal{M}}(\varepsilon)\) is the size of the smallest set \(\{P_{1},\ldots,P_{K}\}\subseteq\mathcal{F}\) such that for any \(P\in\mathcal{F}\) there exists some \(P_{i}\) with \(\mathcal{M}(P,P_{i})\leq\varepsilon\), and the Kolmogorov \(\mathcal{M}-\)dimension \(d_{\mathcal{M}}\) of \(\mathcal{F}\) is

\[d_{\mathcal{M}}=\limsup_{\varepsilon\to 0}\frac{\log(K_{\mathcal{M}}(\varepsilon))}{\log(1/\varepsilon)}\,. \tag{8}\]

**Definition 7**.: Taking \(\mathcal{M}=l_{1}\) with \(l_{1}(P,Q)=\sup_{s,a}\int_{\mathcal{S}}|P(s^{\prime}|s,a)-Q(s^{\prime}|s,a)|\,\mathrm{d}\mu_{\mathcal{S}}(s^{\prime})\) (and analogously for rewards), let \(d_{l_{1},h}\) be the sum of the Kolmogorov \(l_{1}-\)dimensions of \(\Theta_{h}^{P}\) and \(\Theta_{h}^{R}\), and define \(d_{l_{1}}=\sum_{h=1}^{H}d_{l_{1},h}\).

**Definition 8**.: Given \(\varepsilon>0\), an \(\varepsilon-\)**value partition** of \(\Theta\) is a partition \(\Theta=\bigcup_{k=1}^{K}\Theta_{k}\) such that for any two environments \(\mathcal{E},\mathcal{E}^{\prime}\) in the same \(\Theta_{k}\),

\[V_{1,\pi_{\mathcal{E}}^{*}}^{\mathcal{E}}(s_{1})-V_{1,\pi_{\mathcal{E}}^{*}}^{\mathcal{E}^{\prime}}(s_{1})\leq\varepsilon\,. \tag{9}\]

Let \(K_{\mathrm{surr}}(\varepsilon)\) denote the size of the smallest \(\varepsilon-\)value partition, and define the surrogate dimension \(d_{\mathrm{surr}}=\limsup_{\varepsilon\to 0}\log(K_{\mathrm{surr}}(\varepsilon))/\log(1/\varepsilon)\).

**Lemma 1**.: _Given a Bayesian RL problem, any \(\varepsilon-\)covering of \(\Theta\) in the \(l_{1}-\)distances induces an \(O(\mathrm{poly}(H)\,\varepsilon)-\)value partition; in particular,_

\[d_{\mathrm{surr}}\leq d_{l_{1}}\,. \tag{10}\]
The above is proved in Appendix A. It is hard to find \(d_{\mathrm{surr}}\), but one can estimate \(d_{l_{1}}\), and according to the above, this acts as a proxy for \(K_{\mathrm{surr}}\). This is useful as the regret relates to \(K_{\mathrm{surr}}\). But to show this, we need to construct _surrogate environments_ inside each partition, and show that learning those is almost equivalent to the original problem. Let \(\zeta\) be a discrete random variable taking values in \(\{1,\cdots,K_{\mathrm{surr}}(\varepsilon)\}\) that indicates the partition \(\mathcal{E}\) lies in, such that \(\zeta=k\) if and only if \(\mathcal{E}\in\Theta_{k}\).
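To illustrate how \(d_{l_{1}}\) can be estimated in practice, the following sketch computes a greedy \(\varepsilon\)-net over a finite sample of tabular kernels; the greedy count is only a heuristic upper bound on the covering number, and all names are illustrative.

```python
import numpy as np

def l1_dist(P, Q):
    """sup over (s, a) of the l1 distance between the conditionals (tabular)."""
    return np.abs(P - Q).sum(axis=-1).max()

def greedy_cover(family, eps):
    """Greedy epsilon-net: an upper bound on the covering number K(eps)."""
    centers = []
    for P in family:
        if all(l1_dist(P, C) > eps for C in centers):
            centers.append(P)
    return len(centers)

rng = np.random.default_rng(2)
S, A = 4, 2
family = rng.dirichlet(np.ones(S), size=(500, S, A))   # sampled kernels
for eps in (0.5, 0.25, 0.125):
    K = greedy_cover(family, eps)
    print(eps, K, np.log(K) / np.log(1 / eps))          # crude dimension proxy
```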
**Lemma 2**.: _For any \(\varepsilon-\)value partition and any \(\ell\in[L]\), there are random environments \(\tilde{\mathcal{E}}^{*}_{\ell}\in\Theta\) with their laws only depending on \(\zeta,\mathcal{D}_{\ell}\), such that_
\[\mathbb{E}_{\ell}\left[V^{\mathcal{E}}_{1,\pi^{*}_{\mathcal{E}}}(s^{\ell}_{1})-V^{\mathcal{E}}_{1,\pi^{\ell}_{\text{TS}}}(s^{\ell}_{1})\right]-\mathbb{E}_{\ell}\left[V^{\tilde{\mathcal{E}}^{*}_{\ell}}_{1,\pi^{*}_{\mathcal{E}}}(s^{\ell}_{1})-V^{\tilde{\mathcal{E}}^{*}_{\ell}}_{1,\pi^{\ell}_{\text{TS}}}(s^{\ell}_{1})\right]\leq\varepsilon\,. \tag{11}\]
_The expectation in both terms is over \(\mathcal{E}\) and \(\pi^{\ell}_{\text{TS}}\in\{\pi^{*}_{\mathcal{E}^{\prime}}\}_{\mathcal{E}^{\prime}\in\Theta}\), with both sampled independently \(\sim\mathbb{P}_{\ell}(\cdot)\), and over the \(K_{\mathrm{surr}}(\varepsilon)\) different values of \(\tilde{\mathcal{E}}^{*}_{\ell}\). The second expectation over \((\tilde{\mathcal{E}}^{*}_{\ell},\mathcal{E})\) is over pairs that are in the same partition, i.e., \(\tilde{\mathcal{E}}^{*}_{\ell},\mathcal{E}\) are independent only after conditioning on \(\zeta\)._
We note that the proof in (Hao and Lattimore, 2022, App. B.1) contains the use of a lemma that does not apply to construct the law of the environment \(\tilde{\mathcal{E}}^{*}_{\ell}\). More details are provided in Appendix B, where we find \(\tilde{\mathcal{E}}^{*}_{\ell}\) by minimizing an expected value of \(\pi^{\ell}_{\text{TS}}\).
## 5 Bayesian regret bounds for Thompson Sampling
### General Bayesian regret bound
We start by introducing the notion of value diameter.
**Definition 9**.: Given the environment \(\mathcal{E}\), its value diameter is defined as
\[\lambda_{\mathcal{E}}:=\max_{1\leq h\leq H}(\sup_{s}V^{\mathcal{E}}_{h,\pi^{ \prime}_{\mathcal{E}}}(s)-\inf_{s}V^{\mathcal{E}}_{h,\pi^{\prime}_{\mathcal{E }}}(s))+\max_{1\leq h\leq H,s\in\mathcal{S},a\in\mathcal{A}}(r^{\sup}_{h}(s,a )-r^{\inf}_{h}(s,a)),\]
where \(r^{\sup}_{h}(s,a)\) (and \(r^{\inf}_{h}(s,a)\)) is the supremum (and infimum) of the set of rewards that are attainable under the distribution \(r_{h}(s,a)\) with non-zero probability. As a special case, if rewards are deterministic, then we have \(r^{\sup}_{h}(s,a)=r^{\inf}_{h}(s,a)\) for all \(s,a\). The (average) value diameter over \(\Theta\) is denoted by \(\lambda:=\mathbb{E}_{\mathcal{E}\sim\rho}[\lambda^{2}_{\mathcal{E}}]^{1/2}\).
As the value function is between \(0\) and \(H\), we have \(\lambda_{\mathcal{E}}\leq H+1\) implying \(\lambda\leq H+1\). Note that value diameter is closely related to the notion of diameter commonly defined in finite RL problems. Strictly speaking, for a time-homogeneous RL, it is straightforward to see that the value diameter is bounded from above by one plus the diameter Puterman (2014). To our knowledge, the notion of diameter has not been defined in the time-inhomogeneous setting.
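In the tabular case, \(\lambda_{\mathcal{E}}\) is directly computable by backward induction; the following is a minimal sketch, restricted for simplicity to deterministic rewards so that the reward-range term in Definition 9 vanishes.

```python
import numpy as np

def value_diameter(P, r):
    """max over h of (sup_s V_h - inf_s V_h) for the optimal policy,
    assuming deterministic rewards r[h][s, a] (reward-range term is zero)."""
    H, S, A, _ = P.shape
    V = np.zeros(S)
    spread = 0.0
    for h in reversed(range(H)):
        Q = r[h] + P[h] @ V
        V = Q.max(axis=1)                        # optimal value at layer h
        spread = max(spread, V.max() - V.min())
    return spread

rng = np.random.default_rng(3)
H, S, A = 5, 4, 2
P = rng.dirichlet(np.ones(S), size=(H, S, A))
r = rng.uniform(size=(H, S, A))
print(value_diameter(P, r), "<=", H + 1)         # lambda_E <= H + 1
```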
We now discuss the assumptions surrounding our results. The main technical assumption of this paper is the existence of consistent estimators, which as we will see in Appendix J, is closely related to the notion of posterior consistency:
**Assumption 1**.: _There exists a strongly consistent estimator of the true environment given the history._
Roughly speaking, we assume that with unlimited observations under TS, it is possible to find the true environment. For this assumption to fail, we need to have two environments that produce the same distribution over histories under TS and are therefore indistinguishable from the point of view of TS. The precise description of this assumption is detailed in Appendix J.
Another necessary technical assumption is that almost all optimal policies visit almost all state action pairs in their respective environment.
**Assumption 2**.: _For almost every environment \(\mathcal{E}\in\Theta\) and almost every \((s,a)\in\mathcal{S}\times\mathcal{A}\) and every \(h\in[H]\), we have_
\[d^{\mathcal{E}}_{h,\pi^{*}_{\mathcal{E}}}(s,a)\neq 0.\]
Recall that, for any environment \(\mathcal{E}\in\Theta\), the policy \(\pi^{*}_{\mathcal{E}}\) is the optimal policy of \(\mathcal{E}\) within the policy class \(\Pi\). Therefore, one example of how the above assumption holds is when \(\Pi\) is the set of \(\varepsilon\)-greedy algorithms and transition functions of environments assign non-zero probability to every state. Under these assumptions, we discuss our main result and its corollaries.
**Theorem 3**.: _Given a Bayesian RL problem, for all \(\varepsilon>0\), we have_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq 2\lambda\sqrt{\log(K_{\text{ surr}}(\varepsilon))T}+L\varepsilon+T_{0} \tag{12}\]
_where \(T_{0}\) does not depend on \(T\). This can be further upper bounded by_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}(\lambda\sqrt{d_ {l_{1}}T})\,. \tag{13}\]
_for large enough \(T\). Given a homogeneous \(l_{1}\) dimension \(d_{\mathrm{hom}}=d_{l_{1},h},\forall h\), this simplifies to_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}(\lambda\sqrt{ Hd_{\mathrm{hom}}T})\,. \tag{14}\]
_Remark 4_.: For all regret bounds, we use \(\lambda\leq H+1\) when comparing our results to prior work. For the case of homogeneous dimensions, we obtain \(\widetilde{O}(H^{3/2}\sqrt{d_{\mathrm{hom}}T})\). Crucially, our main result provides a new conceptual understanding of the information ratio by bounding it by two terms of different nature: \(H\) and \(\lambda\), where the latter can be bounded by either the largest diameter of the environments or \(H\).
_Remark 5_.: Despite not impacting the asymptotics, the impact of \(T_{0}\) can be large depending on the structure of the RL problem, and could be dominant even for large \(T\)s in practice.
_Remark 6_.: Considering time as a part of the state observation, one could apply this regret analysis to particular time-homogeneous settings. However, this mapping of time-inhomogeneous RLs to homogeneous ones is not surjective, hence the result above does not readily extend to time-homogeneous settings.
While Fan and Ming (2021) were the first to consider a nonlinear Bayesian RL model, their bound is limited to the Gaussian process (with linear kernel) setting, while ours holds in the nonlinear time-inhomogeneous setting, makes no assumptions on the prior, and is the first such bound. Our novel analysis allows us to upper bound the information ratio by \(\lambda\sqrt{H}\) instead of, for example, \(H^{3/2}\sqrt{SA}\) (Hao and Lattimore (2022)) in the tabular case, improving the regret bound by a square-root factor in the dimension \(d\) of the problem.
The detailed proof is given in Appendix C. Following Hao and Lattimore (2022), the regret (4) is rewritten using Lemma 2 to reduce the problem into its surrogate, and we use the well-known information-ratio trick by multiplying and dividing by the mutual information. We follow that with a Cauchy-Schwarz, summarized below
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}}) \leq\mathbb{E}\left[\sum_{\ell=1}^{L}\frac{\mathbb{E}_{\ell}\left[V_{1,\pi^{*}_{\mathcal{E}}}^{\tilde{\mathcal{E}}^{*}_{\ell}}(s_{1}^{\ell})-V_{1,\pi^{\ell}_{\text{TS}}}^{\tilde{\mathcal{E}}^{*}_{\ell}}(s_{1}^{\ell})\right]}{\sqrt{\mathbb{I}_{\ell}^{\pi^{\ell}_{\text{TS}}}(\tilde{\mathcal{E}}^{*}_{\ell};\mathcal{H}_{\ell,H})}}\sqrt{\mathbb{I}_{\ell}^{\pi^{\ell}_{\text{TS}}}(\tilde{\mathcal{E}}^{*}_{\ell};\mathcal{H}_{\ell,H})}\right]+L\varepsilon \tag{15}\] \[\leq\sqrt{\mathbb{E}\left[\sum_{\ell=1}^{L}\frac{\left(\mathbb{E}_{\ell}\left[V_{1,\pi^{*}_{\mathcal{E}}}^{\tilde{\mathcal{E}}^{*}_{\ell}}(s_{1}^{\ell})-V_{1,\pi^{\ell}_{\text{TS}}}^{\tilde{\mathcal{E}}^{*}_{\ell}}(s_{1}^{\ell})\right]\right)^{2}}{\mathbb{I}_{\ell}^{\pi^{\ell}_{\text{TS}}}(\tilde{\mathcal{E}}^{*}_{\ell};\mathcal{H}_{\ell,H})}\right]\,\mathbb{E}\left[\sum_{\ell=1}^{L}\mathbb{I}_{\ell}^{\pi^{\ell}_{\text{TS}}}(\tilde{\mathcal{E}}^{*}_{\ell};\mathcal{H}_{\ell,H})\right]}+L\varepsilon \tag{16}\]
Note the cost \(\varepsilon\) at each episode (Lemma 2) in the first inequality, yielding the overall error \(L\varepsilon\). Then, we can bound the mutual information appearing in the regret term by \(\mathbb{E}\left[\sum_{\ell=1}^{L}\mathbb{I}_{\ell}^{\pi^{\ell}_{\text{TS}}}(\tilde{\mathcal{E}}^{*}_{\ell};\mathcal{H}_{\ell,H})\right]\leq\mathbb{E}\left[\sum_{\ell=1}^{L}\mathbb{I}_{\ell}^{\pi^{\ell}_{\text{TS}}}(\zeta;\mathcal{H}_{\ell,H})\right]\leq\log(K_{\text{surr}}(\varepsilon))\), where we used the mutual information chain rule, followed by the data processing inequality to substitute \(\tilde{\mathcal{E}}^{*}_{\ell}\to\zeta\), and finally the trivial bound of the mutual information by the entropy of \(\zeta\). But the main novelty of our approach lies in our control of the first term
\[\Gamma_{\ell}(\pi^{\ell}_{\text{TS}}):=\frac{\left(\mathbb{E}_{\ell}\left[V_{1,\pi^{*}_{\mathcal{E}}}^{\tilde{\mathcal{E}}^{*}_{\ell}}(s_{1}^{\ell})-V_{1,\pi^{\ell}_{\text{TS}}}^{\tilde{\mathcal{E}}^{*}_{\ell}}(s_{1}^{\ell})\right]\right)^{2}}{\mathbb{I}_{\ell}^{\pi^{\ell}_{\text{TS}}}(\tilde{\mathcal{E}}^{*}_{\ell};\mathcal{H}_{\ell,H})} \tag{17}\]
called the information ratio. In our analysis, we have the following bound on its expectation.
\[\mathbb{E}[\Gamma_{\ell}(\pi_{\text{TS}}^{\ell})\mid\mathcal{E}_{0}]\leq\mathbb{E}\left[\sum_{h}\int\frac{\mathbb{E}_{\ell}\left[(\lambda_{\mathcal{E}}d_{h,\pi^{*}}^{\mathcal{E}}(s,a))^{2}\right]}{\mathbb{E}_{\ell}\left[d_{h,\pi^{*}}^{\mathcal{E}}(s,a)\right]}\,\mathrm{d}\mu_{\mathcal{S}\times\mathcal{A}}\;\middle|\;\mathcal{E}_{0}\right],\]
where the average is taken over all histories \(\mathcal{D}_{\ell}\) that are generated from running TS on the true environment \(\mathcal{E}_{0}\), and we have introduced the smaller term \(\lambda_{\mathcal{E}}\) instead of \(H\) in Hao and Lattimore (2022). While Hao and Lattimore (2022) essentially bound the above only in the tabular setting with \(SAH^{3}\), we manage to generally bound the above with a more precise bound using Doob's consistency theorem. Assumption 1 allows us to use Doob's consistency theorem to conclude that for almost every environment \(\mathcal{E}_{0}\), almost every infinite sequence of histories \((\mathcal{D}_{\ell})_{\ell=1}^{\infty}\) sampled from \(\mathcal{E}_{0}\), and every integrable function \(f\), the posterior mean \(\mathbb{E}_{\ell}[f(\mathcal{E})]=\mathbb{E}[f(\mathcal{E})\mid\mathcal{D}_{\ell}]\) converges to \(f(\mathcal{E}_{0})\). In particular, we conclude that \(\mathbb{E}[\Gamma_{\ell}(\pi_{\text{TS}}^{\ell})\mid\mathcal{E}_{0}]\) tends to \(\lambda_{\mathcal{E}_{0}}^{2}H\) in the limit, allowing us to claim that for large enough \(\ell\), the expected information ratio \(\mathbb{E}[\Gamma_{\ell}(\pi_{\text{TS}}^{\ell})]\) is uniformly bounded by \(2\mathbb{E}[\lambda_{\mathcal{E}}^{2}]H=2\lambda^{2}H\). As there are \(L\) many such ratios, the two bounds together yield \(2\sqrt{\lambda^{2}HL}\cdot\sqrt{\log(K_{\text{surr}}(\varepsilon))}+L\varepsilon\). This bound is true for large enough \(\ell\), giving the additional additive term \(T_{0}\) in the theorem. Since this term is additive, applying Lemma 1 to bound \(\log(K_{\text{surr}}(\varepsilon))\), we have successfully shown that the asymptotic behavior of the regret, independent of the prior, is of order \(\widetilde{O}(\lambda\sqrt{d_{l_{1}}T})\).
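The Cauchy–Schwarz step from Eq. (15) to Eq. (16) is elementary and easy to sanity check numerically on arbitrary positive surrogates; the toy numbers below are of course not drawn from any actual RL instance.

```python
import numpy as np

rng = np.random.default_rng(4)
L = 1000
delta = rng.uniform(0.0, 1.0, size=L)    # per-episode surrogate regrets
info = rng.uniform(0.1, 2.0, size=L)     # per-episode mutual informations

lhs = delta.sum()                        # sum_l delta_l = sum (d/sqrt(I)) sqrt(I)
rhs = np.sqrt((delta**2 / info).sum() * info.sum())
assert lhs <= rhs + 1e-9                 # Cauchy-Schwarz, as in Eq. (16)
print(lhs, rhs)
```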
### Applications
In each application below, the challenge is to bound \(d_{l_{1}}\) using the specifics of the model, and except for the case of tabular Bayesian RL, such analysis has not been carried out rigorously. We formalize the corollaries and show they are state-of-the-art compared to the literature.
Tabular RL. The result below follows from Theorem 3; the main contribution comes from our new information ratio bound, followed by the estimate \(\widetilde{O}((\frac{1}{\varepsilon})^{SAH})\) of \(K_{\text{surr}}(\varepsilon)\) (Hao and Lattimore (2022)).
**Corollary 4**.: _Given a tabular Bayesian RL problem, for large enough \(T\),_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}(\lambda\sqrt{ HSAT})\,, \tag{18}\]
_where the polylogarithmic terms are explicitly in terms of \(H,S,A,L\)._
We observe that our result matches Osband and Van Roy (2017) when their result in the time homogeneous setting (Definition 2) is extended to time inhomogeneous. However, in that paper, the authors assume a Dirichlet based prior which we do not.
Linear RL. A previous state-of-the-art \(\widetilde{O}(d_{f}H^{3/2}\sqrt{T})\) was claimed by Hao and Lattimore (2022) to hold for linear Bayesian RLs with deterministic reward. We note:
* As in the previous cases, their proof in bounding their information ratio includes a factor of \(d_{f}\), which ours avoids.
* We show that the proof bounding \(K_{\text{surr}}(\varepsilon)\) in (Hao and Lattimore, 2022, App. B.4) is incorrect, starting with a wrong application of Cauchy-Schwarz and a wrong mutual information in their definition of information ratio. We provide counterexamples for the estimates found therein to substantiate our claim (see Appendix F.1).
To state our own corollary in this case, we need to define a few notions. Let \(d_{l_{1}}^{f}=d_{l_{1}}^{P,f}+d_{l_{1}}^{R,f}\) be the sum of the \(l_{1}-\)dimensions of the feature map spaces \(\{\psi_{h}^{P,\mathcal{E}}\}_{\mathcal{E}\in\Theta},\{\psi_{h}^{R,\mathcal{E}}\}_{\mathcal{E}\in\Theta}\), where the \(l_{1}-\)distance between feature maps is defined as \(l_{1}(\psi_{h}^{\mathcal{E}},\psi_{h}^{\mathcal{E}^{\prime}})=\int_{\mathcal{S}}\|\psi_{h}^{\mathcal{E}}-\psi_{h}^{\mathcal{E}^{\prime}}\|_{1}\,\mathrm{d}\mu_{\mathcal{S}}\). Our corollary also provides a concrete bound in the case of _mixture_ linear Bayesian RL where the feature maps are themselves a sum of finitely many **fixed** feature maps. This means for all \(\mathcal{E}\in\Theta\), we have
\[\psi_{h}^{P,\mathcal{E}}=\sum_{i=1}^{m_{h}^{P}}a_{h,i}^{P,\mathcal{E}}\Psi_{h, i}^{P}(s),\;\;\psi_{h}^{R,\mathcal{E}}=\sum_{i=1}^{m_{h}^{R}}a_{h,i}^{R, \mathcal{E}}\Psi_{h,i}^{R}(s) \tag{19}\]
where \(\{\Psi_{h,i}^{P}(s)\}_{i=1}^{m_{h}^{P}},\{\Psi_{h,i}^{R}(s)\}_{i=1}^{m_{h}^{R}}\) are finitely many fixed feature maps and \(\forall\mathcal{E},h:\sum_{i}|a_{h,i}^{P,\mathcal{E}}|^{2},\sum_{i}|a_{h,i}^{R, \mathcal{E}}|^{2}\leq C_{a}\) for some constant \(C_{a}>0\). Let \(M=M^{P}+M^{R}=\sum_{h}m_{h}^{P}+\sum_{h}m_{h}^{R}\).
**Corollary 5**.: _For a linear Bayesian RL, for large enough \(T\),_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}(\lambda\sqrt{d_ {l_{1}}^{f}T}). \tag{20}\]
_Given a linear Bayesian RL with finitely many states and total feature space dimension \(d_{f}=d_{f}^{P}+d_{f}^{R}\), we have \(d_{l_{1}}\leq 2d_{f}HS\), yielding for large enough \(T\),_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}(\lambda\sqrt{Hd_ {f}ST}). \tag{21}\]
_Given a mixture linear Bayesian RL, for large enough \(T\),_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}(\lambda\sqrt{ MT})\,, \tag{22}\]
The proof is given in Appendix F. The fact that \(d_{l_{1}}\) appears instead of \(d_{f}\) in the general bound is not counter-intuitive, as we should expect the complexity of the feature map space \(\{\psi_{h}^{P,\mathcal{E}}(s)\}_{\mathcal{E}\in\Theta,h\in[H]},\{\psi_{h}^{R, \mathcal{E}}(s)\}_{\mathcal{E}\in\Theta,h\in[H]}\) to play a role in the regret, especially as this space can be very complex, and model very different environments that can not be grouped in the same \(\varepsilon-\)value partition.
Therefore, contrary to the claim made by Hao and Lattimore (2022), this complexity cannot be captured simply by \(d_{f}\), except perhaps in degenerate cases, such as when \(\mathcal{S}\) is finite, which is our second statement. More generally, if each feature map \(\psi_{h}^{P,\mathcal{E}}(s),\psi_{h}^{R,\mathcal{E}}(s)\) can be characterized by a vector of uniformly bounded norm \(\mathbf{a}_{h}^{P,\mathcal{E}}\in\mathbb{R}^{m_{h}^{P}},\mathbf{a}_{h}^{R,\mathcal{E}}\in\mathbb{R}^{m_{h}^{R}}\), then we can bound the regret in terms of the \(m_{h}^{P},m_{h}^{R}\)'s, as is done in Eq. (22) (the finite state case corresponds to \(m_{h}^{P}=d_{f}^{P}S,m_{h}^{R}=d_{f}^{R}S\)).
Finite mixtures RL. To state our finite mixtures model result, we need to set the following notations. Let \(d_{l_{1}}^{m}=d_{l_{1}}^{m,P}+d_{l_{1}}^{m,R}=\sum_{h}d_{l_{1},h}^{m,P}+\sum_{h}d_{l_{1},h}^{m,R}\) correspond to the total \(l_{1}-\)dimension of the space of mixture coefficient maps \(\{\mathbf{a}_{h}^{P,\mathcal{E}}(s,a)\}_{\mathcal{E}\in\Theta},\{\mathbf{a}_{h}^{R,\mathcal{E}}(s,a)\}_{\mathcal{E}\in\Theta}\), with the \(l_{1}-\)distance defined as \(l_{1}(\mathbf{a}_{h}^{\mathcal{E}},\mathbf{a}_{h}^{\mathcal{E}^{\prime}})=\sup_{s,a}\|\mathbf{a}_{h}^{\mathcal{E}}(s,a)-\mathbf{a}_{h}^{\mathcal{E}^{\prime}}(s,a)\|_{1}\). Define also the restricted finite mixtures model where \(\mathbf{a}_{h}^{P,\mathcal{E}},\mathbf{a}_{h}^{R,\mathcal{E}}\) are vectors in \(\mathbb{R}^{m_{h}^{P}},\mathbb{R}^{m_{h}^{R}}\) independent of \((s,a)\), and let \(M=M^{P}+M^{R}=\sum_{h}m_{h}^{P}+\sum_{h}m_{h}^{R}\).
**Corollary 6**.: _Given a finite mixtures Bayesian RL problem, for large enough \(T\),_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}(\lambda\sqrt{d _{l_{1}}^{m}T})\,. \tag{23}\]
_Assuming the restricted finite mixtures model, for large enough \(T\),_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq\widetilde{O}\left(\lambda \sqrt{MT}\right)\,. \tag{24}\]
_which, given a uniform dimension \(m=m_{h}^{P}=m_{h}^{R}\), yields \(\widetilde{O}(\lambda\sqrt{HmT})\)._
We prove the above in Appendix G, deriving it from our generic bound, after relating the \(l_{1}-\)dimension \(d_{l_{1}}\) of the environment space to that of the mixtures coefficients. To the best of our knowledge, this is the first bound for finite mixtures Bayesian RL problems. We note that in a previous work (Ayoub et al. (2020)), a restricted version of finite mixtures, like in Eq. (24), was considered in the frequentist setting.
We finish this section by proposing the following conjecture, in line with (Osband and Van Roy, 2017, Conj. 1).
**Conjecture 7**.: _For the Bayesian RL, the following is true and optimal for **all**\(T\):_
\[\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})\leq O\left(\inf_{\varepsilon>0}( \sqrt{H\log(K_{\mathrm{surr}}(\varepsilon))T}+L\varepsilon)\right)\,. \tag{25}\]
_where the constant factor is independent of the prior. This means there exists a Bayesian RL problem such that \(\mathfrak{B}\mathfrak{R}_{L}(\pi_{\text{TS}})=\widetilde{\Omega}(\sqrt{Hd_{ \mathrm{surr}}T})\). All polylogarithmic terms are in terms of \(H,d_{\mathrm{surr}},T\)._
Note that the above coincides with the lower bound for the (model-based) time-inhomogeneous frequentist setting; see e.g., Jin et al. (2018) for the proven lower bound for the tabular case. This is also \(\sqrt{H}\) higher (this factor being baked into \(d_{\mathrm{surr}}\)) than that of the time-homogeneous frequentist setting, which is expected, according to (Jin et al., 2018, App. D). Note that in this conjecture, the \(\lambda\) in our bound is replaced by \(\sqrt{H}\), and the conjecture is not for \(T\) large enough, but for all \(T\).
Supporting this conjecture requires experiments where TS can be exactly implemented, assuming access to an oracle which provides the optimal policy for a queried environment. Simulations have been performed for the similar conjecture (Osband and Van Roy, 2017, Conj. 1) in the time-homogeneous case. Our conjecture is similar but with the additional expected factor of \(\sqrt{H}\) due to time inhomogeneity; thus their simulations also support the above.
## 6 Conclusions
In this paper, we have addressed the Bayesian Reinforcement Learning (RL) problem in the context of time inhomogeneous transition and reward functions. By considering both Bayesian transition and Bayesian rewards without prior assumptions, we have extended the scope of previous works, making our formulation more comprehensive. To simplify the learning problem, we have introduced surrogate environments, which discretize the environment space. We have established a connection between the size of this new environment space and the \(l_{1}\)-dimensions of the transition and reward functions space, providing insights into the \(l_{1}\)-dimension of the environment space denoted by \(d_{l_{1}}\). We have employed posterior consistency tools to analyze the information ratio, which captures the trade-off between exploration and exploitation. We conjecture that (at least a weakened version of) our posterior consistency assumption should hold in general, which is left for future work. Our analysis has resulted in a refined approach to estimate the Bayesian regret in Thompson Sampling (TS), yielding a regret bound of \(\widetilde{O}(\lambda\sqrt{d_{l_{1}}T})\) for large enough time steps \(T\). The result is specialized to linear, tabular, and finite mixtures MDPs.
**Limitations:** While the paper provides an asymptotic generic regret bound for TS in a generalized setup, improving on state-of-the-art results, finding lower bounds, especially ones dependent on \(\lambda\), is left open. In addition, the issue of prior misspecification is not discussed and is left for future studies.
2303.06846 | Improved quantum error correction with randomized compiling | Current hardware for quantum computing suffers from high levels of noise, and
so to achieve practical fault-tolerant quantum computing will require powerful
and efficient methods to correct for errors in quantum circuits. Here, we
explore the role and effectiveness of using noise tailoring techniques to
improve the performance of error correcting codes. Noise tailoring methods such
as randomized compiling (RC) convert complex coherent noise processes to
effective stochastic noise. While it is known that this can be leveraged to
design efficient diagnostic tools, we explore its impact on the performance of
error correcting codes. Of particular interest is the important class of
coherent errors, arising from control errors, where RC has the maximum effect
-- converting these into purely stochastic errors. For these errors, we show
here that RC delivers an improvement in performance of the concatenated Steane
code by several orders of magnitude. We also show that below a threshold
rotation angle, the gains in logical fidelity can be arbitrarily magnified by
increasing the size of the codes. These results suggest that using randomized
compiling can lead to a significant reduction in the resource overhead required
to achieve fault tolerance. | Aditya Jain, Pavithran Iyer, Stephen D. Bartlett, Joseph Emerson | 2023-03-13T04:24:24Z | http://arxiv.org/abs/2303.06846v1 | # Improved quantum error correction with randomized compiling
###### Abstract
Current hardware for quantum computing suffers from high levels of noise, and so to achieve practical fault-tolerant quantum computing will require powerful and efficient methods to correct for errors in quantum circuits. Here, we explore the role and effectiveness of using noise tailoring techniques to improve the performance of error correcting codes. Noise tailoring methods such as randomized compiling (RC) convert complex coherent noise processes to effective stochastic noise. While it is known that this can be leveraged to design efficient diagnostic tools, we explore its impact on the performance of error correcting codes. Of particular interest is the important class of coherent errors, arising from control errors, where RC has the maximum effect - converting these into purely stochastic errors. For these errors, we show here that RC delivers an improvement in performance of the concatenated Steane code by several orders of magnitude. We also show that below a threshold rotation angle, the gains in logical fidelity can be arbitrarily magnified by increasing the size of the codes. These results suggest that using randomized compiling can lead to a significant reduction in the resource overhead required to achieve fault tolerance.
## I Introduction
Noise is pervasive in present-day quantum computation. The theory of fault tolerance was developed to guarantee reliable computations in the presence of noise. However, fault-tolerant constructions demand a large overhead in terms of the additional resources required to encode a logical computation in a way that is resilient to errors. Achieving the logical error rates required by various applications with a limited number of physical qubits is a challenging task. Along with designing better error correcting codes, decoders, and higher-quality hardware components of a quantum computer, there are other ways of reducing logical error rates. Active noise tailoring by randomized compiling (RC) [1] is a potential candidate for two key reasons. First, RC significantly simplifies the form of the noise on the encoded quantum information. Second, RC can be used to transform an unknown error model into one that is adapted to the error correction capabilities of a particular code.
Randomized compiling tools were leveraged to accurately predict the performance of quantum error correction schemes in Ref. [2]. Although simplifying the form of the noise makes the performance more predictable, it was observed that RC can sometimes degrade the performance of an error correcting code. We can understand this effect by using the \(\chi\)-representation [3] of a physical noise process. In this representation, the action of noise on a quantum state \(\rho\) is given by \(\mathcal{E}(\rho)=\sum_{i,j}\chi_{i,j}P_{i}\rho P_{j}\), where \(P_{i}\) denote Pauli matrices in the \(n-\)qubit Pauli group \(\mathcal{P}_{n}\) without phases, i.e., \(P_{i}\in\mathcal{P}_{n}/\{\pm 1,\pm i\}\). Noise tailoring methods such as RC can transform the elements of the \(\chi\)-matrix, for example by removing the off-diagonal elements \(\chi_{i,j}\)\(\forall\)\(i\neq j\). This mathematical transformation is commonly referred to as twirling [4, 5, 6]. If one were to remove the contribution of \(\chi_{i,j}\) corresponding to Pauli errors that are correctable by the decoder, this could have a negative impact on the code's performance. In general, noise tailoring methods are oblivious to the details of which error terms are relevant for quantum error correction.
The impact of twirling the noise on the performance of error correction schemes has been explored in the literature under various settings. The performance of surface codes under coherent and incoherent error models has been compared in Ref. [7], and using numerical studies it was noted that while the threshold is similar in both cases, the subthreshold performance of the twirled channel is significantly better than that of the original coherent error model. In another setting, analytical calculations of the logical error rate of repetition codes under rotation errors reveal that coherent errors can accumulate faster, leading to worse logical error rates than their corresponding Pauli approximations [8]. The necessity of active coherence-suppression methods for codes with large distances was also noted, but their impact on the code's performance was not explored. For the Toric code under coherent error models, a laborious analysis has shown that the effective logical channel approaches an incoherent channel provided the noise decreases with increasing code size [9]. However, in the scenario where the error rate remains constant independent of the code size, there are several challenges to arriving at a similar conclusion. In Ref. [10], the poor predictability of the logical error rate and the code's pseudo-threshold under coherent errors by their twirled counterparts was identified, reinforcing the need for active noise tailoring.
The impact of twirling the noise for complex error models, such as combinations of stochastic errors and rotations around an arbitrary non-Pauli axis, is unknown. The scaling of the potential gains from twirling with increased code-concatenation levels remains unexplored.
In this paper, we analyze the impact of RC on the performance of quantum error correction. In particular, we show that RC improves the performance of a concatenated Steane code under a coherent noise model (specifically, a tensor product of arbitrary identical unitary errors). This positive result demonstrates that RC tools can play a key role in achieving fault tolerance. We present a detailed study of the performance gains with respect to changes in the axis of rotation and the number of levels of concatenation. We identify a special axis of rotation for a given concatenation level where maximum gains from RC are achieved. We note that this axis can be different from the axes of rotation for which the best pseudo-threshold for the code is achieved. It has been observed, in previous studies, that randomized compiling can also degrade logical performance [11]. Our study shows that a wide class of physically motivated error models does not exhibit such behaviour. However, we identify some complex noise models where such degradation can occur and provide numerical results for the same.
The paper is structured as follows. In section II, we introduce the necessary background material including noise processes, randomized compiling and quantum error correction. Section III discusses the methods used to study the impact of randomized compiling on the logical performance. In section IV, we present analytical studies for gains offered by randomized compiling using realistic error models. Finally, in section V we provide concluding remarks and describe some interesting open problems.
## II Background
In this section, we review the mathematical description of noise processes in quantum circuits as well as the formalism of stabilizer quantum error correction.
### Noise in quantum circuits
The interaction of a quantum system with its environment manifests as errors on the stored quantum information. While the system and its environment together undergo unitary time evolution, the system's reduced dynamics is often a non-unitary map. Markovian noise processes are described by completely positive trace preserving (CPTP) maps \(\mathcal{E}:\rho\mapsto\mathcal{E}(\rho)\). One of the common ways to represent a CPTP map is using the \(\chi\)-matrix \(\chi(\mathcal{E})\), a \(4^{n}\times 4^{n}\) matrix for an \(n\)-qubit channel, where \(\mathcal{E}(\rho)=\sum_{i,j}\chi_{i,j}P_{i}\rho P_{j}\) and \(P_{i},P_{j}\) are Pauli matrices.
A special subclass of noise processes that is widely analyzed in developing fault-tolerant protocols is Pauli channels. They correspond to the probabilistic action of Pauli matrices on the input state, i.e., \(\mathcal{E}(\rho)=\sum_{i}\chi_{i,i}P_{i}\rho P_{i}\), where \(\chi_{i,i}\) can be interpreted as the probability of the Pauli error \(P_{i}\).
While it is easy to study quantum error correction on Pauli error models, unfortunately realistic noise is often poorly approximated by Pauli error models. This causes a severe disparity between error models that can be accurately analyzed in theory and those that occur in experiments. Noise tailoring, achieved through Randomized compiling [1], is a promising tool that helps resolve this disparity. With RC, the average logical performance of a QEC scheme over several compilations with random Pauli gates can be well approximated by the performance of the QEC scheme under an effective Pauli error model. The effective Pauli error model is nothing but the Pauli twirl of the underlying CPTP noise process \(\mathcal{E}\), denoted by \(\mathcal{T}(\mathcal{E})\) defined as
\[\mathcal{T}(\mathcal{E})(\rho)=\frac{1}{4^{n}}\sum_{P\in\mathcal{P}_{n}}P\,\mathcal{E}(P\rho P)\,P\;. \tag{1}\]
We will use the notation \(\mathcal{E}^{T}\) to denote the Pauli Twirl of the CPTP map \(\mathcal{E}\): \(\mathcal{T}(\mathcal{E})\).
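For a single qubit, the twirl in Eq. (1) acts on the \(\chi\)-matrix simply by deleting its off-diagonal entries. The sketch below builds the \(\chi\)-matrix of a unitary error and twirls it; the specific over-rotation used is an illustrative choice of ours.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
paulis = [I2, X, Y, Z]

def chi_of_unitary(U):
    """chi-matrix of rho -> U rho U^dag: writing U = sum_i c_i P_i
    gives chi_{ij} = c_i c_j^*."""
    c = np.array([np.trace(P.conj().T @ U) / 2 for P in paulis])
    return np.outer(c, c.conj())

def pauli_twirl_chi(chi):
    """Pauli twirl in the chi-representation: only the diagonal survives."""
    return np.diag(np.diag(chi))

theta = 0.2                                   # a small over-rotation about X
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X
chi_T = pauli_twirl_chi(chi_of_unitary(U))
print(np.real_if_close(np.diag(chi_T)))       # [cos^2, sin^2, 0, 0] of theta/2
```

The twirled diagonal exhibits the coherent error turned into a stochastic \(X\) error of probability \(\sin^{2}(\theta/2)\).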
### Quantum Error Correction
An \([[n,k]]\) stabilizer code \(\mathcal{C}\) is a \(2^{k}\) dimensional space defined as: \(\mathcal{C}=\{\left|\psi\right\rangle\,:\,S_{i}|\psi\rangle=\left|\psi\right\rangle,\,1\leq i\leq n-k\}\), where \(S_{i}\) are stabilizer generators. See Ref. [12] for an introduction to stabilizer codes and fault tolerance. Concatenated codes are a family of codes where we encode the physical qubits at level \(\ell\) using the code at level \(\ell-1\). This is a way of constructing larger codes from smaller ones and these codes are typically used to guarantee error suppression in fault tolerance proofs [13; 14].
Measuring stabilizer generators yields a signature of the error that occurred called a syndrome. Inferring the error from the syndrome is called decoding. There are several ways to define a decoder, the simplest of which is the minimum weight decoder. It selects a Pauli error of minimum Hamming weight consistent with the observed syndrome. While some errors on the encoded states can be undone by quantum error correction, there are uncorrectable errors that cause unwanted logical operations on the encoded states under a quantum error correction routine. These uncorrectable errors determine the logical error rate. A valuable tool to define the logical error rate is the effective channel, which encapsulates the effect of a physical noise process and a quantum error correction protocol on the encoded quantum information.
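Before turning to the effective channel, we note that for small codes the minimum weight decoder just described can be realized as a syndrome lookup table. The sketch below builds such a table for bit-flip errors detected by the \(Z\)-type checks of the Steane code (those of the classical [7,4,3] Hamming code); restricting to \(X\)-type errors is a simplification for illustration, and the full decoder treats \(Z\) errors symmetrically.

```python
import numpy as np
from itertools import combinations

def lookup_decoder(Hmat):
    """Map each syndrome to a minimum-weight binary (bit-flip) error."""
    m, n = Hmat.shape
    table = {(0,) * m: np.zeros(n, dtype=int)}
    for w in range(1, n + 1):
        for supp in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(supp)] = 1
            table.setdefault(tuple(Hmat @ e % 2), e)  # lowest weight wins
        if len(table) == 2 ** m:
            break
    return table

H_steane = np.array([[0, 0, 0, 1, 1, 1, 1],
                     [0, 1, 1, 0, 0, 1, 1],
                     [1, 0, 1, 0, 1, 0, 1]])
dec = lookup_decoder(H_steane)
e = np.array([0, 0, 1, 0, 0, 0, 0])               # single bit flip on qubit 2
assert (dec[tuple(H_steane @ e % 2)] == e).all()  # weight-1 errors corrected
```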
Besides the error-correcting code and the underlying physical noise process, the effective channel is a function of the measured error syndrome \(s\). We will use the notation \(\mathcal{E}_{1}^{s}\) to denote the effective channel where the subscript "1" refers to one encoding level. The relevance of the subscript becomes crucial for concatenated codes [12],
where \(\mathcal{E}^{s}_{\ell}\) refers to the effective channel for a level\(-\ell\) concatenated code. A particularly useful quantity is the average of logical channels \(\mathcal{E}^{s}_{\ell}\) over all syndrome outcomes, denoted by \(\overline{\mathcal{E}}_{\ell}\):
\[\overline{\mathcal{E}}_{\ell}=\sum_{s}\mathcal{E}^{s}_{\ell}\Pr(s)\;, \tag{2}\]
where \(\Pr(s)\) is the probability of observing the outcome \(s\)[15; 16; 7]. The average logical channel \(\overline{\mathcal{E}}_{\ell}\) indicates how quantum error correction suppresses the effect of physical errors, on average. We will use logical infidelity \(r(\overline{\mathcal{E}}_{\ell})\)[10; 15] as a measure of the logical error rate.
The average logical infidelity for a code under a noise process \(\mathcal{E}\) is calculated using the following equation: [2]
\[r(\overline{\mathcal{E}}_{1})=1-\sum_{\begin{subarray}{c}E,E^{\prime}\in \mathcal{E}_{C}\\ s(E)=s(E^{\prime}),\,\overline{E}=\overline{E}^{\prime}\end{subarray}}\phi(E) \ \phi^{\star}(E^{\prime})\ \chi_{E,E^{\prime}}\;, \tag{3}\]
where \(\chi_{i,j}\) represents the \((i,j)^{th}\) entry of the \(\chi-\)matrix of \(\mathcal{E}\), \(\mathcal{E}_{C}\) is the set of correctable errors, \(\overline{E}\) is the logical component in the decomposition of \(E\) with respect to the Stabilizer group and \(\phi(E)\) is specified by \(R_{s(E)}E=\phi(E)\ S\) for any Pauli error \(E\) and some stabilizer \(S\). We use this expression at various points to calculate the logical infidelity. To calculate the entries of the \(\chi-\)matrix of the effective logical channel we use the following general expression: [2]
\[\chi(\overline{\mathcal{E}}_{1})_{l,m}=\sum_{\begin{subarray}{c}E,E^{\prime} \in\mathcal{E}_{C}\\ s(E)=s(E^{\prime}),\,\overline{E}=\overline{E}^{\prime}\end{subarray}}\phi(E, l)\ \phi^{\star}(E^{\prime},m)\ \chi_{E\overline{P}_{l},\overline{P}_{m}E^{\prime}}\;. \tag{4}\]
where \(R_{s(E)}\)\(|E\ \overline{P}_{l}|=\phi(E,l)\ S\ |\overline{P}_{l}|\), for \(l\in\{0,1,2,3\}\), any Pauli error \(E\) and some stabilizer \(S\). Here \(|P|\) stands for the bare Pauli without any associated global phase.
We calculate the \(\chi-\)matrix for logical channels at higher levels i.e, for \(\ell>1\) by recursing the expression in Eq.(4) and using the entries of \(\chi(\overline{\mathcal{E}}_{\ell-1})\) in the right hand side to evaluate \(\chi(\overline{\mathcal{E}}_{\ell})\).
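Operationally, the level-\(\ell\) recursion is a repeated application of Eq. (4); schematically, with `logical_chi` a hypothetical routine implementing Eq. (4) for a single level of encoding:

```python
def chi_at_level(chi_phys, levels, logical_chi):
    """Hard decoding over a concatenated code: the level-(l-1) logical
    chi-matrix is fed back in as the physical chi-matrix at level l."""
    chi = chi_phys
    for _ in range(levels):
        chi = logical_chi(chi)  # one application of Eq. (4)
    return chi
```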
## III Methods
The goal of this paper is twofold. First, we want to identify important scenarios for physical errors wherein RC can be leveraged to improve the performance of quantum error correcting codes. Second, we want to identify settings under which such performance gains cannot be guaranteed. For the first goal, we study the performance of the concatenated Steane code under realistic error models. We start with simple rotations about the \(Z-\)axis and progressively move to arbitrary rotations, followed by a combination of coherent and stochastic error models. For the second goal, we generate numerical results for a large ensemble of noise processes belonging to more complex noise models, which involve random rotations on different qubits and arbitrary CPTP maps. All the performance metrics in this paper are derived in the memory model and assume perfect syndrome extraction. Simulations with gate-dependent errors can be pursued in the future.
For both goals, it is crucial to understand how RC can be applied alongside quantum error correction in practice. We follow the methods of Ref. [2]. The main idea can be summarized as follows. Recall that noise tailoring by randomized compiling is achieved by inserting random Pauli gates in a circuit such that their net effect does not change the logical output of the circuit. Consequently, the average output distribution of the circuit over all possible random Pauli gates can be understood by studying the response of the original circuit against Pauli noise on the individual components. In the same spirit, we insert random Pauli gates around all the individual components of a quantum error correction circuit. There is no need to account for sources of noise in the extra random Pauli gates because they can be absorbed into the original elements of the quantum error correcting circuit. The theory of RC prescribes exponentially many compilations of the underlying circuit to achieve perfect twirling. However, in practice, only a handful of compilations can be realized [17]. Despite this practical limitation, we assume the ideal application of RC in this paper for simplicity. We leave the details of this procedure to Appendix D.
We now have two variations of the average infidelity. First, the standard notion: the average infidelity over all syndrome outcomes, \(r(\overline{\mathcal{E}}_{1})\), defined in Eq. (3). Second, the average infidelity over syndrome outcomes as well as logically equivalent compilations of the quantum error correction circuit, which we will denote \(r_{\mathrm{rc}}\). Note that the number of random compilations for a circuit with \(n\) elements grows as \(\mathcal{O}(4^{n})\). In the ideal case, where we have considered all of these compilations in \(r_{\mathrm{rc}}(\overline{\mathcal{E}}_{1})\), it reduces to \(r(\overline{\mathcal{E}}_{1}^{T})\).
While Eq. (3) addresses the logical channel of a block code, we can easily extend these definitions to a concatenated code assuming a hard decoder [10; 16]. In this case, the logical channel at level\(-\ell\) can be recursively defined as a function whose input physical channels are the logical channels at level\(-(\ell-1)\). We will use the notation \(r(\overline{\mathcal{E}}_{\ell})\) and \(r(\overline{\mathcal{E}}_{\ell}^{T})\) to denote the logical channels of a level\(-\ell\) concatenated code without RC and with RC, respectively. Their ratio, denoted by \(\delta_{\ell}\), where
\[\delta_{\ell}=\frac{r(\overline{\mathcal{E}}_{\ell})}{r(\overline{\mathcal{E} ^{T}}_{\ell})}\;, \tag{5}\]
is an indicator of the performance gain due to RC, which we will estimate for various error models. Note that \(\delta_{\ell}>1\) indicates a performance gain whereas \(\delta_{\ell}<1\) denotes a performance loss.
## IV Results and discussion
This section is devoted to case studies of performance gains from RC for the concatenated Steane code, under various interesting classes of error models, and inferences we can draw from these studies. Markovian errors can be broadly classified into unital and non-unital maps. Since non-unital components of a noise map do not impact the error rate significantly [8; 18], we restrict our attention to unital maps in this paper. In particular, we choose coherent rotations which form an important class of unital maps. In practice, these typically arise from imperfect pulses used to implement quantum gates in the hardware. Interestingly, these are also the class of errors on which randomized compiling has the maximum effect of turning them into purely incoherent noise.
### Rotation about \(Z-\)axis
While we ideally want to study the impact of RC on the performance of a quantum error correcting code under general coherent errors, let us first start with a simple yet interesting model - rotations about the \(Z-\)axis. Although the RC process tailors the underlying physical noise irrespective of the choice of the code, through this example we show that the gains produced by RC can in fact be made arbitrarily large by choosing codes of increasing distance.
Recall that the rotation about \(Z-\)axis is specified by \(\rho\to R_{Z}(\omega)\rho R_{Z}(-\omega)\) where
\[R_{Z}(\omega)=\cos(\omega/2)\;I+i\sin(\omega/2)\;Z\;. \tag{6}\]
Applying the rotation independently across all \(n=7\) physical qubits of the Steane code is specified by the map
\[\mathcal{E}(\bar{\rho})=R_{Z}^{\otimes n}(\omega)\;\bar{\rho}\;R_{Z}^{\otimes n}(-\omega). \tag{7}\]
The performance of the Steane code under the above error model can be inferred from Eq. (3), where the correctable errors \(\mathcal{E}_{\mathcal{C}}\) can be defined with respect to the minimum weight decoder. Explicitly enumerating all correctable errors, we find that there are 22 correctable errors of weight at most one, and 42 two-qubit ones. Since we are confined to rotations about the \(Z-\)axis, we can limit ourselves to the correctable errors of \(Z-\)type. Reserving the details of our derivation to Appendix A, we find
\[r(\overline{\mathcal{E}}_{1})\approx 63\;(\omega/2)^{4}-476\;(\omega/2)^{6}+ \mathcal{O}(\omega^{8})\;. \tag{8}\]
In comparison, the logical infidelity for quantum error correction with randomized compiling is
\[r(\overline{\mathcal{E}^{T}}_{1})\approx 21\;(\omega/2)^{4}-112\;(\omega/2)^{6 }+\mathcal{O}(\omega^{8})\;. \tag{9}\]
Finally, the performance gain from RC, quantified using the metric \(\delta_{1}\) defined in Eq. (5), can now be estimated as
\[\delta_{1}=\frac{r(\overline{\mathcal{E}}_{1})}{r(\overline{\mathcal{E}^{T}} _{1})}\approx 3-\frac{5}{3}\;(\omega)^{2}+\mathcal{O}(\omega^{4})\;. \tag{10}\]
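These truncated expansions are easy to check numerically; the snippet below (a sketch of ours) compares the ratio of Eqs. (8) and (9) against the expansion in Eq. (10) for a small angle.

```python
w = 0.05                                  # small rotation angle (radians)
r_plain = 63*(w/2)**4 - 476*(w/2)**6      # Eq. (8), no RC
r_rc = 21*(w/2)**4 - 112*(w/2)**6         # Eq. (9), with RC
print(r_plain / r_rc)                     # exact ratio of the truncations
print(3 - (5/3)*w**2)                     # leading-order gain, Eq. (10)
```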
We now show that the above modest performance gains can be made arbitrarily large by concatenating the Steane code with itself. It is possible to extend the analysis above via recursion to approximate the effective logical channel of a level \(\ell\) concatenated Steane code for \(\ell>1\). The details of this procedure can be found in Appendix B. The approximate logical channel allows us to estimate the performance of the level \(\ell\) concatenated Steane code and study the impact of randomized compiling on it. To understand how the impact of RC varies with the number of levels, we can do a leading order analysis of the recursive relations used to construct the average logical channel, described in Appendix B. We find that for a small rotation angle \(\omega\), the average infidelity of the logical channel scales as
\[r(\overline{\mathcal{E}}_{\ell}) \approx 63^{2^{\ell}-1}(\omega/2)^{2^{\ell+1}}\;,\] \[r(\overline{\mathcal{E}^{T}}_{\ell}) \approx 21^{2^{\ell}-1}(\omega/2)^{2^{\ell+1}}\;. \tag{11}\]
Subsequently, the scaling of gain \(\delta_{\ell}\) with the levels of concatenation is given by
\[\delta_{\ell}\approx 3^{2^{\ell}-1}-(5\times 2^{\ell-1}\times 3^{2^{\ell}-3}) \omega^{2}+O(\omega^{4})\;. \tag{12}\]
Figure 1 corroborates this scaling law with the exact values of the logical error rates of the concatenated Steane code, in other words, showing that \(\log(\log(\delta_{\ell}))\) is approximately a linear function of \(\ell\). Note that the above analysis is accurate for small rotation angles.
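The doubly exponential behavior can be verified directly from the leading-order expressions; a quick numerical check of Eq. (12) (our sketch):

```python
import numpy as np

w = np.pi / 20
for ell in range(1, 6):
    # Eq. (12): delta_ell ~ 3^(2^ell - 1) minus the O(w^2) correction
    delta = 3.0**(2**ell - 1) - (5 * 2**(ell - 1) * 3.0**(2**ell - 3)) * w**2
    print(ell, np.log(np.log(delta)))   # grows ~ linearly in ell, cf. Fig. 1
```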
Varying the rotation angles leads us to another important discovery. Figure 2 shows the gains from randomized compiling for a range of rotation angles for levels \(1\leq\ell\leq 5\). The gains from RC grow significantly with the number of levels of the code. The figure suggests the presence of a
Figure 1: The above figure shows that the gain \(\delta_{\ell}\) at level \(\ell\) scales doubly exponentially with \(\ell\). The rotation angle used here is \(\omega=\pi/20\).
threshold rotation angle \(\omega_{\star}\) below which arbitrary gains from RC can be achieved by increasing the size of the code (levels of concatenation). On the contrary, for rotations \(\omega>\omega_{\star}\), the trend reverses.
We now turn to more general noise models, where we will find that the presence of a threshold in the case of rotations about the \(Z-\)axis, extends to the general case.
### Rotation about an arbitrary axis
While the above analysis considered coherent error models described by rotations about the \(Z-\)axis, it is straightforward to apply these ideas to rotations about any of the Pauli axes. We now investigate average gains due to RC for a rotation about an arbitrary axis.
We consider a general error model where the physical qubits of a code undergo rotations about an arbitrary axis of the Bloch sphere, described by the unitary matrix \(U\), i.e., \(\mathcal{E}(\bar{\rho})=U^{\otimes n}\bar{\rho}(U^{\dagger})^{\otimes n}\). The following parameterization of \(U\) [19] is useful for our analysis:
\[U=\begin{pmatrix}\cos(\omega/2)+i\sin(\omega/2)\cos(\theta)&ie^{-i\phi}\sin(\omega/2)\sin(\theta)\\ ie^{i\phi}\sin(\omega/2)\sin(\theta)&\cos(\omega/2)-i\sin(\omega/2)\cos(\theta)\end{pmatrix},\]
where \(0\leq\theta\leq\pi\) and \(0\leq\phi\leq 2\pi\) define the axis (in polar angles) about which each qubit is rotated, and \(\omega\) gives the magnitude of the rotation. For example, \(\theta=\phi=0\) can be identified with rotations about the \(Z-\)axis. The performance gain from RC can be defined following Eq. (10), as a function of these parameters: \(\delta_{\ell}(\theta,\phi,\omega)\). The average gain for an unknown axis is computed as
\[\overline{\delta}_{\ell}(\omega)=\frac{1}{2\pi}\int_{0}^{2\pi}d\phi\int_{0}^{ \pi}\sin(\theta)\:d\theta\;\delta_{\ell}(\theta,\phi,\omega)\;, \tag{13}\]
for \(\ell=1\). Likewise, for concatenated codes, \(\overline{\delta}_{\ell}\) denotes the average gain in performance at level \(\ell\). The conclusions are similar to those drawn for the case of rotations about the \(Z-\)axis. First of all, we see that for all coherent errors RC improves the performance of the Steane code. Furthermore, performance gains are largest for coherent errors that correspond to rotations about the \(X\), \(Y\) or \(Z\) axes.
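Numerically, the axis average of Eq. (13) is a simple quadrature; the sketch below assumes a user-supplied gain function `delta_fn(theta, phi, w)` (the toy axis dependence at the end is purely illustrative) and mirrors the normalization written in Eq. (13).

```python
import numpy as np

def axis_average(delta_fn, w, n_theta=91, n_phi=181):
    # Quadrature version of Eq. (13): integrate delta(theta, phi, w) over
    # the rotation axis with the sin(theta) measure (prefactor as in text).
    th = np.linspace(0.0, np.pi, n_theta)
    ph = np.linspace(0.0, 2*np.pi, n_phi)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    vals = delta_fn(TH, PH, w) * np.sin(TH)
    inner = np.trapz(vals, ph, axis=1)
    return np.trapz(inner, th) / (2*np.pi)

# Toy axis dependence, purely illustrative (not the actual gain function):
toy = lambda th, ph, w: 3.0 - (5/3) * (w * np.cos(th))**2
print(axis_average(toy, 0.2))
```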
Using the general techniques developed in the appendix to approximate the effective logical channel of a level\(-\ell\) concatenated code, we can estimate the gains \(\overline{\delta}_{\ell}\) in average performance due to RC over the various rotation axes. Similar to the case of \(Z-\)rotations, Fig. 3 suggests the presence of a threshold \(\overline{\omega}_{\star}\) wherein for rotation angles \(\omega\leq\overline{\omega}_{\star}\) the gains can be arbitrarily increased by choosing codes of larger distance, whereas the trend reverses for \(\omega>\overline{\omega}_{\star}\).
Note that the threshold angle \(\overline{\omega}_{\star}\) for rotations about an unknown axis is higher than the threshold for rotations about the \(Z-\)axis, i.e., \(\overline{\omega}_{\star}>\omega_{\star}\). This can be explained as follows. In the case of a generic non-Pauli axis, the twirled noise model, i.e., the noise in the presence of RC, is composed of a probabilistic mixture of \(X,Y\) and \(Z\) type errors, whereas in the case of a fixed Pauli axis, we only have errors of one type (either \(X,Y\) or \(Z\)). For a fixed error budget, specified by the fidelity, the case of a non-Pauli axis results in the error strength being spread over a larger number of correctable errors than the case of a fixed Pauli axis, which would include relatively higher weight Pauli errors of one type. Hence, the Steane code has better error correction capability. Figure 4 supports our argument by showing that the threshold angle for performance gains from RC under rotations about various axes is higher for non-Pauli axes compared to the Pauli ones. As a consequence, we also note that for rotation angles \(\omega_{\star}<\omega<\overline{\omega}_{\star}\), the largest gains from
Figure 3: The average gain in performance from RC, using the Haar average over all axes of rotation, for the level \(\ell\) concatenated Steane code. The average gains are larger for small magnitudes of rotation. We observe that the gains increase significantly with the number of levels for \(\omega\leq\overline{\omega}_{\star}\approx 0.65\), which corresponds to a rotation angle of about \(19^{\circ}\).
Figure 2: Gains in logical performance, \(\delta_{\ell}\), of a level \(\ell\) concatenated Steane code for rotations by angle \(\omega\) about the \(Z-\)axis. The common crossover point lies at \(\omega_{\star}\approx 0.51\), which corresponds to a rotation angle of about \(15^{\circ}\), below which gains from RC can be amplified by increasing the number of levels of concatenation.
RC are achieved for rotation axes that lie between the \(X,Y\) and \(Z\) axes, as opposed to the individual Pauli axes.
### Composition of coherent and stochastic map
So far, we have shown that RC always improves the performance of quantum error correcting codes under coherent errors. Generic unital maps can be approximately described as a composition of a coherent error and a Pauli error model [20; 21]. In what follows, we consider a more general unital map where we model coherent errors in the same fashion as in the previous section and, for the Pauli errors, we choose the depolarizing error model, i.e.,
\[\mathcal{E}\simeq(\mathcal{E}_{dep}\circ\mathcal{E}_{coh})^{\otimes n}, \tag{14}\]
where
\[\mathcal{E}_{coh}(\rho) =U\rho U^{\dagger},\] \[\mathcal{E}_{dep}(\rho) =(1-p)\rho+\frac{p}{2}\mathbb{I}. \tag{15}\]
and \(U\) can be parameterized as in the previous subsection. In what follows, we will study the impact of RC under the approximation given by Eq. (14). Note that both the coherent and the incoherent parts of the error model contribute to the strength of the noise, for instance, to the average gate fidelity. While RC only affects the coherent part of the error process, we expect that for a fixed noise strength, the performance gain due to RC under the error model described above will diminish with increasing \(p\). This expectation is supported by the numerical simulations presented in Fig. 5, where we show numerical estimates of \(\overline{\delta}_{\ell}(\omega,p)\) for several depolarizing strengths \(p\). Here, \(\overline{\delta}_{\ell}(\omega,p)\) is defined analogously to Eq. (13) as
\[\overline{\delta}_{\ell}(\omega,p)=\frac{1}{2\pi}\int_{0}^{2\pi}\int_{0}^{\pi }\ \delta_{\ell}(\theta,\phi,\omega,p)\ \sin(\theta)\ d\theta\ d\phi. \tag{16}\]
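At the single-qubit level, the composed map of Eqs. (14)-(15) can be sketched directly in terms of Kraus operators (our illustration; the coherent part is taken about the \(Z\)-axis here, and the numbers are placeholders):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def dep_kraus(p):
    # Depolarizing map of Eq. (15): rho -> (1-p) rho + (p/2) I
    #                                   = (1-3p/4) rho + (p/4) sum_P P rho P
    return [np.sqrt(1 - 3*p/4) * I2] + [np.sqrt(p/4) * P for P in (X, Y, Z)]

def compose(kraus_a, kraus_b):
    # Kraus set of E_a o E_b (E_b acts first), as in Eq. (14)
    return [A @ B for A in kraus_a for B in kraus_b]

w, p = 0.1, 1e-3
coh = [np.cos(w/2) * I2 + 1j * np.sin(w/2) * Z]   # coherent part (Z axis)
channel = compose(dep_kraus(p), coh)
print(sum(K.conj().T @ K for K in channel))       # trace preservation: identity
```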
Note that in all of the error models considered so far, we have only observed gains in performance due to RC. However, amongst the most general CPTP maps including the unital as well as non-unital types, we have identified cases under which RC can lead to a loss in the performance. Some examples of these maps are mentioned in Appendix C.
## V Conclusion
The application of randomized compiling in fault tolerance is attractive for two reasons. First, amongst the exponentially growing number of parameters controlling a physical noise process, RC effectively eliminates the impact of most of them on a QEC scheme. Second, since RC removes multiple noise sources, we expect the code to perform better. This paper provides concrete evidence to show that RC improves the performance of quantum error correction under a wide class of coherent errors. We have identified noise regimes where gains are drastic for
Figure 5: The impact of the depolarizing component on the gains from RC. We fix the average infidelity per qubit to be \(r\approx 0.003\) and increase the value of the depolarizing strength from \(p=10^{-4}\) to \(p=10^{-3}\). The value of \(\omega\) corresponding to each value of \(p\) is chosen such that the total physical infidelity of the qubit remains constant. We observe that the gains from RC diminish with increase in depolarizing strength. This is because RC does not impact the stochastic component of the noise model.
the case of concatenated Steane codes. In particular, the gain grows doubly exponentially with the number of levels under small rotations about a Pauli axis. Our results can be extended to guarantee performance gains under generic unital noise processes, leveraging tools from [20; 21] that approximate a unital noise process as a composition of a coherent and an incoherent error model. These observations strengthen the need for active noise tailoring methods as a crucial component of a fault tolerant scheme.
Performance gains offered by RC also depend on the strength of errors affecting the physical qubits. We stumbled upon an interesting observation that the gains decrease when the magnitude of the coherent rotation error passes beyond a threshold value. To the best of our knowledge, a threshold of this nature has not been reported in earlier works. The threshold helps estimate the maximum amount of noise that can be alleviated on a hardware device by leveraging RC tools. We also carried out extensive studies to analyze the variation of this threshold with the features of the underlying coherent error model.
Beyond the paradigm of identical unital maps across all physical qubits, we argue that universal conclusions about performance gains due to RC cannot be made, i.e., the gain depends strongly on the microscopic details of the underlying physical noise process. Our arguments are strengthened by numerical studies of complex physical noise processes that revealed some cases where the code's performance can also degrade in the presence of RC. In Ref. [16], it was shown that twirled noise processes may improve or degrade thresholds depending on the decoding algorithm used. In this paper we arrive at a similar conclusion by exploring different error models for the minimum weight decoder.
Obtaining efficiently computable estimates for performance gains due to RC in different experimental setups would be crucial to optimizing fault tolerance resources in near-term applications. In the absence of exact values, it would be useful to provide bounds for the impact of RC on the code's performance. Although RC's impact on performance depends strongly on the underlying noise process, it is still interesting to see that it can provide significant gains for a wide variety of realistic error models and relevant error regimes.
To ensure a performance gain from a noise tailoring technique such as RC, ideally we want to cancel the impact of those terms in the underlying noise process which correspond to uncorrectable errors - since these add to the logical infidelity. It would be worthwhile to explore ways of controlling physical noise sources to ensure that RC always offers a gain in performance. It would also be interesting to explore different twirling gate sets that can tailor the noise process to suppress terms that contribute negatively to the logical channel's fidelity. Although we identified a handful of cases where a performance loss is observed, it will be worthwhile to develop cheap experimental protocols to ascertain whether performing error correction with RC will be significantly beneficial for a given device.
###### Acknowledgements.
This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund. Research was partially sponsored by the ARO and was accomplished under Grant Number: W911NF-21-1-0007. SDB acknowledges support from the Australian Research Council (ARC) via the Centre of Excellence in Engineered Quantum Systems (EQuS) project number CE170100009.
|
2301.01788 | Nucleon Energy Correlators for the Color Glass Condensate | We demonstrate the recently proposed nucleon energy-energy correlator
(nucleon EEC) $f_{\rm EEC}(x,\theta)$ can unveil the gluon saturation in the
small-$x$ regime in $eA$ collisions. The novelty of this probe is that it is
fully inclusive just like the deep-inelastic scattering (DIS), with no
requirements of jets or hadrons, but still provides an evident portal to the
small-$x$ dynamics through the shape of the $\theta$-distribution. We find that
the saturation prediction is significantly different from the expectation of
the collinear factorization. | Hao-Yu Liu, Xiaohui Liu, Ji-Chen Pan, Feng Yuan, Hua Xing Zhu | 2023-01-04T19:00:52Z | http://arxiv.org/abs/2301.01788v2 | # Nucleon Energy Correlators for the Color Glass Condensate
###### Abstract
We demonstrate the recently proposed nucleon energy-energy correlator (nucleon EEC) \(f_{\rm EEC}(x,\theta)\) can unveil the gluon saturation in the small-\(x\) regime in \(eA\) collisions. The novelty of this probe is that it is fully inclusive just like the deep-inelastic scattering (DIS), with no requirements of jets or hadrons, but still provides an evident portal to the small-\(x\) dynamics through the shape of the \(\theta\)-distribution. We find that the saturation prediction is significantly different from the expectation of the collinear factorization.
_Introduction._ Small-\(x\) gluon saturation [1; 2; 3; 4; 5; 6] has been one of the central focuses in nuclear physics community in recent years and will be a major research area in the future Electron Ion Collider (EIC) [7; 8; 9]. An effective field theory called color-glass-condensate (CGC) [4; 5; 6] has been established to compute the hadronic and nuclear structure functions in deep inelastic scattering (DIS) at small values of Bjorken-\(x_{B}\)[10; 11]. The CGC predicts the gluon saturation with a characteristic scale \(Q_{s}\), as a consequence of the small-\(x\) nonlinear dynamics governed by the BK-JIMWLK equation [12; 13; 14; 15; 16; 17]. The saturation scale \(Q_{s}\) represents the typical size of the gluon transverse momentum inside the nucleus and grows as the momentum fraction \(x\to 0\). For large nucleus and small-\(x\), typically \(Q_{s}>\Lambda_{\rm QCD}\).
Previous experiments on DIS in \(ep\) collisions at HERA and hadron production in \(pA\) collisions at RHIC and the LHC have shown some evidence of gluon saturation at small-\(x\). With the planned EIC on the horizon, this physics will be explored in a systematic manner with unprecedented precision [7; 8; 9]. Extensive studies have been carried out for the EIC experiments, including the inclusive DIS structure functions at small-\(x_{B}\)[18; 19; 20] and the azimuthal correlations of di-jet/di-hadron/photon-jet/lepton-jet in inclusive or diffractive processes [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. These processes are considered promising channels to look for gluon saturation in \(eA\) collisions.
In this manuscript, we present a novel approach to probe the gluon saturation in \(eA\) collisions in terms of the nucleon energy-energy correlator (nucleon EEC) recently proposed in Ref. [57], which is an extension of the EEC [58; 59] to the nucleon case. The EEC is the vacuum expectation of a set of final state correlators used to reformulate jet substructure [60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84], while the nucleon EEC is the nucleon expectation of the initial-final state correlator. The latter encodes the partonic angular distribution induced by the intrinsic transverse momentum within the nucleon [57]. Therefore we expect the features of the gluon saturation, especially the saturation scale \(Q_{s}\) that measures the size of the intrinsic transverse momentum, to be naturally imprinted in the nucleon EEC. Our numerical results in Figs. 5 and 6 will show that the saturation predictions behave distinctly differently from those of the collinear factorization. From this comparison, we can further deduce the saturation scales in \(ep\) and \(eA\) collisions, respectively.
The quark contribution to the nucleon EEC in the momentum space is defined as
\[f_{q,{\rm EEC}}(x,\theta)=\int\frac{dy^{-}}{4\pi E_{A}}e^{-ixP\,y^{-}}\gamma^{+ }\langle A|\bar{\chi}(y^{-})\,\mathcal{E}(\theta)\chi(0)|A\rangle\,, \tag{1}\]
where \(x\) is the momentum fraction that initiates a scattering process, while we measure the energy deposited in a detector at a given angle \(\theta\) from the initial state radiation and the remnants through the energy operator \(\mathcal{E}(\theta)=\lim_{r\to\infty}\int_{0}^{\infty}dt\,T_{0\bar{n}}(t,\bar{n}r)r^{2}\)[90; 91; 92; 93], \(\mathcal{E}(\theta)|X\rangle=\sum_{i\in X}E_{i}\delta(\theta_{i}^{2}-\theta^{2})|X\rangle\,\). The measured energy deposit is normalized to the energy \(E_{A}\) carried by the nucleus \(A\). Here, \(\chi\) is the gauge invariant collinear quark field [85; 86; 87; 88; 89]. The gluon EEC can be defined similarly. When \(\theta E_{A}\sim\Lambda_{\rm QCD}\), the \(f_{\rm EEC}\) probes the intrinsic transverse dynamics of the nucleus \(A\) through the operator \(\mathcal{E}(\theta)\).
In the collinear factorization, it has been shown that when \(\theta E_{A}\gg\Lambda_{\rm QCD}\), the \(f_{q,{\rm EEC}}(x,\theta)\) can be further factorized as [57]
\[f_{i,{\rm EEC}}(x,\theta)=\int\frac{d\xi}{\xi}I_{ij}\left(\frac{x}{\xi},\theta \right)\left[\xi f_{j/A}\left(\xi\right)\right]\,, \tag{2}\]
where \(f_{j/A}(\xi)\) is the collinear PDF, and \(I_{ij}\) is the matching coefficient found to be solely determined by the vacuum collinear splitting functions [57].
As the value of \(x\) decreases, the \(f_{q,{\rm EEC}}\) receives dramatically enhanced contributions from low-\(x\) gluons. In this regime, the non-linear small-\(x\) dynamics becomes important. Consequently, compared to the collinear factorization, in which the distribution is determined by vacuum collinear splitting, the shape of the \(\theta\)-distribution will be modified, due to a sizable initial transverse momentum \(q_{t}\) of order the saturation scale \(Q_{s}\); see the illustrations in Fig. 1. Therefore, the nucleon EEC can
be used to probe the gluon saturation phenomenon and the small-\(x\) dynamics, as we will show in the rest of this manuscript.
_The measurement and the factorization theorem._ We follow [57] and consider the unpolarized DIS process \(l+A\to l^{\prime}+X\) in the Breit frame. We assume the nucleus is moving along the \(+z\)-direction. We measure the Bjorken variable \(x_{B}=\frac{-q^{2}}{2P\cdot q}\), the photon virtuality \(Q^{2}=-q^{2}\) and the energy \(\sum_{i}E_{i}\) deposited in a calorimeter at an angle \(\theta\) with respect to the beam, as shown in Fig. 2. Here \(q=l^{\prime}-l\) is the momentum carried by the virtual photon. We then measure the weighted cross section \(\Sigma(Q^{2},x_{B},\theta)\) defined as
\[\Sigma(Q^{2},x_{B},\theta)=\sum_{i}\int d\sigma(x_{B},Q^{2},p_{i})\,\frac{E_{ i}}{E_{A}}\,\delta(\theta^{2}-\theta_{i}^{2})\,, \tag{3}\]
where \(E_{A}\) is the energy carried by the incoming nucleus. We note that the energy weight suppresses the soft contributions, which is an important feature of the proposed measurement and its resulting nucleon EEC.
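Operationally, Eq. (3) is an energy-weighted histogram of final-state particles in \(\theta\); a minimal sketch of such an estimator from a sample of simulated events (all names are ours, and events carry unit weight for simplicity) reads:

```python
import numpy as np

def sigma_weighted(events, E_A, theta_edges):
    # Estimator for Eq. (3): each event is a list of (E_i, theta_i) pairs;
    # the delta function becomes a histogram in theta^2.
    t2_edges = theta_edges**2
    hist = np.zeros(len(theta_edges) - 1)
    for particles in events:
        for E_i, th_i in particles:
            k = np.searchsorted(t2_edges, th_i**2) - 1
            if 0 <= k < hist.size:
                hist[k] += E_i / E_A
    return hist / (np.diff(t2_edges) * len(events))  # dSigma/dtheta^2

# e.g. sigma_weighted([[(10., 0.1), (4., 0.3)]], E_A=50.,
#                     theta_edges=np.linspace(0.05, 0.5, 10))
```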
In order to probe the small-\(x\) dynamics, we are particularly interested in the scenario in which \(x_{B}\ll 0.1\), and we place the detector in the far-forward region such that \(Q\theta\ll Q\) while \(Q\theta\sim Q_{s}\gg\Lambda_{\rm QCD}\). At this point, we emphasize that the measurement involves neither additional hadron tagging nor jet clustering, and, in contrast to TMD measurements, which restrict the events to the small-\(q_{t}\) region, this approach is inclusive and does not veto events. It weights the full cross section by the energy recorded at a given angle \(\theta\); therefore the probe is as inclusive as DIS but with additional control via \(\theta\).
When \(\theta Q\gg\Lambda_{\rm QCD}\), the weighted cross section can be calculated perturbatively in the collinear factorization. More interestingly, when \(Q\theta\ll Q\), it has been shown that the \(\Sigma(Q^{2},x_{B},\theta)\) fulfils the factorized form [57]
\[\Sigma(Q^{2},x_{B},\theta)=\int\frac{dx}{x}\hat{\sigma}_{i,\rm DIS}\left(\frac {x_{B}}{x},Q\right)f_{i,\rm EEC}(x,\theta)\,, \tag{4}\]
where \(\hat{\sigma}_{i,\rm DIS}\) is the fully inclusive partonic DIS cross section. \(f_{i,\rm EEC}\) is the nucleon EEC in Eq. (1). The \(\theta\)-dependence enters entirely through the nucleon EEC \(f_{\rm EEC}(x,\theta)\), and therefore the \(\theta\) distribution of the \(\Sigma(Q^{2},x_{B},\theta)\) probes the nucleon EEC when \(\theta\) is small. We note that \(f_{\rm EEC}\) satisfies the same collinear evolution as the collinear PDFs [57]
\[\frac{df_{i,\rm EEC}(x,\theta)}{d\ln\mu}=P_{ij}\otimes f_{j,\rm EEC}\,, \tag{5}\]
as required by \(d\Sigma/d\ln\mu=0\), and since \(d\hat{\sigma}_{i,\rm DIS}/d\ln\mu=-P_{ji}\otimes\hat{\sigma}_{j,\rm DIS}\). Here the convolution in the momentum fraction is defined as \(f\otimes g(x)\equiv\int_{x}^{1}\frac{dz}{z}f\left(\frac{x}{z}\right)g(z)\). It is clear from the evolution that there is no perturbative Sudakov suppression in \(f_{\rm EEC}\): the soft contribution in the collinear factorization is eliminated by the energy weight [57].
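For concreteness, the convolution used here and in Eq. (2) can be implemented directly; a small sketch with a toy integrand (the kernel normalization and distribution below are illustrative only):

```python
from scipy.integrate import quad

def conv(f, g, x):
    # (f ⊗ g)(x) = ∫_x^1 dz/z f(x/z) g(z)
    val, _ = quad(lambda z: f(x / z) * g(z) / z, x, 1.0)
    return val

# e.g. a g -> q splitting kernel convoluted with a toy distribution:
Pqg = lambda z: 0.5 * (z**2 + (1 - z)**2)   # T_R = 1/2
toy = lambda z: (1 - z)**3
print(conv(Pqg, toy, 0.01))
```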
The factorization theorem in Eq. (4) and Eq. (2) can be easily understood by considering the leading contribution shown in Fig. 3, where a parton out of the nucleus \(A\) with momentum \(\xi P\) splits into a parton with momentum fraction \((1-z)\xi\) that hits the detector at \(\theta\), and an internal line with fraction \(z\xi\) and virtuality \(t=-\frac{\vec{k}_{t}^{2}}{1-z}\) that initiates the partonic inclusive DIS process. Here \(z\) is the momentum fraction with respect to the incoming parton
Figure 1: The \(f_{EEC}(x,\theta)\) in the collinear factorization (left) and the CGC framework (right). Here \(Q\) represents the center of mass energy of the partonic cross section.
Figure 3: The collinear splitting that initiates the DIS process and a daughter parton that hits the detector at \(\theta\ll 1\). The momentum fractions are shown. We abbreviate \(P^{+}\) as \(P\) in this work for notational simplicity.
Figure 2: The \(x_{B}\) and \(Q^{2}\) measurement in DIS with a forward detector that records the energy flow \(\sum_{i}E_{i}\) at the angle \(\theta\).
and \(k_{t}=\frac{1}{2}\xi(1-z)P\,\theta\) is the transverse momentum of the final state parton. In the vacuum, the splitting is described by the leading order vacuum collinear splitting kernel \(\frac{1}{t}\,P_{ij}^{(0)}\). Since \(\theta Q\ll Q\), the chance for the radiations from the hard interaction to reach the calorimeter vanishes as \(\theta\to 0\). It is then found that in the small \(\theta\) limit,
\[\Sigma(Q^{2},x_{B},\theta)=\int\frac{dx_{i}}{x_{i}}\hat{\sigma}_{i,\mathrm{DIS}}\left(Q^{2},\frac{x_{B}}{x_{i}}\right)\] \[\times\int d\xi dz\frac{1}{\theta^{2}}\delta(x_{i}-\xi z)(1-z)\xi P_{ij}^{(0)}\left(z\right)f_{j/A}(\xi)\,, \tag{6}\]
which, after performing the \(z\) integration, gives
\[\Sigma(Q^{2},x_{B},\theta)=\int\frac{dx_{i}}{x_{i}}\hat{\sigma}_{ i,\mathrm{DIS}}\left(Q^{2},\frac{x_{B}}{x_{i}}\right)\] \[\times\frac{1}{\theta^{2}}\int\frac{d\xi}{\xi}\left(1-\frac{x_{i }}{\xi}\right)P_{ij}^{(0)}\left(\frac{x_{i}}{\xi}\right)\left[\xi f_{j/A}( \xi)\right]. \tag{7}\]
This produces the factorized form in Eq. (4) and Eq. (2) by identifying the leading order matching coefficient \(I_{ij}^{(0)}(\xi,\theta)=\frac{1}{\theta^{2}}(1-\xi)P_{ij}^{(0)}(\xi)\).
If \(x_{B}\ll 0.1\), the gluon density is overwhelmingly large and the leading contribution to the \(\Sigma(Q^{2},x_{B},\theta)\) is coming from
\[\Sigma(Q^{2},x_{B},\theta)=\sum_{q}\frac{4\pi\alpha^{2}e_{q}^{2}}{Q^{4}}\,f_{ q,\mathrm{EEC}}(x_{B},\theta)\,, \tag{8}\]
with
\[f_{q,\mathrm{EEC}}(x,\theta)\] \[= \frac{\alpha_{s}T_{R}}{2\pi\theta^{2}}\int_{x}^{1}\frac{d\xi}{ \xi}(1-\xi)(\xi^{2}+(1-\xi)^{2})\,\left[\frac{x}{\xi}f_{g}\left(\frac{x}{\xi} \right)\right]\,. \tag{9}\]
The collinear factorization predicts a \(\frac{1}{\theta^{2}}\)-scaling behavior at \(\mathcal{O}(\alpha_{s})\). For very small \(\theta\), the scaling rule could receive corrections from both the evolution of the \(f_{\mathrm{EEC}}\) in Eq. (5) and non-perturbative effects. But for generic small \(\theta\), these effects are mild, and therefore \(\theta^{2}\Sigma\) will be insensitive to the value of \(\theta\), up to \(\mathcal{O}(\theta Q)\) power corrections. Furthermore, since the energy weight kills the soft contribution, to all orders there will be no perturbative Sudakov suppression in the small-\(\theta\) region in the collinear factorization [57], as is clear from Eq. (5). Such a feature will be modified by the small-\(x\) dynamics, as we will show.
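As an illustration of this scaling, Eq. (9) can be evaluated with a toy gluon distribution (the parameterization of \(xf_{g}\) below is made up for illustration, not a fit):

```python
import numpy as np
from scipy.integrate import quad

def xfg(x):
    # toy gluon distribution x f_g(x); illustrative only
    return 3.0 * x**(-0.2) * (1 - x)**5

def f_qEEC(x, theta, alpha_s=0.3, TR=0.5):
    # Eq. (9): quark nucleon EEC from the g -> q qbar splitting
    integrand = lambda xi: (1 - xi) * (xi**2 + (1 - xi)**2) * xfg(x/xi) / xi
    val, _ = quad(integrand, x, 1.0)
    return alpha_s * TR / (2 * np.pi * theta**2) * val

for th in (0.1, 0.2, 0.4):
    print(th, th**2 * f_qEEC(3e-3, th))   # flat in theta: the 1/theta^2 law
```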
_The nucleon EEC in the small-\(x\) regime._ In the small-\(x\) region, the gluon density grows as \(\frac{1}{x}\), becomes overwhelmingly important, and has to be resummed to all orders. To realize such resummation in \(f_{\rm EEC}\), we invoke the CGC effective theory framework and follow the strategy in [94; 95; 96] to write the nucleon EEC in terms of the CGC dipole distribution (see Footnote 1). By evaluating the diagrams in Fig. 4, we find in the leading logarithmic (LL) approximation
Footnote 1: The complete calculation using the full dipole amplitude \(\psi_{T,L}^{\gamma^{*}\to q\bar{q}}\) for \(\gamma^{*}\to q\bar{q}\) is presented in the Supplemental Material. Both approaches agree in the small \(\theta\) limit.
\[f_{q,\mathrm{EEC}}(x_{B},\theta)=\frac{N_{C}S_{\perp}}{8\pi^{4} }\int d^{2}\vec{g}_{t}\] \[\times\int_{\xi_{\mathrm{cut}}}^{1}\frac{d\xi}{\xi}\mathcal{A}_{ qg}\left(\xi,\theta,\vec{g}_{t}\right)\,F_{g,x_{B}}(\vec{g}_{t})\,, \tag{10}\]
where \(S_{\perp}\) is the averaged transverse area of the target nucleus and \(g_{t}\sim Q_{s}\sim\theta Q\) is the transverse momentum transfer. \(F_{g,x_{F}}(\vec{g}_{t})=\int\frac{d^{2}\vec{r}_{t}}{4\pi^{2}}e^{-i\vec{g}_{t}\cdot\vec{r}_{t}}S_{x_{F}}^{(2)}(\vec{r}_{t})\) is the CGC dipole distribution evaluated at the scale \(x_{F}\), where \(S_{x_{F}}^{(2)}(\vec{r}_{t})=\frac{1}{N_{c}}\langle\mathrm{Tr}[W(\vec{r}_{t})W^{\dagger}(\vec{0})]\rangle_{x_{F}}\). Here \(x_{F}\frac{Q}{x_{B}}\) is the rapidity scale/boundary that separates the fast moving modes being integrated out from the active slow moving partons in the CGC effective framework. In this work, we default to the natural choice \(x_{F}=x_{B}\). \(\frac{1-\xi}{\xi}Q\) is the momentum "\(+\)"-component that enters the detector. \(\xi_{\mathrm{cut}}\) is determined by requiring that the momentum of the active quark does not exceed the rapidity boundary. Here the coefficient \(\mathcal{A}_{qg}\) is given by
\[\mathcal{A}_{qg}(\xi,\theta,\vec{g}_{t})=\frac{1}{\theta^{2}}(1- \xi)\vec{k}_{t}^{2}(\vec{k}_{t}-\vec{g}_{t})^{2}\] \[\times\left|\frac{\vec{k}_{t}}{\xi\vec{k}_{t}^{2}+(1-\xi)(\vec{k }_{t}-\vec{g}_{t})^{2}}-\frac{\vec{k}_{t}-\vec{g}_{t}}{(\vec{k}_{t}-\vec{g}_{t })^{2}}\right|^{2}\,, \tag{11}\]
with \(k_{t}\) defined as \(k_{t}=\frac{1-\xi}{\xi}\frac{Q}{2}\theta\), which should be of order \(Q_{s}\).
It is easy to show that if \(g_{t}\sim Q_{s}\ll Q\theta\), Eq. (10) reduces to the \(\frac{1}{\theta^{2}}\)-scaling behavior of the collinear factorization in Eq. (9). On the other hand, if \(\theta Q\ll Q_{s}\), Eq. (10) scales as \(\theta^{0}\). We thus expect that in the CGC, \(\theta^{2}\Sigma\) will be independent of \(\theta\) for \(\theta Q\gg Q_{s}\) but, contrary to the collinear factorization, suppressed when \(\theta Q\ll Q_{s}\). Meanwhile, the \(\theta\) region between these two limits provides the opportunity to estimate the saturation scale \(Q_{s}\).
_Numerics._ Now we study the numerical impacts of the small-\(x\) dynamics on the shape of the \(\theta^{2}\Sigma(Q^{2},x_{B},\theta)\) distribution from Eq. (10), compared with the collinear prediction. We are particularly interested in the region \(\theta\ll 1\) where the \(\theta\) distribution probes directly the \(f_{\rm EEC}(x,\theta)\), see Eq. (4). For the small-\(x\) dipole distribution \(S_{xF}^{(2)}(\vec{r}_{t})\), we use both the MV model with rcBK running [12; 13; 18; 97; 98; 99; 100; 101; 102; 103; 104] and the GBW model [105].
As for the MV model with rcBK running, we adopt the MV-like model [106] as the initial condition, whose form is \(S_{x_{0}}^{(2)}(\vec{r}_{t})=\exp\left[-\frac{(r_{t}^{2}Q_{s0}^{2})^{\gamma}}{4}\ln\left(\frac{1}{\Lambda r_{t}}+e\right)\right]\), where we choose \(x_{0}=0.01\), \(\gamma=1.119\), \(\Lambda=0.241\,\text{GeV}\), \(Q_{s0}^{2}=A^{1/3}\,0.168\,\text{GeV}^{2}\) with \(A\) the atomic number. We use the solution to the LL BK evolution with running \(\alpha_{s}\) [98; 101; 106] to evolve the dipole distribution from \(x_{0}\) to \(x_{F}\). In our calculation, we use the result fitted from the HERA data for the transverse area of the nucleus \(S_{\perp}\) [19]. The GBW model is implemented using \(S_{x_{F}}^{(2)}(\vec{r}_{t})=\exp\left[-\frac{1}{4}r_{t}^{2}Q_{s}^{2}(x_{F})\right]\), where \(Q_{s}^{2}(x_{F})=A_{N}(x_{0}/x_{F})^{\lambda}\,\text{GeV}^{2}\) and we use \(x_{0}=2.24\times 10^{-4}\), \(\lambda=0.27\) and \(A_{N}=1\) for the proton while \(A_{N}=5\) for the Au [107].
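For the GBW model, the dipole Fourier transform entering Eq. (10) is simple enough to sketch directly (our sketch; the known closed form is printed as a cross-check):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def Qs2(xF, A_N=1.0, x0=2.24e-4, lam=0.27):
    return A_N * (x0 / xF)**lam                  # GBW saturation scale, GeV^2

def F_gbw(gt, xF, A_N=1.0):
    # F_{g,xF}(gt) = ∫ d^2r/(4 pi^2) e^{-i gt·r} exp(-r^2 Qs^2/4)
    #             = (1/2pi) ∫_0^inf r dr J0(gt r) exp(-r^2 Qs^2/4)
    q2 = Qs2(xF, A_N)
    val, _ = quad(lambda r: r * j0(gt * r) * np.exp(-r*r*q2/4), 0.0, np.inf)
    return val / (2 * np.pi)

gt, xF = 1.0, 3e-3
print(F_gbw(gt, xF))
print(np.exp(-gt**2 / Qs2(xF)) / (np.pi * Qs2(xF)))  # analytic result
```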
In Fig. 5, we show the CGC predictions for \(\theta^{2}\Sigma(Q^{2},x_{B},\theta)\) as a function of \(\theta\). Since we are only interested in the shape, we normalize the distribution by \(\int_{\theta_{\rm min}}^{\theta_{\rm max}}d\theta^{2}\Sigma\). We fix \(x_{B}=3\times 10^{-3}\) and choose \(Q^{2}=25\,\text{GeV}^{2}\), \(\sqrt{s}=105\,\text{GeV}\) (left panel) and \(Q^{2}=100\,\text{GeV}^{2}\), \(\sqrt{s}=318\,\text{GeV}\) (right panel). We present CGC predictions for both the proton (purple lines) and Au (orange), using the MV model with rcBK running and the GBW model. Both models predict similar shapes for the \(\theta\) spectrum, in which the small-\(\theta\) region is suppressed; these are strikingly different from the collinear expectations (red lines and green dots). In the figure, the collinear predictions (red lines) are obtained from the complete fixed-order \(\alpha_{s}\) calculation, without the \(Q\theta\ll Q\) approximation, using the CT18A [108] and EPPS21 [109] PDF sets for the proton and Au, respectively. To validate our collinear calculation and to estimate the size of the evolution effect in Eq. (5), we also run a Pythia 8.2 simulation [110] for the proton case, in which the LL resummation is performed. For large \(\theta\), the fixed-order calculations agree well with the Pythia simulation, while for small \(\theta\) values the resummation effects can be sizable but do not suppress the small-\(\theta\) region, due to the absence of a perturbative Sudakov factor in \(f_{\rm EEC}\) in the collinear factorization. The collinear prediction for Au closely follows that for the proton. The notable difference demonstrates that \(f_{\rm EEC}(x,\theta)\) can serve as a clean probe of the small-\(x\) phenomenon. For comparison, we also show, as purple circles, the predictions of the full CGC calculation derived in the Supplemental Material using the GBW model.
In Fig. 5, the proton spectrum turns into a plateau for large values of \(\theta\), which is expected from Eq. (10) when \(Q\theta\gg Q_{s}\). We can define a turning point around which the slope of the distribution starts to switch its monotonicity. The turning point allows us to estimate the size of the saturation scale \(Q_{s}\). For instance, from
the left panel of Fig. 5, the turning point for the proton is roughly around \(\theta\sim 0.15-0.2\), and thus \(Q_{s}\sim\theta Q\sim 0.75-1.0\,\mathrm{GeV}\), which is consistent with the value of \(\Lambda_{\mathrm{QCD}}\). Similarly, we estimate that the turning point for Au will be around \(\theta\sim 0.4-0.5\), and thus \(Q_{s}\sim\theta Q\sim 2-2.5\,\mathrm{GeV}\). The right panel of Fig. 5 is similar to the left, but with \(Q^{2}=100\,\mathrm{GeV}^{2}\). Since \(Q^{2}\) is larger, the distribution enters the plateau earlier, as expected. Now the turning point for Au is around \(\theta\sim 0.2-0.3\), which again indicates that \(Q_{s}\sim 2-3\,\mathrm{GeV}\), consistent with the \(Q^{2}=25\,\mathrm{GeV}^{2}\) case.
We can further introduce the nuclear modification factor \(R_{pA}=\frac{A^{-1}\Sigma_{A}(Q^{2},x_{B},\theta)}{\Sigma_{p}(Q^{2},x_{B},\theta)}\), which helps to reduce the systematics. In the collinear factorization, for \(\theta Q\gg\Lambda_{\rm QCD}\), the \(\theta\) distribution is determined by the matching coefficient \(I_{ij}\) as predicted by Eq. (8), which is independent of the incoming nucleus species. Thus taking the ratio \(R_{pA}\) reduces the impact of perturbative higher-order corrections as well as of possible non-perturbative hadronization effects, and the collinear factorization predicts an \(R_{pA}\) insensitive to the \(\theta\) values, as shown explicitly by the red lines in Fig. 6.
Once again, the small-\(x\) formalism changes this pattern, as observed in Fig. 6, where the modification factor \(R_{pA}\) is suppressed in the small-\(\theta\) region, while it converges toward unity as \(\theta\) becomes large and \(Q\theta\gg Q_{s}\).
_Conclusions._ In this manuscript, we have proposed the nucleon energy-energy correlator (nucleon EEC) as a new probe of the gluon saturation phenomenon in DIS at the future electron-ion colliders. In particular, we have shown that the \(\theta\)-shape of the nucleon EEC \(f_{\mathrm{EEC}}(x,\theta)\) behaves differently in the collinear factorization theorem and the CGC formalism. The drastic difference is due to the intrinsic transverse momentum of order \(Q_{s}\) induced by the non-linear small-\(x\) dynamics. We thus expect the \(f_{\mathrm{EEC}}\) to complement the other standard small-\(x\) processes and offer a great opportunity to pin down the onset of the gluon saturation phenomenon in \(eA\)-collisions.
The advantage of the nucleon EEC probe, as compared to other standard small-\(x\) processes, is that it is fully inclusive, involving neither fragmentation functions nor jet clustering. Therefore the observable is expected to be clean both theoretically and experimentally. Extensions to other observables that are induced by the intrinsic transverse dynamics of the nucleon/nucleus shall follow. For a polarized hadron beam, we can study the spin asymmetry by adding an azimuthal dependence to the energy operator \(\mathcal{E}\), and we expect different asymmetries to be predicted in the collinear and CGC frameworks. We hope that the results presented in this manuscript will motivate carrying out the proposed measurement at current and future electron-ion facilities, and stimulate further applications of the nucleon EEC in nuclear structure studies.
_Acknowledgement._ We are grateful to Farid Salazar, Hongxi Xing, Jian Zhou for useful discussions. This work is supported by the Natural Science Foundation of China under contract No. 12175016 (X. L.), No. 11975200 (H. X. Z.), and the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 (F.Y.).
## References
* (1) L. V. Gribov, E. M. Levin, and M. G. Ryskin, Phys. Rept. **100**, 1 (1983).
* (2) A. H. Mueller and J.-w. Qiu, Nucl. Phys. B **268**, 427 (1986).
* (3) A. H. Mueller, Nucl. Phys. B **335**, 115 (1990).
* (4) L. D. McLerran and R. Venugopalan, Phys. Rev. D **49**, 2233 (1994), hep-ph/9309289.
* (5) L. D. McLerran and R. Venugopalan, Phys. Rev. D **49**, 3352 (1994), hep-ph/9311205.
* (6) L. D. McLerran and R. Venugopalan, Phys. Rev. D **50**, 2225 (1994), hep-ph/9402335.
* (7) A. Accardi _et al._, Eur. Phys. J. A **52**, 268 (2016), 1212.1701.
* (8) R. Abdul Khalek _et al._, (2021), 2103.05419.
* (9) _Proceedings, Probing Nucleons and Nuclei in High Energy Collisions: Dedicated to the Physics of the Electron Ion Collider: Seattle (WA), United States, October 1 - November 16, 2018_, WSP, 2020, 2002.12333.
* (10) F. Gelis, E. Iancu, J. Jalilian-Marian, and R. Venugopalan, Ann. Rev. Nucl. Part. Sci. **60**, 463 (2010), 1002.0333.
* (11) E. Iancu and R. Venugopalan, _The Color glass condensate and high-energy scattering in QCD_ (, 2003), pp. 249-3363, hep-ph/0303204.
* (12) I. Balitsky, Nucl. Phys. B **463**, 99 (1996), hep-ph/9509348.
* (13) Y. V. Kovchegov, Phys. Rev. D **60**, 034008 (1999), hep-ph/9901281.
* (14) J. Jalilian-Marian, A. Kovner, A. Leonidov, and H. Weigert, Nucl. Phys. B **504**, 415 (1997), hep-ph/9701284.
Figure 6: \(R_{pA}\) as a function of \(\theta\), with \(x_{B}=3\times 10^{-3}\) using the MV model with rcBK running and collinear factorization.
* (15) J. Jalilian-Marian, A. Kovner, A. Leonidov, and H. Weigert, Phys. Rev. D **59**, 014014 (1998), hep-ph/9706377.
* (16) E. Iancu, A. Leonidov, and L. D. McLerran, Nucl. Phys. A **692**, 583 (2001), hep-ph/0011241.
* (17) E. Ferreiro, E. Iancu, A. Leonidov, and L. McLerran, Nucl. Phys. A **703**, 489 (2002), hep-ph/0109115.
* (18) J. L. Albacete, N. Armesto, J. G. Milhano, P. Quiroga-Arias, and C. A. Salgado, Eur. Phys. J. C **71**, 1705 (2011), 1012.4408.
* (19) T. Lappi and H. Mantysaari, Phys. Rev. D **88**, 114020 (2013), 1309.6963.
* (20) B. Ducloue, H. Hanninen, T. Lappi, and Y. Zhu, Phys. Rev. D **96**, 094017 (2017), 1708.07328.
* (21) F. Dominguez, B.-W. Xiao, and F. Yuan, Phys. Rev. Lett. **106**, 022301 (2011), 1009.2141.
* (22) F. Dominguez, C. Marquet, B.-W. Xiao, and F. Yuan, Phys. Rev. D **83**, 105005 (2011), 1101.0715.
* (23) A. H. Mueller, B.-W. Xiao, and F. Yuan, Phys. Rev. D **88**, 114010 (2013), 1308.2993.
* (24) A. Metz and J. Zhou, Phys. Rev. D **84**, 051503 (2011), 1105.1991.
* (25) F. Dominguez, J.-W. Qiu, B.-W. Xiao, and F. Yuan, Phys. Rev. D **85**, 045003 (2012), 1109.6293.
* (26) A. Dumitru, T. Lappi, and V. Skokov, Phys. Rev. Lett. **115**, 252301 (2015), 1508.04438.
* (27) A. Dumitru and V. Skokov, Phys. Rev. D **94**, 014030 (2016), 1605.02739.
* (28) D. Boer, P. J. Mulders, C. Pisano, and J. Zhou, JHEP **08**, 001 (2016), 1605.07934.
* (29) G. Beuf, Phys. Rev. D **96**, 074033 (2017), 1708.06557.
* (30) A. Dumitru, V. Skokov, and T. Ullrich, Phys. Rev. C **99**, 015204 (2019), 1809.02615.
* (31) H. Mantysaari, N. Mueller, F. Salazar, and B. Schenke, Phys. Rev. Lett. **124**, 112301 (2020), 1912.05586.
* (32) Y.-Y. Zhao, M.-M. Xu, L.-Z. Chen, D.-H. Zhang, and Y.-F. Wu, Phys. Rev. D **104**, 114032 (2021), 2105.08818.
* (33) R. Boussarie, H. Mantysaari, F. Salazar, and B. Schenke, JHEP **09**, 178 (2021), 2106.11301.
* (34) P. Caucal, F. Salazar, and R. Venugopalan, JHEP **11**, 222 (2021), 2108.06347.
* (35) Y.-Y. Zhang and X.-N. Wang, Phys. Rev. D **105**, 034015 (2022), 2104.04520.
* (36) P. Taels, T. Altinoluk, G. Beuf, and C. Marquet, JHEP **10**, 184 (2022), 2204.11650.
* (37) P. Caucal, F. Salazar, B. Schenke, and R. Venugopalan, JHEP **11**, 169 (2022), 2208.13872.
* (38) R. Boussarie, A. V. Grabovsky, L. Szymanowski, and S. Wallon, JHEP **09**, 026 (2014), 1405.7676.
* (39) R. Boussarie, A. V. Grabovsky, L. Szymanowski, and S. Wallon, JHEP **11**, 149 (2016), 1606.00419.
* (40) F. Salazar and B. Schenke, Phys. Rev. D **100**, 034007 (2019), 1905.03763.
* (41) R. Boussarie, A. V. Grabovsky, L. Szymanowski, and S. Wallon, Phys. Rev. D **100**, 074020 (2019), 1905.07371.
* (42) D. Boer and C. Setyadi, Phys. Rev. D **104**, 074006 (2021), 2106.15148.
* (43) E. Iancu, A. H. Mueller, and D. N. Triantafyllopoulos, Phys. Rev. Lett. **128**, 202001 (2022), 2112.06353.
* (44) E. Iancu, A. H. Mueller, D. N. Triantafyllopoulos, and S. Y. Wei, JHEP **10**, 103 (2022), 2207.06268.
* (45) Y. Hatta, B.-W. Xiao, and F. Yuan, Phys. Rev. Lett. **116**, 202301 (2016), 1601.01585.
* (46) T. Altinoluk, N. Armesto, G. Beuf, and A. H. Rezaeian, Phys. Lett. B **758**, 373 (2016), 1511.07452.
* (47) H. Mantysaari, N. Mueller, and B. Schenke, Phys. Rev. D **99**, 074004 (2019), 1902.05087.
* (48) Y. Hagiwara, C. Zhang, J. Zhou, and Y.-j. Zhou, Phys. Rev. D **104**, 094021 (2021), 2106.13466.
* (49) L. Zheng, E. C. Aschenauer, J. H. Lee, and B.-W. Xiao, Phys. Rev. D **89**, 074037 (2014), 1403.2413.
* (50) F. Bergabo and J. Jalilian-Marian, Nucl. Phys. A **1018**, 122358 (2022), 2108.10428.
* (51) F. Bergabo and J. Jalilian-Marian, Phys. Rev. D **106**, 054035 (2022), 2207.03606.
* (52) E. Iancu and Y. Mulian, (2022), 2211.04837.
* (53) M. Fucilla, A. V. Grabovsky, E. Li, L. Szymanowski, and S. Wallon, (2022), 2211.05774.
* (54) I. Kolbe, K. Roy, F. Salazar, B. Schenke, and R. Venugopalan, JHEP **01**, 052 (2021), 2008.04372.
* (55) X.-B. Tong, B.-W. Xiao, and Y.-Y. Zhang, (2022), 2211.01647.
* (56) T. Altinoluk, G. Beuf, A. Czajka, and A. Tymowska, (2022), 2212.10484.
* (57) X. Liu and H. X. Zhu, (2022), 2209.02080.
* (58) C. Basham, L. S. Brown, S. D. Ellis, and S. T. Love, Phys. Rev. Lett. **41**, 1585 (1978).
* (59) C. Basham, L. Brown, S. Ellis, and S. Love, Phys. Rev. D **19**, 2018 (1979).
* (60) H. Chen, I. Moult, X. Zhang, and H. X. Zhu, Phys. Rev. D **102**, 054012 (2020), 2004.11381.
* (61) D. M. Hofman and J. Maldacena, JHEP **05**, 012 (2008), 0803.1467.
* (62) A. Belitsky, S. Hohenegger, G. Korchemsky, E. Sokatchev, and A. Zhiboedov, Phys. Rev. Lett. **112**, 071601 (2014), 1311.6800.
* (63) A. Belitsky, S. Hohenegger, G. Korchemsky, E. Sokatchev, and A. Zhiboedov, Nucl. Phys. B **884**, 305 (2014), 1309.0769.
* (64) M. Kologlu, P. Kravchuk, D. Simmons-Duffin, and A. Zhiboedov, JHEP **01**, 128 (2021), 1905.01311.
* (77) H. Chen, I. Moult, J. Thaler, and H. X. Zhu, JHEP **07**, 146 (2022), 2205.02857.
* (78) K. Lee, B. Mecaj, and I. Moult, (2022), 2205.03414.
* (79) A. J. Larkoski, (2022), 2205.12375.
* (80) L. Ricci and M. Riembau, (2022), 2207.03511.
* (81) T.-Z. Yang and X. Zhang, (2022), 2208.01051.
* (82) C. Andres _et al._, (2022), 2209.11236.
* (83) H. Chen _et al._, (2022), 2210.10058.
* (84) E. Craft, K. Lee, B. Mecaj, and I. Moult, (2022), 2210.09311.
* (85) C. W. Bauer, S. Fleming, D. Pirjol, and I. W. Stewart, Phys. Rev. D **63**, 114020 (2001), hep-ph/0011336.
* (86) C. W. Bauer, D. Pirjol, and I. W. Stewart, Phys. Rev. D **65**, 054022 (2002), hep-ph/0109045.
* (87) C. W. Bauer and I. W. Stewart, Phys. Lett. B **516**, 134 (2001), hep-ph/0107001.
* (88) M. Beneke, A. P. Chapovsky, M. Diehl, and T. Feldmann, Nucl. Phys. B **643**, 431 (2002), hep-ph/0206152.
* (89) C. W. Bauer, S. Fleming, D. Pirjol, I. Z. Rothstein, and I. W. Stewart, Phys. Rev. D **66**, 014017 (2002), hep-ph/0202088.
* (90) N. A. Sveshnikov and F. V. Tkachov, Phys. Lett. B **382**, 403 (1996), hep-ph/9512370.
* (91) F. V. Tkachov, Int. J. Mod. Phys. A **12**, 5411 (1997), hep-ph/9601308.
* (92) G. P. Korchemsky and G. F. Sterman, Nucl. Phys. B **555**, 335 (1999), hep-ph/9902341.
* (93) C. W. Bauer, S. P. Fleming, C. Lee, and G. F. Sterman, Phys. Rev. D **78**, 034027 (2008), 0801.4569.
* (94) C. Marquet, B.-W. Xiao, and F. Yuan, Phys. Lett. B **682**, 207 (2009), 0906.1454.
* (95) B.-W. Xiao, F. Yuan, and J. Zhou, Nucl. Phys. B **921**, 104 (2017), 1703.06163.
* (96) J. Zhou, Phys. Rev. D **99**, 054026 (2019), 1807.00506.
* (97) Y. V. Kovchegov and H. Weigert, Nucl. Phys. A **789**, 260 (2007), hep-ph/0612071.
* (98) Y. V. Kovchegov and H. Weigert, Nucl. Phys. A **784**, 188 (2007), hep-ph/0609090.
* (99) K. J. Golec-Biernat, L. Motyka, and A. M. Stasto, Phys. Rev. D **65**, 074037 (2002), hep-ph/0110325.
* (100) J. L. Albacete and Y. V. Kovchegov, Phys. Rev. D **75**, 125021 (2007), 0704.0612.
* (101) I. Balitsky, Phys. Rev. D **75**, 014001 (2007), hep-ph/0609105.
* (102) E. Gardi, J. Kuokkanen, K. Rummukainen, and H. Weigert, Nucl. Phys. A **784**, 282 (2007), hep-ph/0609087.
* (103) I. Balitsky and G. A. Chirilli, Phys. Rev. D **77**, 014019 (2008), 0710.4330.
* (104) J. Berger and A. Stasto, Phys. Rev. D **83**, 034015 (2011), 1010.0671.
* (105) K. J. Golec-Biernat and M. Wusthoff, Phys. Rev. D **59**, 014017 (1998), hep-ph/9807513.
* (106) H. Fujii and K. Watanabe, Nucl. Phys. A **915**, 1 (2013), 1304.2221.
* (107) K. Golec-Biernat and S. Sapeta, JHEP **03**, 102 (2018), 1711.11360.
* (108) T.-J. Hou _et al._, Phys. Rev. D **103**, 014013 (2021), 1912.10053.
* (109) K. J. Eskola, P. Paakkinen, H. Paukkunen, and C. A. Salgado, Eur. Phys. J. C **82**, 413 (2022), 2112.12462.
* (110) T. Sjostrand _et al._, Comput. Phys. Commun. **191**, 159 (2015), 1410.3012.
# Supplemental Materials for "Nucleon Energy Correlators for the Color Glass Condensate"
Hao-Yu Liu
Center of Advanced Quantum Studies, Department of Physics, Beijing Normal University, Beijing, 100875, China
Xiaohui Liu
[email protected] Center of Advanced Quantum Studies, Department of Physics, Beijing Normal University, Beijing, 100875, China
Ji-Chen Pan
Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China School of Physics, University of Chinese Academy of Sciences, Beijing 100049, China Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Feng Yuan
[email protected] Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Hua Xing Zhu
[email protected] Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Zhejiang Institute of Modern Physics, Department of Physics, Zhejiang University, Hangzhou, 310027, China
###### Abstract
Here we present the \(\Sigma(Q^{2},x_{B},\theta)\)
\[\Sigma(Q^{2},x_{B},\theta)=\sum_{i}\int d\sigma(x_{B},Q^{2},p_{i})\,\frac{E_{i} }{E_{A}}\,\delta(\theta^{2}-\theta_{i}^{2})\,, \tag{1}\]
within the CGC formalism without the small \(\theta\) approximation. The calculation is achieved by evaluating the amplitude for a virtual photon \(\gamma^{*}\) splitting into a \(q\bar{q}\) pair, as shown in Fig. 1. Each of the quarks can enter the detector and contribute to the energy deposit \(E_{i}\); we assume it is always the \(\bar{q}\). The contribution from the \(q\) is obtained by multiplying by a symmetry factor at this order.
We work in the Breit frame, in which \(q=(0,0,0,-Q)\) for the virtual photon, and \(P=\frac{Q}{2x_{B}}(1,0,0,1)\) for the nucleus. The calculation is straightforward which gives
\[\Sigma(Q^{2},x_{B},\theta)=\sum_{\lambda=l,t}f_{\lambda}\Sigma_{\lambda}^{ \gamma^{*}}(Q^{2},x_{B},\theta)\,, \tag{2}\]
where we sum over the virtual photon polarizations with the flux factors \(f_{t}=1-y+\frac{y^{2}}{2}\) for the transverse and \(f_{l}=1-y\) for the longitudinal polarization, respectively, with \(y=\frac{Q^{2}}{x_{B}s}\) the inelasticity. Here, for the transverse photon contribution, we find
\[\Sigma_{t}^{\gamma^{*}}(Q^{2},x_{B},\theta) = \sum_{q}\frac{2N_{c}\alpha^{2}e_{g}^{2}}{\pi^{2}x_{B}Q^{2}}S_{ \perp}\int dzd^{2}\vec{k}_{t}\frac{d^{2}\vec{l}_{t}}{(2\pi)^{2}}F_{g,x_{B}}( \vec{l}_{t})\left[z^{2}+(1-z)^{2}\right]\Bigg{|}\frac{\vec{k}_{t}}{\vec{k}_{t }^{2}+\Delta^{2}}-\frac{\vec{k}_{t}-\vec{l}_{t}}{(\vec{k}_{t}-\vec{l}_{t})^{2} +\Delta^{2}}\Bigg{|}^{2} \tag{3}\] \[\times\left[\frac{\vec{k}_{t}^{2}+(1-z)^{2}Q^{2}}{(1-z)Q}\frac{x _{B}}{Q}\right]\frac{1}{2\theta}\delta\left(\theta-\tan^{-1}\frac{2k_{t}(1-z)Q }{k_{t}^{2}-(1-z)^{2}Q^{2}}\right)\theta\left(\frac{\vec{k}_{t}^{2}+(1-z)^{2} Q^{2}}{(1-z)Q}<x_{B}\frac{Q}{x_{B}}\right)\,,\]
where \(k\) is the momentum for \(\bar{q}\) and \(1-z=\frac{k^{-}}{Q}\) the momentum fraction with respect to the photon. We defined \(\Delta^{2}=z(1-z)Q^{2}\). In the last line, the first term is the energy weight, derived using the on-shell condition \(k^{+}k^{-}-\vec{k}_{t}^{2}=0\), normalized to the incoming nucleus energy \(Q/x_{B}\) in the Breit frame. The second term defines the \(\theta\) angle with respect
to the \(z\)-axis, with \(\theta<\pi/2\) for \(k_{z}>0\) while \(\theta>\pi/2\) for \(k_{z}<0\). The last \(\theta\) function ensures that \(2k^{0}\) cannot exceed the incoming parton momentum \(x_{B}P=x_{B}\frac{Q}{x_{B}}=Q\) [1]. As for the longitudinal polarization, we have
\[\Sigma_{l}^{\gamma^{*}}(Q^{2},x_{B},\theta) = \sum_{q}\frac{2N_{c}\alpha^{2}e_{q}^{2}}{\pi^{2}x_{B}Q^{2}}S_{ \perp}\int dzd^{2}\vec{k}_{t}\frac{d^{2}\vec{l}_{t}}{(2\pi)^{2}}F_{g,x_{B}}( \vec{l}_{t})4z^{2}(1-z)^{2}Q^{2}\left|\frac{1}{\vec{k}_{t}^{2}+\Delta^{2}}- \frac{1}{(\vec{k}_{t}-\vec{l}_{t})^{2}+\Delta^{2}}\right|^{2} \tag{4}\] \[\times\left[\frac{\vec{k}_{t}^{2}+(1-z)^{2}Q^{2}}{(1-z)Q}\frac{x _{B}}{Q}\right]\frac{1}{2\theta}\delta\left(\theta-\tan^{-1}\frac{2k_{t}(1-z)Q }{k_{t}^{2}-(1-z)^{2}Q^{2}}\right)\theta\left(\frac{\vec{k}_{t}^{2}+(1-z)^{2} Q^{2}}{(1-z)Q}<x_{B}\frac{Q}{x_{B}}\right)\,.\]
Now we consider the small-\(\theta\) limit. As \(\theta\to 0\), the contribution is dominated by \(1-z\sim\frac{k_{t}^{2}}{Q^{2}}\). We define the variable \(\xi\) through \((1-z)Q=\frac{\xi}{1-\xi}\frac{k_{t}^{2}}{Q}\) and only keep the leading contribution in \(z\to 1\) limit to find
\[\Sigma(Q^{2},x_{B},\theta) = \sum_{q}\frac{4\pi\alpha^{2}e_{q}^{2}}{Q^{4}}\,\frac{N_{c}S_{ \perp}}{8\pi^{4}}\frac{1}{\theta^{2}}\int\frac{d\xi}{\xi}d^{2}\vec{l}_{t}(1- \xi)k_{t}^{2}(k_{t}-l_{t})^{2}\left|\frac{\vec{k}_{t}}{\xi\vec{k}_{t}^{2}+(1- \xi)(\vec{k}_{t}-\vec{l}_{t})^{2}}-\frac{\vec{k}_{t}-\vec{l}_{t}}{(\vec{k}_{t} -\vec{l}_{t})^{2}}\right|^{2} \tag{5}\] \[\times\theta\left(\frac{1-\xi}{\xi}<1\right)\,F_{g,x_{B}}(\vec{l} _{t})\,,\]
where \(k_{t}=\frac{1-\xi}{\xi}\frac{Q}{2}\theta\). We thus identify \(f_{\rm EEC}(x_{B},\theta)\) at this order in the small-\(x\) formalism as
\[f_{q,{\rm EEC}}(x_{B},\theta)=\frac{N_{c}S_{\perp}}{8\pi^{4}}\frac{1}{\theta^ {2}}\int_{\xi_{\rm cut}}^{1}\frac{d\xi}{\xi}d^{2}\vec{l}_{t}(1-\xi)k_{t}^{2}(k _{t}-l_{t})^{2}\left|\frac{\vec{k}_{t}}{\xi\vec{k}_{t}^{2}+(1-\xi)(\vec{k}_{t} -\vec{l}_{t})^{2}}-\frac{\vec{k}_{t}-\vec{l}_{t}}{(\vec{k}_{t}-\vec{l}_{t})^{2} }\right|^{2}\,F_{g,x_{B}}(\vec{l}_{t})\,. \tag{6}\]
The \(\xi_{\rm cut}\) is determined from the last \(\theta\)-function in Eq. (5).
|
2305.14114 | Bipolar thermoelectric superconducting single-electron transistor | Thermoelectric effects in normal metals and superconductors are usually very
small due to the presence of electron-hole symmetry. Here, we show that
superconducting junctions brought out of equilibrium manifest a sizable bipolar
thermoelectric effect that stems from a strong violation of the detailed
balance. To fully control the effect, we consider a thermally biased SIS'IS
junction where the capacitance of the central S' region is small enough to
establish a Coulomb blockade regime. By exploiting charging effects we are able
to tune the Seebeck voltage, the thermocurrent, and thereby the power output of
this structure, via an external gate. We then analyse the main figures of merit
of bipolar thermoelectricity and we prospect for possible applications. | Sebastiano Battisti, Giorgio De Simoni, Luca Chirolli, Alessandro Braggio, Francesco Giazotto | 2023-05-23T14:38:18Z | http://arxiv.org/abs/2305.14114v1 | # Bipolar thermoelectric superconducting single-electron transistor
###### Abstract
Thermoelectric effects in normal metals and superconductors are usually very small due to the presence of electron-hole symmetry. Here, we show that superconducting junctions brought out of equilibrium manifest a sizable bipolar thermoelectric effect that stems from a _strong_ violation of the detailed balance. To fully control the effect, we consider a thermally biased _SIS'IS_ junction where the capacitance of the central \(S^{\prime}\) region is small enough to establish a Coulomb blockade regime. By exploiting charging effects we are able to tune the Seebeck voltage, the thermocurrent, and thereby the power output of this structure, via an external gate. We then analyse the main figures of merit of bipolar thermoelectricity and we prospect for possible applications.
_Introduction--_ Thermal transport and quantum thermodynamics at the nanoscale have recently attracted a growing interest [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19], thanks to the opportunity the thermoelectric effect offers to manipulate heat and control the energy efficiency of nanodevices [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. In the linear regime thermoelectricity requires a broken electron-hole (EH) symmetry, which also implies a non-reciprocal \(IV\) characteristic, i.e., \(I(V,\Delta T)\neq-I(-V,\Delta T)\), where \(\Delta T\) is the temperature difference. Indeed metals, that are almost electron-hole (EH) symmetric, show nearly negligible Seebeck coefficients [33] and present zero thermovoltages in the superconducting phase. However, in the non-linear regime it has been demonstrated [34; 35], that superconducting _SIS'_ tunnel junctions with a sufficiently suppressed Josephson coupling exhibit a sizable thermopower due to the spontaneous breaking of EH symmetry, yielding an effective Seebeck coefficient (\(\mathcal{S}\)) as large as \(\sim 10^{5}\) times its value in the normal state [36; 37]. At the same time, the EH symmetry determines the full _bipolarity_ of the effect with reciprocal \(IV\) characteristics. The bipolar thermoelectric effect emerges when a strong temperature difference is suitably applied, and in the presence of strong asymmetry in the energy gaps of the junction.
In this Letter, we consider an _SIS'IS_ structure where a central superconducting (SC) island featuring strong Coulomb interaction is sandwiched between two SC leads via tunnel barriers. In such a system, the origin of the bipolar thermoelectric properties greatly differs from that of standard thermoelectricity in quantum dots, and lies in the _strong_ violation of the detail balance induced by the temperature difference in the junction and the interacting nature of BCS theory. Furthermore, we exploit the gating properties of the Coulombic island to control the thermoelectric performances of the engine. This unique electrical tunability differs from other platforms [38], and can be relevant for on-chip energy harvesting and other energy management purposes in superconducting quantum processors and radiation sensors [39].
_Model--_The _SIS'IS_ structure under investigation is shown in Fig.1(a), and consists of two superconducting (SC) leads (\(L\),\(R\), red part in Fig.1(a)) with SC gap \(\Delta\) put in tunnel contact with a Coulombic island (central blue part in Fig.1(a)) with a different SC gap \(\Delta_{is}\), via two identical barriers of resistance \(R_{L/R}\). In order to observe bipolar thermoelectricity the leads are chosen to have a larger gap than the island, \(\Delta>\Delta_{is}\), and they are kept at a temperature \(T_{hot}>T_{cold}\) larger than the island temperature, \(T_{is}\equiv T_{cold}=0.2~{}T_{C}\), where \(T_{C}\) is the critical temperature of the superconducting leads. The tunneling barriers are assumed to be resistive enough to make the Josephson energy negligible with respect to thermal energy, thus allowing the Josephson coupling to be neglected [41]. Yet, in order to observe Coulomb blockade we assume the charging energy of the central island,
Figure 1: **(a)**: Scheme of the _SIS’IS_ transistor. The red-coloured parts show the hot superconductors and the blue-coloured part shows the cold one. \(V\) and \(V_{G}\) denote respectively the source-drain and gate voltages. **(b)**: Sequential tunneling rates versus energy for different values of \(T_{hot}\) at \(T_{cold}=0.2~{}T_{C}\). Here we set \(\Delta=220~{}\mu\)eV (corresponding to Al, aluminum), \(\Delta_{is}=\Delta/2\), which can be obtained using a normal metal-superconducting bilayer [36; 40], and all identical barrier resistances \(R=1\)M\(\Omega\).
\(E_{C}=e^{2}/2C_{tot}\), with \(C_{tot}\) the total island capacitance, large enough that \(E_{C}\gg k_{B}T_{l}\) with \(l=is,L,R\) and \(E_{C}\gtrsim\Delta\).
For sufficiently resistive barriers the full transport properties of the system can be described through the rates \(\Gamma_{j}(\delta U)\), with \(j=R,L\), that describe the tunneling probability through the \(j\)-th barrier by the Fermi golden rule [42; 43]
\[\Gamma_{j}=\frac{1}{e^{2}R_{j}}\int_{-\infty}^{\infty}\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
\(E_{C}(n-q_{is}/e)^{2}/2\) the electrostatic energy, which depends on the offset charge on the island \(q_{is}=C_{g}V_{g}+\sum_{j}C_{j}V_{j}\), with \(C_{g}\) the gate capacitance and \(C_{j}\) the \(j\)-th barrier capacitance (for which \(C_{tot}=C_{g}+C_{L}+C_{R}\)). In the stationary limit [50], the current can be simply computed in the right lead \(I=I_{R}=-I_{L}=e\sum_{n}[\Gamma_{R}^{n,n+1}-\Gamma_{R}^{n,n-1}]P_{n}^{0}\) where \(P_{n}^{0}\) is the stationary probability of island charge states. This general numerical approach can be further simplified in the Coulomb blockade regime \(k_{B}T\ll E_{C}\) by noting that the dominant contribution to the transport at one resonance is associated only to the tunneling rates involving neighboring charge states \(n-1\rightleftharpoons n\). The current in such case can be written as
\[I_{n}=e\frac{\Gamma_{L}^{f}(n-1)\Gamma_{R}^{f}(n)-\Gamma_{L}^{b}(n)\Gamma_{R} ^{b}(n-1)}{\Gamma_{L}^{f}(n-1)+\Gamma_{R}^{f}(n)+\Gamma_{L}^{b}(n+\Gamma_{R}^ {b}(n-1)}, \tag{3}\]
where we use a shortened notation for the rates \(\Gamma_{j}^{f/b}(n)=\Gamma_{j}(\pm\delta U_{n,n\pm 1})\). Since we are mainly interested in the deep subgap regime in the bias range \(|eV|<2\Delta\) with \(E_{C}=4\Delta\), we approximate the current by considering the dominant contribution of two neighboring Coulomb resonances, i.e., \(I=I_{n}+I_{n+1}+\mathcal{O}(e^{-E_{c}/2k_{B}T_{hot}})\).
_Coulomb diamonds_-- Figure 2(a),(b) show the current \(I\) as a function of the source-drain bias \(V\) and the gate-tunable offset charge \(N_{G}=C_{g}V_{g}/e\). Typical Coulomb diamonds appear, which display periodicity in the offset charge \(N_{G}\) in units of the electron charge \(e\). The system does not present any even-odd effect since the average Cooper pair recombination rate in our system \(\Gamma_{r}\simeq 16\)kHz [51] is much smaller than the tunneling rates \(\Gamma_{j}\sim I/e\) (inverse average electron dwelling time in the island). Coulomb diamonds at equilibrium are shown in Fig. 2(a). As a guide for the eye, we show with black dashed lines the boundaries of the Coulomb diamonds for \(\Delta=0\), where the electrostatic energies vanish, \(\delta U_{n,n\pm 1}(N_{G},V)\equiv 0\). As expected, the SC gap pushes the boundaries of the Coulomb diamonds up in energy, and charge transport is suppressed in the \((N_{G},V)\) plane domains satisfying (\(eV<2\Delta+2\Delta_{is}\approx 3\Delta\)).
In Fig. 2(b) we show the results for the non-equilibrium case, \(T_{hot}=0.7~{}T_{C}>T_{cold}\). Subgap conduction channels become clearly visible, as thermally excited states promote the stronger emergence of the matching peak resonances. At integer \(N_{G}\) and \(|eV|\sim 3\Delta\), where the transport is dissipative (\(IV>0\)) even if fully inside the Coulomb blockade diamond, we clearly see the appearance of yellow (blue) crosses at positive (negative) \(V\). These features stems from the enhancement of the _negative energy_ peak in the tunneling rate for electrostatic energy \(\delta U\approx-\delta U^{*}\) [see Fig. 1(b)], and are also a direct consequence of the strong violation of the detailed balance, even if their nature is still dissipative. More intriguingly, for half-integer \(N_{G}\) and \(eV\sim\Delta/2\), the sign of the current becomes opposite to the bias \(IV<0\), as shown in the blow-up of Fig. 2(c). This behavior is a signature of thermoelectricity and it appears _only_ when a finite temperature difference is applied between the island and the hot leads. These subgap structures are equivalently present at positive and negative bias for the _same_ temperature difference, showing the full _bipolar_ character that is enforced by the EH symmetry of the unbiased system. The emerging bipolar thermoelectric effect is similar to SIS' systems [52]. Furthermore, our bipolar thermoelectric superconducting transistor offers the possibility to be manipulated thanks to Coulomb interaction and associated gating effects. We also notice that, unlike a conventional quantum-dot thermoelectric effect [53; 54] that is activated by a temperature gradient _between_ the leads, the present effect appears when the leads have the _same_ temperature higher than the Coulomb island [45].
Figure 3(a) displays cuts of Fig. 2(b) at different \(V\). In a region around half-integer gate charge \(N_{G}\) we clearly see a change of the sign of the current around the Coulomb resonance opposing to the bias (thermoelectric effect). Notably, the sign of the thermoelectric current does not change passing through the resonant value due to the unique bipolar nature, unlike in the conventional (unipolar) thermoelectric effect in a quantum dot [55].
In Fig. 3(b) we show cuts of Fig. 2(b) at fixed \(N_{G}\). We first focus on the zero-bias behavior: as we vary \(N_{G}\) towards half-integer values the zero-bias conductance (ZBC) \(G_{0}=dI/dV|_{V=0}\) becomes negative at the critical
Figure 3: Out-of-equilibrium (\(T_{hot}=0.7T_{C}\)) current \(I\) as a function of: (**a**) \(N_{G}\) for different values of \(V\), and (**b**) \(V\) for different values of \(N_{G}\) (respectively horizontal and vertical cuts of Fig. 2(**b**)). The dashed lines correspond to the prediction of the simplified model Eq. (3), showing its validity in the case of \(E_{C}\gg eV\). **(c)**: Out-of-equilibrium zero-bias conductance as a function of \(N_{G}\) calculated for the curves in (**b**).
value \(N_{G}^{*}\approx 4.42\) [see Fig. 3(c)]. This behavior blatantly suggests the spontaneous breaking of the EH symmetry of the system [52, 34], and highlights the unique capability of the gate control in our system that, differently from other platforms [38], allows to continuously tune the emergent bipolar thermoelectric properties. A negative ZBC, together with the condition \(I(0,\Delta T)=0\) as dictated by EH symmetry, implies the existence of thermoelectricity (\(IV<0\)) and the existence of a Seebeck voltage \(V_{S}\), since at high-biases \(eV\gg 3\Delta\) the system necessarily becomes again dissipative, \(IV>0\). Interestingly, this implies that the Seebeck voltage (i.e., open circuit voltage defined as \(I(V_{S})=0\)) is expected to be dependent on the gate voltage, \(V_{S}(N_{G})\), with clear consequences on the gate tunability of the thermoelectric performance. At finite values of \(V\) the current exhibits a peak changing in sign (thermoelectricity) when \(N_{G}\) approaches half-integer values. The system shows an absolute negative conductance and thereby thermopower, \(\dot{W}=-IV>0\). Furthermore, for some values of \(N_{G}\) the \(IV\) curve presents more than one resonant peak (see yellow line): this is a consequence of the Coulomb blockade, since the island electrostatic energy differences \(\delta U\) for different rates depends on the gate \(N_{G}\) and bias \(V\), and correspondingly the matching peaks resonance of the dominant rates appear split for non resonant values \(N_{G}\neq 1/2+n\) of the \(IV\) curve. Note that a similar double-peak structure can be observed also as a function of \(N_{G}\), as shown in Fig. 3(b).
_Thermoelectric figures of merit_--The intrinsic nonlinear nature of the above effect does not allow to describe the thermoelectric figures of merit of our system via a linear thermoelectric approach [56]. However, we can still define a Seebeck voltage \(V_{S}\) and a nonlinear Seebeck coefficient \(\mathcal{S}=V_{S}/\Delta T\) with \(\Delta T=T_{hot}-T_{cold}\). We stress that EH symmetry implies two Seeback voltages, \(\pm V_{S}\), and a _bipolar_\(\mathcal{S}\)[34]. Figure 4(a) shows \(V_{S}\) (solid line, left scale) and \(|\mathcal{S}|\) (dashed line, right scale) as a function of \(N_{G}\) for different temperatures of the leads. By changing \(T_{hot}\), the Seebeck voltage shows horn-like nonlinear features which are even higher at slightly lower value of \(T_{hot}\). By inspection of Fig. 3(b) we recognize that the yellow curve has two peaks and for certain values of \(\Delta T\) the second peak can even cross the \(I=0\) axis, returning a higher open circuit (Seebeck) voltage. Analogously, \(\mathcal{S}\) is also similarly affected and its maximal value is not necessarily associated to the maximal thermovoltage (due to the nonlinearities it is not even necessarily associated with the highest temperature difference \(\Delta T\)). In Fig. 4(b) the maximum thermocurrent \(I_{max}(N_{G})=\max_{0<V<2\Delta/e}\lvert I(V,N_{G})\rvert\) as a function of \(N_{G}\) is shown to gradually become zero while lowering \(T_{hot}\), as expected, since the temperature difference is not enough to trigger the bipolar thermoelectricity [37, 36]. The thermoelectric generator character of the transistor appears when closing the circuit on a load resistor. Figure 4(c) displays the output power of the structure as a function of \(N_{G}\) for different values of the load resistor, demonstrating the ability of fine gating control of the output power. The maximum achievable output power is typically associated with the smallest possible load resistance, and turns out to depend on several parameters.
_Conclusions_--We theoretically proposed and analysed a bipolar thermoelectric superconducting single-electron transistor that enables tuning and control of the bipolar thermoelectric effect through an applied gate voltage. The interplay between Coulomb blockade and out-of-equilibrium thermoelectricity finds its origin in the strong violation of the detail balance, which is triggered by different SC gaps, a finite temperature difference and, crucially, by the interacting nature intrinsic to the BCS theory. We investigated the performance of a fully gate-tunable heat engine that can provide, with realistic parameters, a nonlinear Seebeck coefficient up to \(\sim 3\) mV/K at subKelvin temperatures. The effect can be implemented in a device that can produce gate-controlled single-electron thermoelectricity in a fully superconducting design, thereby fostering interest for on-chip energy harvesting and management, single-charge electronics, and single-photon detection [39].
Figure 4: **a**: Seebeck voltage (left y-axis) and nonlinear Seebeck coefficient \(|\mathcal{S}|\) (right y-axis) versus \(N_{G}\) for different values of \(T_{hot}\). **b**: Maximum thermocurrent versus \(N_{G}\) for different values of \(T_{hot}\). **c**: Power output \(P_{out}=-\widetilde{I}\)\(\widetilde{V}\) versus \(N_{G}\) for different values of the load resistor. \(\widetilde{I}\) and \(\widetilde{V}\) are the solutions of the intersection of the \(IV\) characteristics with the load line of the resistor. Note that among all possible solutions only the electrically stable one, i.e., that with \(dI/dV>0\), can be operated by the engine [41, 34].
We acknowledge the EU's Horizon 2020 Research and Innovation Framework Programme under Grant No. 964398 (SUPERGATE), No. 101057977 (SPECTRUM), and the PNRR MUR project PE0000023-NQSTI for partial financial support. A.B. acknowledges the Royal Society through the International Exchanges between the UK and Italy (Grants No. IEC R2 192166.).
|
2304.12894 | An evolutionary model for V404 Cyg system | V404 Cyg is a Low Mass X-Ray Binary (LMXB) system that has undergone
outbursts in 1938, 1989, and 2015. During these events, it has been possible to
determine relevant data of the system; such as the masses of the compact object
(a black hole, BH) and its companion, the orbital period, the companion
spectral type, and luminosity class, among others. Remarkably, the companion
star has a metallicity appreciably higher than solar. All these data allow us
to construct theoretical models to account for its structure, looking for its
initial configuration and predicting its final fate. Assuming that the BH is
already formed when the primary star reaches the Zero Age Main Sequence, we
used our binary evolution code for such a purpose. We obtained that the present
characteristics of the system are nicely accounted for by a model with initial
masses of 9 solar masses for the BH, 1.5 solar masses for the companion star,
an initial orbital period of 1.5 d and considering that at most 30% of the mass
transferred by the donor is accreted by the BH. The metallicity of the donor
for our best fit was Z = 0.028 (twice solar metallicity). We also studied the
evolution of the BH spin parameter assuming that initially, it is not rotating.
Remarkably, the spin of the BHs in our models is far from reaching the
available observational determination. This may indicate that the BH in V404
Cyg is initially spinning, a result that may be relevant for understanding the
formation BHs in the context of LMXB systems. | L. Bartolomeo Koninckx, M. A. De Vito, O. G. Benvenuto | 2023-04-25T15:06:06Z | http://arxiv.org/abs/2304.12894v2 | # An evolutionary model for the V404 Cyg system
###### Abstract
Context:V404 Cyg is a low mass X-Ray binary (LMXB) system that has undergone outbursts in 1938, 1989, and 2015. During these events, it has been possible to make determinations for the relevant data of the system. This data include the mass of the compact object (i.e., a black hole; BH) and its companion, the orbital period, the companion spectral type, and luminosity class. Remarkably, the companion star has a metallicity value that is appreciably higher than solar. All these data allow for the construction of theoretical models to account for its structure, determine its initial configuration, and predict its fate. Assuming that the BH is already formed when the primary star reaches the zero age main sequence, we used our binary evolution code for this purpose. We find that the current characteristics of the system are nicely accounted for by a model with initial masses of 9 M\({}_{\odot}\) for the BH, 1.5 M\({}_{\odot}\) for the companion star and an initial orbital period of 1.5 d, while also considering that at most 30% of the mass transferred by the donor is accreted by the BH. The metallicity of the donor for our best fit is \(Z=0.028\) (twice solar metallicity). We also studied the evolution of the BH spin parameter, assuming that is not rotating initially. Remarkably, the spin of the BHs in our models is far from reaching the available observational determination. This may indicate that the BH in V404 Cyg was initially spinning, a result that may be relevant for understanding the formation BHs in the context of LMXB systems.
## 1 Introduction
Close binary systems with a black hole (BH) component have been studied since the first detection of accreting BHs in binary systems by Roche lobe overflow (RLOF) in the 1960 decade, alongside with the first missions containing X-ray detectors (see, e.g., Giacconi et al. 1962; Lewin et al. 1967). The material lost by a normal companion of low mass, known as low mass X-Ray binary (LMXB) or of high mass, namely, a high mass X-Ray binary (HMXB) forms an accretion disk around the BH. Mass and angular momentum are thereby transferred to the BH, releasing an intense X-ray flux. This particular group of binary systems has been studied from both, an observational and a theoretical point of view (among the most recent works: e.g., You et al. 2023, Mikolajewska et al. 2022, Mata Sanchez et al. (2021), Fukumura et al. 2021, Langer et al. 2020, Ivanova et al. 2017)
V404 Cyg is a member of the LMXB family. It was discovered by the space satellite _Ginga_ in May of 1989 as the transient X-ray source GS 2023+338 (Makino 1989). Its optic counterpart was identified as the variable star V404 Cyg (Wagner et al. 1989). Later, Charles et al. (1989) identified the source as an LMXB. The binary has an orbital period of \(P=6.473\pm 0.001\) d (Casares et al. 1992) and a mass function of \(f(M)=6.08\pm 0.06\) M\({}_{\odot}\) (Casares & Charles 1994). This high value of the mass function suggests the nature of the accretor is that of a BH. The mass ratio was determined by Casares et al. (1992) as \(q\equiv M_{\rm d}/M_{\rm BH}=0.06^{+0.004}_{-0.008}\), with \(M_{\rm d}\) and \(M_{\rm BH}\) as the masses of the donor star and the BH, respectively. The companion was confirmed as a giant star when Khargharia et al. (2010) determined its spectral type as K3 III. They also found the binary's inclination, \(i=67^{+5}_{-1}\). Knowing all these parameters, the determination of the masses of the components is immediate, namely, \(M_{\rm d}=0.54\pm 0.05\) M\({}_{\odot}\) and \(M_{\rm BH}=9.0^{+0.2}_{-0.6}\) M\({}_{\odot}\), for the donor and the BH, respectively. In 2009 a precise estimation of the distance was taken, giving a value of \(d=2.39\pm 0.14\) kpc by the measure of the parallax of the system on radio waves (Miller-Jones et al. 2009). Ziolkowski & Zdziarski (2018) (henceforth ZZ18) presented, based on these observational data, the radius of the donor star, \(R_{\rm d}=5.50^{+0.17}_{-0.18}\) R\({}_{\odot}\), the effective temperature (from the spectral type) \(T_{\rm eff}=4274^{+116}_{-113}\) K and the luminosity of the donor star, \(L_{\rm d}=8.7^{+1.7}_{-1.4}\) L\({}_{\odot}\).
Many studies have sought to calculate the accretion rate onto the compact object during the various outburst that had taken place in 1938, 1989, and 2015 (Chen et al. 1997; Zycki et al. 1999; Motta et al. 2017). The system presented two episodes of outburst in 2015, namely: in July (Barthelmy et al. 2015) and in December (Marti et al. 2016; Motta et al. 2016). Due to the large absorption reported (Kimura et al. 2016), it was challenging to construct an X-ray luminosity curve this year and, thus, to estimate a mass accreted during this event. With the information provided by the above-mentioned authors, ZZ18 stated that the value \(\langle\dot{M}_{\rm BH}\rangle=4.0\times 10^{-10}\) M\({}_{\odot}\)\({\rm\ yr}^{-1}\) is most likely an upper limit for the accretion rate onto the BH in V404 Cyg. This value is lower than the estimated mass loss rate for the donor star \(\langle-\dot{M}_{\rm d}\rangle=1.1\times 10^{-9}\) M\({}_{\odot}\)\({\rm\ yr}^{-1}\) that was predicted using the equa
tion 25a from Webbink et al. 1983 for the system V404 Cyg. ZZ18 also re-obtained these values using an updated evolutionary model. The difference between the amount of mass lost by the donor and the mass accreted by the BH has been attributed to the mass that gets lost from the system, advecting angular momentum along with it. These mass and angular momentum losses make the system evolve in a non-conservative way. In addition, there are observational indications that V404 Cyg is currently losing mass (Munoz-Darias et al. 2016).
In this work, we consider non-conservative close binary evolutionary models with the objective of reproducing observational data available for the main parameters of V404 Cyg, with the aim of obtaining a possible progenitor for the system. We also predict possible results for the evolution of the system's donor and show some theoretical parameters that we expect to be observationally measured in the future, such as the time derivative of the orbital period. On the other hand, we also study the evolution of the spin parameter in the context of our models. We specify the numerical code used in Section 2, show the results of our models in Section 3, and present our conclusions in Section 4.
## 2 The binary evolution code
The main tool for this work is the binary evolutionary code described in Benvenuto & De Vito (2003); De Vito & Benvenuto (2012); Benvenuto et al. (2012). When components remain detached, it works as a standard evolutionary code for isolated stars. In the case of semi-detached configurations, our code includes the mass transfer rate, \(\dot{M}_{1}\), as a new variable in the difference equations. Then, the mass of the donor is \(M_{1}=M^{1,prev}+\dot{M}_{1}\Delta t\), where \(M_{1,prev}\) is the mass of the donor in the previous stage and \(\Delta t\) is the time step. As \(M_{1}\) appears in the equations of the structure of the entire model, \(\dot{M}_{1}\) is treated as a _global_ variable to be solved. This is in contrast with all the other variables that are local and meant to be relaxed. When handling the corresponding generalized Henyey matrix, this treatment involves a non-zero column. The resulting matrix equation can be solved with a slight modification of the standard algebra. This solves the structure of the donor star, the orbital evolution, and the value of the mass transfer rate simultaneously in a fully implicit way, which makes the algorithm numerically stable. A detailed explanation of the procedure is given in Benvenuto & De Vito (2003). We assume that the mass is only transferred via Roche lobe overflow (RLOF). As for opacities, we used OPAL libraries (Iglesias & Rogers 1996) for temperatures of \(T\geq 10^{4}\) K and molecular opacities computed by Ferguson et al. (2005) for lower values of \(T\). A detailed description of how the code works may be found in Benvenuto et al. (2012).
Regarding the abundances assumed for our models, in the first step of our calculations, we followed ZZ18 to employ the solar metallicities. We have set them to \(X=0.710\), \(Y=0.276\), and \(Z=0.014\), whereas the mixing length parameter has been set to \(\alpha_{\rm MLT}=1.50\). With these values, our code is able to compute a solar structure compatible with observations at its present age. We remark here that these abundances are slightly different from those given in Asplund et al. (2021) who measured values of \(X=0.7438\pm 0.0054\), \(Y=0.2423\pm 0.0054\), and \(Z=0.0139\pm 0.0006\) at the surface of the Sun. If we set the abundances to these values, the Sun would be slightly under-luminous by 0.05 dex, which is a small discrepancy since the physical ingredients employed by Asplund et al. (2021) are different from ours. So, we decided to slightly adjust the initial abundances to produce a Sun compatible with observations. In the second step of the model calculations, we took into account the determination of abundances for the donor star in V404 Cyg, presented in Gonzalez Hernandez et al. (2011), and we employed \(X=0.71\), \(Y=0.262\) and \(Z=0.028\).
### Non-conservative mass transfer and orbital evolution
For cases of conservative binary evolution calculations, total mass, and orbital angular momentum remain as constants. However, in analyzing the difference between the estimated mass loss rate from the donor component and the estimated accretion rate on the BH from V404 Cyg, it is commonly assumed that mass gets lost in a non-conservative mass transfer episode advecting angular momentum away from the system (Webbink et al. 1983; Chen et al. 1997; Zycki et al. 1999; Motta et al. 2017; ZZ18). This is expected to occur in some astrophysical scenarios of interest, so this phenomenon was included in the calculations.
We employed the usual equation to compute the evolution of the orbital semi-axis. This can be obtained using the definition of the angular moment combined with Kepler's Third Law. The episode of non-conservative mass transfer is specified by two free parameters, as in Rappaport et al. (1982, 1983): 1) the fraction \(\beta\) of mass lost by the primary star1 that is accreted by the secondary star, and 2) the specific angular momentum of matter lost away from the system \(\alpha\) in units of the same quantity for the compact object. We assume that the orbit is always well approximated by a circle of radius \(r_{\rm orb}\) (where \(r_{\rm orb}\) is a function of time) and we have neglected the rotational angular momentum of the components. In the case where the angular momentum is lost only by mass ejection from the system, we find:
Footnote 1: We will name as primary (i.e., with the sub-index 1) to the object that starts losing mass. In this case, we will name this way to the donor star and we will refer to the BH as secondary with the sub-index 2.
\[\frac{dJ_{\rm ME}}{dt}=\alpha(1-\beta)\sqrt{GMr_{\rm orb}}\left(\frac{M_{1}}{M }\right)^{2}\dot{M}_{1}, \tag{1}\]
where \(G\) is the gravitational constant and \(M=M_{1}+M_{2}\) is the total mass of the system, with \(M_{1}\) and \(M_{2}\) the masses for the donor star and the BH, respectively.
Angular momentum can also be lost from the system by gravitational radiation and it is calculated according to the standard formula (Landau & Lifshitz 1971):
\[\frac{dln(J_{\rm RG})}{dt}=-\frac{32G^{3}\mu}{5c^{5}}\frac{M^{2}}{r_{\rm orb} ^{4}}, \tag{2}\]
where \(c\) is the vacuum speed of light and \(\mu=\frac{M_{1}M_{2}}{M}\).
The code also considers angular momentum loss due to magnetic braking, using the prescription of Rappaport et al. (1983), based on the magnetic-braking law of Verbunt & Zwaan (1981),
\[\frac{dJ_{\rm MB}}{dt}=-3.8\times 10^{-30}M_{1}R_{1}^{4}\omega^{3}\,{\rm dyn \,cm}, \tag{3}\]
where \(\omega\) is the angular rotation frequency of the donor star, assumed to be synchronized with the orbit, and \(R_{1}\) is the donor's radius. The code includes full magnetic braking when the star has a sizable convective envelope embracing a mass fraction \(\geq 0.02\).
Replacing Equations (1) - (3) in the expression for the evolution of the angular momentum and considering \(\dot{M}_{2}=-\beta\dot{M}_{1}\) from the definition of \(\beta\), we obtain a differential equation for the orbital separation, which has no analytical solution.
### Eddington limit and black hole spin parameter
This is the first work in which we employ our code to calculate the evolution of a binary system with a BH; in previous papers, the companion was a neutron star or a normal star. Thus, we had to change the accretion efficiency of the compact object. Furthermore, we are interested in calculating the evolution of the spin angular momentum of the BH as it receives mass and angular momentum from its companion. For that purpose, we followed the prescriptions given in Podsiadlowski et al. (2003) (henceforth, PRH03):
The luminosity released due to accretion onto the BH is:
\[L_{2}=\eta\dot{M}_{2}c^{2}, \tag{4}\]
where \(\dot{M}_{2}\) is the BH accretion rate, \(c\) is the vacuum speed of light, and \(\eta\) is the efficiency with which the BH radiates, determined by the last stable particle orbit. This parameter can be expressed as:
\[\eta=1-\sqrt{1-\left(\frac{M_{2}}{3M_{\rm BH}^{0}}\right)}, \tag{5}\]
where the quantities \(M_{\rm BH}^{0}\) and \(M_{2}\) are the initial and present mass of the BH, respectively.
Equaling the BH luminosity \(L_{2}\) with Eddington's luminosity and assuming spherical accretion, an expression for the maximum accretion rate onto de BH can be obtained (see PRH03) as:
\[\dot{M}_{\rm Edd}\simeq 2.6\times 10^{-7}{\rm M_{\odot}yr^{-1}}\left(\frac{M_{ 2}}{10{\rm M_{\odot}}}\right)\left(\frac{\eta}{0.1}\right)^{-1}\left(\frac{1+X }{1.7}\right)^{-1}, \tag{6}\]
where \(X\) is the hydrogen mass fraction. The BH accretion rate is limited by this value through its evolution.
On the other hand, the accretion phenomena not only affects the mass of the BH. As it accretes matter, and since that matter carries angular momentum with it, the spin parameter of the BH defined as \(a^{*}\equiv cJ_{2}/GM_{2}^{2}\) also increases, according to:
\[a^{*}=\left(\frac{2}{3}\right)^{\frac{1}{2}}\frac{M_{\rm BH}^{0}}{M_{2}}\left\{ 4-\left[18\left(\frac{M_{\rm BH}^{0}}{M_{2}}\right)^{2}-2\right]^{\frac{1}{2} }\right\}, \tag{7}\]
It is important to remark that these expressions are adequate for an initially non-rotating BH (see PRZ03) and are valid when \(M_{2}<\sqrt{6}M_{\rm BH}^{0}\), which is the case for all our calculations (for a detailed treatment see Bardeen 1970; King & Kolb 1999).
## 3 Models and results
Our primary objective in this work is to obtain possible progenitors for the binary system V404 Cyg. With this goal, we analyzed the results generated by different sets of initial parameters: masses of the donor star and the BH (\(M_{\rm d}^{0}\) and \(M_{\rm BH}^{0}\), respectively), the orbital period \(P_{\rm orb}^{0}\) and the fraction \(\beta\) of the mass lost by the donor that is accreted by the BH (as defined in Section 2.1). We fixed the free parameter that describes the specific angular momentum of matter lost as \(\alpha=1\).
An important consideration that stands out is that this code calculates the evolution of the donor star with an already existing BH. That is to say, we are not discussing how the compact object formed and we are also avoiding the common envelope phase. The code assumes that the orbit of the system components is circularized, so the circular restricted three-body problem can be applied. This is a very reasonable assumption for V404 Cyg, where we can get the orbital eccentricity using the definition of the mass function and the observational estimations for the orbital period, the K semi-amplitude, and the mass function given by Casares et al. (1992) obtaining a value of \(e\sim 0.024\).
### Models with solar abundances
We computed 72 evolutionary sequences with solar abundances exploring the combinations of initial donor masses of 1.5 and 2.0 M\({}_{\odot}\), initial BH masses of 7, 8, and 9 M\({}_{\odot}\), initial orbital period of 0.75, 1.00, and 1.25 days and values of 0.1, 0.3, 0.7 and 1.0 for the \(\beta\) fraction. The identification of each model was built using the first character associated with the initial masses of the model (see Table 1), followed by labels that are related to the initial orbital period and the \(\beta\) parameter. For example, the model C_075_07 was calculated with initial masses of 1.5 M\({}_{\odot}\) and 8 M\({}_{\odot}\) for the donor and the BH, respectively, as well as an initial orbital period of 0.75 d and \(\beta=0.7\).
We introduce the function \(\epsilon^{2}\) that helps us to determine how close the parameters given by our models are from the ones observationally acquired. This quantity is defined as:
\[\epsilon^{2}=\sum_{i}\epsilon_{i}^{2}\,\ \ \ \ \ \epsilon_{i}=\frac{E_{i}-E_{i}^{ \rm obs}}{E_{i}^{\rm obs}} \tag{8}\]
where \(E_{i}\) and \(E_{i}^{\rm obs}\) are the model and observational data for the parameter \(i\), and \(i=1,...,5\) correspond to the BH mass (\(M_{\rm BH}\)), the donor's mass (\(M_{\rm d}\)), the orbital period (\(P_{\rm orb}\)), effective temperature (\(T_{\rm eff}\)) and luminosity (\(L_{\rm d}\)) of the donor star, respectively. For example, \(\epsilon_{1}=(E_{1}-E_{1}^{\rm obs})/E_{1}^{\rm obs}\) means \(\epsilon_{M_{\rm BH}}=(M_{2}-M_{\rm BH})/M_{\rm BH}\). The time dependence for \(\epsilon_{i}^{2}\) will be given by the change of the quantities along the evolutionary sequences. All values used for the observational measures can be found in Table 2. For \(\epsilon^{2}=0\), we have a case where all the \(i\) parameters calculated from the models are equal to the ones obtained observationally. This makes it easier to see that this quantity helps us to determine when and how the models represent the system's observational data simultaneously.
Computing this function along the calculated sequences allows us to obtain a minimum value for each of them. This minimum value corresponds to the time when the quantities modeled are closest to the ones observationally estimated. We consider that the model represents the characteristics of the observed system well enough when the minimum value of the \(\epsilon^{2}\) function is lower than 0.0485, which is the value of the sum obtained when \(E_{i}=E_{i}^{\rm obs}+\sigma_{i}^{\rm obs}\) for each \(i\), where \(\sigma_{i}^{\rm obs}\) is the observational error for each parameter that can be found on Table 2. This restriction guarantees that the modeled parameters are not far from their
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Group** & \(\mathbf{M_{\rm d}^{0}[M_{\odot}]}\) & \(\mathbf{M_{\rm BH}^{0}[M_{\odot}]}\) \\ \hline A & 1.5 & 7 \\ B & 2.0 & 7 \\ C & 1.5 & 8 \\ D & 2.0 & 8 \\ E & 1.5 & 9 \\ F & 2.0 & 9 \\ \hline \end{tabular}
\end{table}
Table 1: Groups of models divided by the initial masses considered for the donor (1.5 and 2.0 M\({}_{\odot}\)) and for the BH (7, 8, and 9 M\({}_{\odot}\)).
observational uncertainties. Only two of our models calculated with solar abundances satisfy this condition, namely: E_100_01 with \(\epsilon_{\rm min}^{2}=0.0449\) and E_100_03 with \(\epsilon_{\rm min}^{2}=0.0482\), as seen on Figure 1. In Sections 3.1.1 and 3.1.2, we present our results for these models.
#### 3.1.1 Mass transfer
Among other studies, ZZ18 have studied the non-conservative mass transfer episode for this system. In their work, they estimated the accretion rate over intervals between outbursts and got a likely upper limit for it, with a value of \(\langle\dot{M}_{\rm BH}\rangle=4\times 10^{-10}\)\({\rm M_{\odot}~{}yr^{-1}}\). They also stated that their models with \(\beta\lesssim 0.33\) were the ones that are in agreement with this limit. For the donor mass loss rate, the existing estimation is of \(\dot{M}_{4}=1.1\times 10^{-9}\)\({\rm M_{\odot}~{}yr^{-1}}\), value taken from Webbink et al. (1983) using the equation 25a with the V404 Cyg system's parameters.
\begin{table}
\begin{tabular}{l c c l} \hline \hline
**Parameter** & **Value** & **Error** & **References** \\ \hline \(M_{\rm BH}\) & 9 \({\rm M_{\odot}}\) & \({}^{+0.2}_{-0.6}\) & Casares \& Charles (1994); Khargharia et al. (2010) \\ \hline \(M_{\rm d}\) & 0.54 & \(\pm 0.05\) & Casares \& Charles (1994); Khargharia et al. (2010) \\ \hline \(P_{\rm orb}\) & 6.47 d & \(\pm 0.001\) & Charles et al. (1989) \\ \hline \(T_{\rm eff}\) & 4274 K & \({}^{+116}_{-113}\) & Cox (2000); Khargharia et al. (2010) \\ \hline \(L_{\rm d}\) & 8.7 \({\rm L_{\odot}}\) & \({}^{+1.7}_{-1.4}\) & Cox (2000); Khargharia et al. (2010); ZZ18 \\ \hline \end{tabular}
\end{table}
Table 2: Observational data for V404 Cyg system. Each quantity comes accompanied with the corresponding error (Column 3) and the reference where it was taken from (Column 4).
Figure 1: Quantity \(\epsilon^{2}\) as a function of time for our best models with solar composition: E_100_01 and E_100_03. With a dashed line and a grey area is indicated the value of \(\epsilon^{2}=0.0485\) and the acceptance region we considered for our models. With a black solid line, the value \(\epsilon^{2}=0\) represents the situation where all the parameters modeled are equal to the ones observed simultaneously.
Figure 3: BH accretion rate for the models with solar abundances of the E group with \({\rm P_{\rm orb}^{0}=1~{}d}\). The qualitative form of the functions is related to the mass loss rate of the donor star by the relation \(\dot{M}_{2}=-\beta\dot{M}_{1}\). This is, for a higher value for \(\beta\) larger would be the accretion rate onto the compact object. The shaded area represents the zone below the likely upper limit for the mass accretion rate \(\left(\left\langle\dot{M}_{2}\right\rangle=4\times 10^{-10}{\rm M_{\odot}yr^{-1}}\right)\) given by ZZ18. Models with \(\beta\leq 0.3\) agree with this upper limit and the results reached by the mentioned authors.
Figure 2: Donor star mass loss rate for each of the best models with solar composition: E_100_01 and E_100_03. As the only variation between these models is on the \(\beta\) value, and this parameter does not affect strongly the mass loss episode, their plots are mostly overlapping. The black dashed horizontal lines represent the estimation for the mass loss rate for V404Cyg of \(\dot{M}_{\rm d}=1.1\times 10^{-9}\)\({\rm M_{\odot}~{}yr^{-1}}\) is denoted with a dashed horizontal line. As for the dashed vertical lines, they represent the times of the minimum value of the epsilon squared function.
Our results for the mass transfer episode are resumed in Figure 2, where the donor mass loss rate (\(\dot{M}_{1}\)) for the best models is shown. In Figure 3, we show the accretion rate on the compact object (\(\dot{M}_{2}\)) for the same models.
For the first case (shown in Figure 2), the existing estimation has been featured with a horizontal dashed line. We have highlighted the age predicted by our models when the epsilon squared function reaches its minimum value (see Figure 1) with a vertical dashed line. At this time, the mass loss rate of the donor star for models E_100_01 and E_100_03 nicely agrees with the above-quoted estimation.
As for the mass accretion rate onto the BH (Figure 3), we added to our two best models the ones from the same group and initial orbital period so the effect of the variation of \(\beta\) becomes evident. As is typical in the computation of close binary models, this parameter relates the mass loss by the donor with the one that is accreted by the compact object as \(\dot{M}_{2}=-\beta\dot{M}_{1}\). This relation makes the mass accretion rate function similar in form to the mass loss rate function of the previous figure, but a change in the \(\beta\) parameter does not provoke a strong variation to \(\dot{M}_{1}\) as to \(\dot{M}_{2}\). The shaded area on this figure represents the values below the likely upper limit suggested by ZZ18. We found that both of our models with \(\beta\leq 0.3\) remains under this limit, confirming previous results.
#### 3.1.2 Orbital period and BH spin parameter
A non-conservative mass transfer episode in a binary system essentially consists of a fraction of the mass transferred by the donor being accreted by the companion, while the rest is lost from the system. The mass accreted by the BH accelerates its rotation. This phenomenon can be studied by analyzing the evolution of the BH spin parameter \(a^{*}=cJ_{BH}/GM_{BH}^{2}\), with \(J_{BH}\) as the rotational angular momentum of the BH.
The only available data for the spin parameter of V404 Cyg BHs is given in Walton et al. (2017). This study used a spectral analysis of \(NuSTAR\) X-ray observations of V404 Cyg for its 2015 outburst and proposed different models that fit the observations using the reflection method. Although these authors obtained multiple solutions for the BH spin parameter, they stated that the most robust one to be \(a^{*}>0.92\) with a 99% of statistical uncertainty.
For the computation of this parameter, assuming that the BH is initially not rotating, we employed Equation 7. We obtained the evolution for the BH spin parameter over time, as shown in Figure 4. In this graph (and Table 3), we show our results for all the models from group E with an initial orbital period of 1 d, so it becomes evident that the larger the \(\beta\) value, the faster the BH's final rotation.
The solutions for the BH spin parameter at the presumed ages for the system (see Table 3) correspond to a slow rotation regime when the existing estimations (even the ones for slow rotators obtained by Walton et al. 2017) are much higher.
As matter leaves the system it carries away angular momentum, as described by Equation 1. Other effects that also modify the orbital period are gravitational radiation (Equation 2) and magnetic braking (Equation 3). In this work, we consider all three effects together, ultimately finding that the orbital period mostly increases with time, reaching values of \(\mathrm{P_{orb}}=46\) - 48 d at the end of our calculations (age of the donor star of 14 Gyr). The evolution of this quantity for each of our best models is shown on the top panel of Figure 5, where the initial orbital period is \(P_{\mathrm{orb}}^{0}=1.00\) d for both. Our results are in good agreement with the well-determined value of \(P_{\mathrm{orb}}=6.47\) d, when \(\epsilon^{2}\) has its minimum value. As for the bottom panel, we show the time derivative of the orbital period of our models, calculated for each time as an approximation of an incremental quotient. The results for this quantity and the characteristic timescale (\(P_{\mathrm{orb}}/P_{\mathrm{orb}}\)) at the expected ages of the system can be found in Table 4. We found an increased timescale of \(\sim 2\times 10^{8}\) yr, which is in good agreement with the one predicted by ZZ18.
Although the time derivative of the orbital period would be very useful to test the evolutionary scenario, this quantity is not yet known and there are no prospects for deriving it any time soon. For measuring such quantity with an adequate degree of certainty, we would need a time basis that is far longer than what is presently available. King & Lasota (2021) stated that this time basis should be long enough for the radii of the donor
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Model** & **t\({}_{\mathrm{min}}\) [Gyr]** & **P\({}_{\mathrm{orb}}\)[d]** & **P\({}_{\mathrm{orb}}\)** & **P\({}_{\mathrm{orb}}\)/P\({}_{\mathrm{orb}}\) [yr]** \\ \hline E\_100\_01 & 3.596 & 5.65 & \(7.49\times 10^{-11}\) & \(2.1\times 10^{8}\) \\ E\_100\_03 & 3.601 & 5.78 & \(7.63\times 10^{-11}\) & \(2.0\times 10^{8}\) \\ \hline \end{tabular}
\end{table}
Table 4: Orbital period with its time derivative value and the characteristic increase time-scale evaluated on the estimated age for the system for models with Z=0.014.
Figure 4: BH spin parameter for the V404 Cyg compact object for each of the best models, according to Equation (7). As the BH accretes matter it spins up, provoking the increase of \(a^{*}=cJ_{2}/GM_{2}^{*}\). Thus, the lower the \(\beta\), the lower the BH spin. The vertical dashed lines are the predicted ages for V404 Cyg. The values of the BH spin for these times are given in Table 3.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Model** & **t\({}_{\mathrm{min}}\) [Gyr]** & **BH spin parameter \(a^{*}\)** \\ \hline E\_100\_01 & 3.596 & 0.04 \\ E\_100\_03 & 3.601 & 0.11 \\ E\_100\_07 & 3.616 & 0.23 \\ E\_100\_10 & 3.628 & 0.32 \\ \hline \end{tabular}
\end{table}
Table 3: Value for the BH spin parameter \(a^{*}\) at the time of the minimum on \(\epsilon^{2}\) quantity. It becomes evident that the more mass the BH accretes, the more it finally spins up.
star and its Roche lobe to vary at least on a density scale height, \(H_{\rho}\) (\(H_{\rho}\equiv-dr/d\ln\rho\)), which corresponds to thousands of years. Any measurement in the near future will surely reflect the occurrence of short-timescale phenomena, neglected in our calculations. In this sense, the values of \(\dot{P}_{\rm orb}\) we have presented above are related to the ingredients considered for a modeling of the evolution of the whole system (magnetic braking, gravitational radiation, and mass loss from the system).
### Models with higher metallicity
As described above and as done in ZZ18, we initially considered solar abundances. Nevertheless, Gonzalez Hernandez et al. (2011) presented a chemical abundance analysis for the donor star, and obtained \(\rm[Fe/H]=0.23\pm 0.19\). This value is well addressed with a metallicity of \(\rm Z=0.028\), two times the value corresponding to the Sun. Therefore, we calculated 30 additional models with this new value of Z, fixing the hydrogen abundance on \(\rm X=0.71\). For this instance, we fixed the value of the initial donor's mass at 1.5 M\({}_{\odot}\) and explored only the values of \(\beta=0.3\) and 0.1, based on the results obtained from the solar metallicity analysis. For the values of the initial BH mass, we still considered \(\rm M^{0}_{BH}\) =8, 9, and 10 M\({}_{\odot}\), and we explored the same interval of initial orbital periods, \(\rm P^{0}_{\rm orb}\) =0.75, 1.00, and 1.25 d, adding 1.50 and 1.75 d values to the analysis. These models are identified similarly to those corresponding to solar metallicity, but adding "_Z028_" at the end of the name.
The best models we obtained are: E_150_01_Z028 and E_150_03_Z028, with minimum \(\epsilon^{2}\) values of 0.0119 and 0.0132, respectively (see Figure 6). These models not only reach lower values than our acceptance one, but also each of their parameters gets within their respective observational uncertainty listed on Table 2 at ages of 5.170 and 5.176 Gyr 2.
Footnote 2: Note: these values slightly differ (\(<0.02\) Gyr) from the time of the minimum value of the epsilon squared function
Figure 5: Orbital period as a function of time for our best models with solar metallicity (solid lines), observational estimation for the present orbital period \(P_{\rm orb}=6.47\) d (dashed horizontal line) and age of the system for the system predicted for each model (dashed vertical lines) shown at the top. The bottom panel shows the time derivative for the orbital period for the same models and ages.
Figure 6: Quantity \(\epsilon^{2}\) as a function of time for our best models with the change on metallicity: E_150_01_Z028 and E_150_03_Z028. The elements represented are the same as in Figure 1.
#### 3.2.1 Mass transfer
A change in the metallicity of the donor star implies a change in the donor's outer opacities and, thus, in its structure as well. As for the mass transfer episode, the results can be seen in Figures 7 and 8. We estimate the present mass loss rate as \(1.24\times 10^{-9}\) M\({}_{\odot}\)/yr, which is also in good accordance with the estimation given by ZZ18.
Considering the accretion rate, the model computed with \(\beta=0.3\) seems to exceed the upper limit of \(4.0\times 10^{-10}\) M\({}_{\odot}\)/yr at some parts of its evolution, but is below this limit near the present age. The other model, computed with \(\beta=0.1\), is still fully under the limit.
#### 3.2.2 Orbital period and BH spin parameter
As the BH accretion episode has not changed quantitatively, a significant amount from the models with lower metallicity, it is expected that the results for the BH spin parameter do not differ too much from the ones presented above. This parameter evolution for the two models that have been taken into account is shown in Figure 9. Once again, our models do not reach the observational estimation given by Walton et al. (2017).
The orbital period evolution considering these models can be resumed in Figure 10 with some results given in Table 5. Models with this metallicity and initial orbital period reach the observed one for V404 Cyg at the same moment when the other quantities are still within their observational uncertainties while predicting a present value of \(\dot{P}_{orb}\sim 9.2\times 10^{-11}\). This value deduces a characteristic increase time consistent with the one obtained by ZZ18. Our models with this metallicity deduced final orbital periods between 68 - 70 d.
Figure 8: BH accretion rate for the two best models with double solar metallicity. The elements represented are the same as in Figure 3.
Figure 7: Donor star mass loss rate for each of the best models with the metallicity change: E\(\_\)150_01_Z028 and E\(\_\)150_03_Z028. The elements represented are the same as in Figure 2.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline
**Model** & **t\({}_{min}\) [Gyr]** & **P\({}_{orb}\)[d]** & **P\({}_{orb}\)** & **P\({}_{orb}\)/P\({}_{orb}\) [yr]** \\ \hline E\(\_\)150\_01\_Z028 & 5.170 & 6.47 & \(9.19\times 10^{-11}\) & \(1.9\times 10^{8}\) \\ E\(\_\)150\_03\_Z028 & 5.176 & 6.47 & \(9.24\times 10^{-11}\) & \(1.9\times 10^{8}\) \\ \hline \end{tabular}
\end{table}
Table 5: Orbital period with its time derivative value and the characteristic increase time-scale evaluated on the estimated age for the system for models with Z=0.028.
Figure 9: BH spin parameter evolution for the V404 Cyg compact object according to Equation (7). The values of the BH spin for the predicted ages for V404 Cyg (vertical dashed lines) are very similar to the two best models computed with solar composition in Table 3.
### Donor star evolution and proposal of our bests progenitors
This work aims to model the characteristics of V404 Cyg to get a predecessor system and also to analyze its present and future evolution. With this in mind, we calculated our models up to an age of \(t=14\) Gyr. This allowed us to make some estimations for the complete evolution of the system.
Figure 11 shows the different evolutionary tracks for the models in the Hertzsprung-Russell diagram for some of our best progenitors. Here, we can offer some remarks. As for the tracks that are calculated with solar abundances, the mass-transfer episode occurs in three parts. The first one begins on the main sequence for both models (mass transfer episode in case A; Kippenhahn & Weigert 1967) and ends abruptly due to the contraction of the donor when the central hydrogen is exhausted. After a very short time, the second mass transfer episode begins. This behavior cannot be appreciated in the evolutionary tracks because of the small portion of the diagram covered while the donor star is detached from its Roche lobe. Once the donor evolves blueward and gets dimmer, a hydrogen thermonuclear flash occurs making the star raise its luminosity and expand. Then, it starts the third mass transfer episode. The whole thermonuclear flash event is very fast, making the third mass transfer episode look like a Dirac's delta distribution.
Because the donor star is not massive enough to start the \(3\alpha\) reactions, no helium burning occurs. Our prediction for the final fate of the V404 Cyg donor star is to become a low-mass helium white dwarf with a mass of 0.28 M\({}_{\odot}\) and a radius of \(\sim 0.02\) R\({}_{\odot}\) with a hydrogen-rich surface.
As for the present state of the donor star, observational data can be seen in Figure 11. These data were placed on the HR diagram using the estimations of \(L_{\rm d}=8.7\) L\({}_{\odot}\) and \(T_{\rm eff}\sim 4200\) K and its respective error bars (see Table 2). Also, our theoretical models predict that the donor star is currently on the red giant branch and getting near the end of the first mass transfer episode (remaining \(\sim 0.2\) Gyr).
The evolutionary tracks corresponding to a metallicity of \(Z=0.028\), shown in Figure 11 (bottom panel), shares lots of characteristics in common with our previous analysis for the models with solar abundances. However, we do note that the mass transfer episode occurs in two parts, where the first one begins after core hydrogen exhaustion (case B offmass transfer episode; Kippenhahn & Weigert 1967). The remnant compact object is still a helium white dwarf of mass M \(\sim 0.29\) M\({}_{\odot}\). The estimated age of the system predicts that the donor star is losing mass on the main mass transfer episode and there are \(\sim 0.3\) Gyr remaining for the end of it. The entirety of the mass transfer episodes takes place within \(\sim 1\) Gyr.
In Figure 12, we present the contribution for every parameter considered in the construction of the epsilon squared function for one of the best models with each metallicity. For the case of solar abundances (left panel), we can see that near the minimum value, the parameters that dominate the total epsilon squared function are the system orbital period and the donor's luminosity with a great contribution of the donor's mass, while with Z=0.028 (right panel) the best models get a better estimation for this last parameter but it's behavior is still dominated by the luminosity and orbital period. The evolution of binaries is very sensitive to the variation of these parameters since the orbital evolution of the system takes a primary role in the RLOP episodes and the donor's initial mass determines the initial position on the ZAMS and the way it evolves. So, even with a thin grid on these parameters, we would have to get very specific for these initial parameters to find better models. As for the donor's luminosity, this is the observed parameter that has the largest rel
Figure 10: Orbital period and its time derivative as a function of time for models with a metallicity of Z=0.028. The elements represented are the same as those shown in Figure 5, but adapted to these models.
This is because the system is located in the bulge of our Galaxy, a heavily obscured region, so we do not place much weight on this quantity.
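For orientation, merit functions of this kind typically take the form of a sum of squared, normalized deviations over the fitted observables. Equation 8 itself is defined earlier in the paper, so the following generic form is our assumption, shown only to make the per-parameter contributions of Figure 12 concrete:

\[\epsilon^{2}\;=\;\sum_{k}\left(\frac{O_{k}^{\rm model}-O_{k}^{\rm obs}}{O_{k}^{\rm obs}}\right)^{2},\]

where the \(O_{k}\) run over the compared quantities (orbital period, donor mass, luminosity, effective temperature, etc.), so each summand is one of the per-parameter contributions discussed above.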
## 4 Conclusions
Based on the calculations and analysis performed in this work, we are in a position to propose a plausible progenitor for V404 Cyg. In the first step of the calculations we considered solar abundances, and in the second step a more metallic donor star. From the results given by our models and the analysis performed in the previous section, we found that two models for each metallicity considered could adequately represent the current state of V404 Cyg.
Considering the epsilon squared (\(\epsilon^{2}\)) function (see Equation 8), we selected the two best models for each metallicity: E_100_01, reaching \(\epsilon^{2}_{\rm min}=0.0449\), and E_100_03, with \(\epsilon^{2}_{\rm min}=0.0482\) (solar abundances, Z=0.014); and E_150_01_Z028, with \(\epsilon^{2}_{\rm min}=0.0119\), and E_150_03_Z028, with \(\epsilon^{2}_{\rm min}=0.0132\) (Z=0.028). The last two not only reached minimum values lower than the accepted threshold, but each quantity is also simultaneously within its observational uncertainty.
Then, we consider that the best progenitor for V404 Cyg is a system formed by a BH of 9 \(M_{\odot}\) together with a normal star of 1.5 \(M_{\odot}\), with a metal content of \(Z=0.028\). The orbital period of this progenitor was 1.5 d, and the BH accretes between 10 and 30% of the mass lost by its companion.
This model predicts that the donor star of this system may undergo a hydrogen thermonuclear flash event leading to a short mass loss episode. The remnant of the donor star's evolution is predicted to be a low-mass helium white dwarf with a hydrogen-rich envelope, of mass M = 0.29 M\({}_{\odot}\) and radius R = 0.02 R\({}_{\odot}\).
Although most of the main characteristics of the V404 Cyg system are accounted for by our models, one specifically is not: the BH spin parameter, for which we obtained values far below the only observationally available one, presented in Walton et al. (2017). It seems natural to attribute this discrepancy to the assumption that the BH is initially non-rotating. Thus, our results may be interpreted as evidence that, in the context of close binary systems, stellar-mass BHs may be born with appreciable angular momentum.
###### Acknowledgements.
The authors thank the anonymous referee for comments and suggestions that helped us improve this work. They also thank Florencia Vigroro and Federico Garcia, whose comments were very useful.
Figure 11: Evolutionary tracks in an HR diagram for two of our models. _Top:_ Model E_100_01 for the case of solar metallicity. _Bottom:_ Model E_150_01_Z028 for the case of higher metallicity. The tracks of the best models with higher \(\beta\) value practically overlap with the models shown. In the left panels, the whole track is shown, with a shaded area marking where the mass loss episode occurs. The observationally estimated position of the donor star is indicated, together with the position corresponding to the system age predicted by our models (circles on the evolutionary track). As the star nears the white dwarf stage it suffers a thermonuclear flash, making it expand and lose mass again for a very short period of time. The ends of these tracks are very similar: the donor becomes a low-mass white dwarf with a hydrogen-rich surface. A dashed line denoting \(R=0.02\) R\({}_{\odot}\) is included; it approximately corresponds to the asymptotic radius of the white dwarf. The right panels zoom in on the vicinity of the observational estimate, where the shaded area accounts for the observational errors in effective temperature and luminosity. |
2308.12599 | Exploiting Time-Frequency Conformers for Music Audio Enhancement | With the proliferation of video platforms on the internet, recording musical
performances by mobile devices has become commonplace. However, these
recordings often suffer from degradation such as noise and reverberation, which
negatively impact the listening experience. Consequently, the necessity for
music audio enhancement (referred to as music enhancement from this point
onward), involving the transformation of degraded audio recordings into
pristine high-quality music, has surged to augment the auditory experience. To
address this issue, we propose a music enhancement system based on the
Conformer architecture that has demonstrated outstanding performance in speech
enhancement tasks. Our approach explores the attention mechanisms of the
Conformer and examines their performance to discover the best approach for the
music enhancement task. Our experimental results show that our proposed model
achieves state-of-the-art performance on single-stem music enhancement.
Furthermore, our system can perform general music enhancement with multi-track
mixtures, which has not been examined in previous work. | Yunkee Chae, Junghyun Koo, Sungho Lee, Kyogu Lee | 2023-08-24T06:56:54Z | http://arxiv.org/abs/2308.12599v1 | # Exploiting Time-Frequency Conformers for Music Audio Enhancement
###### Abstract.
With the proliferation of video platforms on the internet, recording musical performances by mobile devices has become commonplace. However, these recordings often suffer from degradation such as noise and reverberation, which negatively impact the listening experience. Consequently, the necessity for music audio enhancement (referred to as music enhancement from this point onward), involving the transformation of degraded audio recordings into pristine high-quality music, has surged to augment the auditory experience. To address this issue, we propose a music enhancement system based on the Conformer architecture that has demonstrated outstanding performance in speech enhancement tasks. Our approach explores the attention mechanisms of the Conformer and examines their performance to discover the best approach for the music enhancement task. Our experimental results show that our proposed model achieves state-of-the-art performance on single-stem music enhancement. Furthermore, our system can perform general music enhancement with multi-track mixtures, which has not been examined in previous work. Audio samples enhanced with our system are available at: [https://tinyurl.com/smps19999](https://tinyurl.com/smps19999)
music enhancement, self-attention, TF-Conformer
## 1. Introduction
The rapid expansion of social media platforms on the internet has led to unprecedented accessibility to a large number of videos and audio recordings captured by unprofessional devices, such as smartphones. Despite the convenience of these recordings, their quality often suffers due to various factors, such as background noise, reverberation, and frequency responses of the microphone devices. In particular, live performance recordings on platforms like YouTube typically lack the audio quality expected from professionally recorded studio tracks. As a result, the auditory experience of these recordings is significantly diminished, creating a growing demand for music enhancement techniques that can restore the original quality of the audio. Therefore, post-processing tools that can enhance distorted musical recordings to resemble studio-quality audio are essential for an improved personal listening experience.
Music enhancement, which involves transforming distorted audio recordings into pristine high-quality music, plays a crucial role in augmenting the auditory experience for listeners. Recent studies have sought to address the challenge of enhancing audio quality in such recordings, with many leveraging deep learning techniques to achieve impressive results. Kandpal _et al._[(14)] made significant contributions to the field by enhancing music recordings using the Pix2Pix [(13)] and Diffwave [(19)] approaches in their study, a pioneering work in music enhancement. While their work has been influential, it focused primarily on single-stem music datasets, which led to certain limitations. For instance, their models were not designed to handle general multi-track music, as they were trained on pre-defined classes of instruments using the Medley-solos-DB [(21)] dataset, which limited their ability to generate waveforms of other, more general instruments.
Schaffer _et al._[(26)] investigated music enhancement to improve the performance of music source separation models using generative modeling. Their proposed post-processor effectively enhanced the estimates of bass and drums, as demonstrated through subjective evaluations. However, their method was primarily focused on
Figure 1. Framework overview of music enhancement task.
enhancing the outputs of the source separation model, which are expected to be a single stem.
Despite the advancements made by these works, there remains room for further exploration in multi-track music enhancement. In particular, the development of methods that can effectively handle a diverse range of instruments and musical genres would be a valuable addition to the field. Moreover, creating systems capable of enhancing multi-track mixtures could unlock new possibilities for music enhancement tasks, enabling users to experience high-quality audio across a broader spectrum of musical compositions.
Accordingly, we propose a new system for music enhancement tasks based on the widely-used Conformer architecture (Cheng et al., 2017). We introduce new modules called TF-Conformers, which adopt attention mechanisms for time-frequency representations of musical signals. Our contributions are as follows:
* The proposed system achieves state-of-the-art performance in single-stem music enhancement tasks.
* We extend the validation of our system by investigating its applicability to multi-track mixture music enhancement, an area that has not yet been extensively explored in the literature.
* We explore new TF-Conformer modules, incorporating attention mechanisms, and evaluate their effectiveness through several ablation studies.
The remainder of this paper is organized as follows: Section 2 provides an overview of recent work in the research field related to music enhancement, focusing on deep learning-based approaches. Section 3 details our proposed Conformer-based music enhancement system, including the architecture and attention mechanisms employed. Section 4 presents the dataset used in our experiments, implementation details, and evaluation metrics. Section 5 reports the results of both objective and subjective evaluation from our experiments. Finally, Section 6 offers concluding remarks, summarizes the contributions of this paper, and discusses future work in this area.
## 2. Related Works
Kandpal _et al._(Kandpal et al., 2017) proposed the _Mel2Mel + Diffwave_ framework for music enhancement tasks, wherein Mel2Mel and Diffwave models are based on (Kandpal et al., 2017) and (Kandpal et al., 2017), respectively. Mel2Mel is employed to enhance the mel-spectrogram of distorted music while Diffwave serves as a vocoder, converting the mel-spectrogram to waveform. They trained _Mel2Mel + Diffwave_ models independently or jointly, with each approach offering distinct benefits. Independent training promotes robustness to artifacts in the enhanced mel-spectrogram, as the Diffwave vocoder is trained exclusively on clean mel-spectrograms. On the other hand, joint training yields a higher FAD score (Kandpal et al., 2017), which the authors argue is closely related to human perception.
In the domain of music source separation, Schaffer _et al._(Schaffer et al., 2017) presented the Make it Sound Good (MSG) post-processor to enhance the output of music source separation systems. It is tailored to counter the perceptual deficiencies of music source separation systems, such as the emergence of superfluous noise or the elimination of harmonics. The authors conducted their work utilizing generative modeling, with the generator derived from an architecture akin to Demucs v2 (Dosov et al., 2016) and the discriminator adopted from HiFi-GAN (Kandpal et al., 2017), optimized with the losses of LSGAN (Leskovec et al., 2017) and a deep feature matching loss (Leskovec et al., 2017). The authors employed their post-processing model on state-of-the-art music source separators, including a separator not seen during training. Their findings demonstrated that MSG can significantly enhance the quality of the source estimates.
The field of audio enhancement has predominantly focused on speech enhancement (Leskovec et al., 2017; Kandpal et al., 2017). Especially, recent deep learning-based speech enhancement models have adopted self-attention mechanisms (Leskovec et al., 2017) due to their ability to capture long-range dependencies in sequential features. In particular, various methods apply self-attention to both the frequency-axis and time-axis in different configurations, which we refer to as _TF-self-attentions_. These methods have demonstrated the effectiveness of such attention mechanisms in enhancing audio quality (Leskovec et al., 2017; Kandpal et al., 2017). Meanwhile, Conformer-based models have achieved state-of-the-art performances in speech enhancement tasks (Kandpal et al., 2017). Specifically, CMGAN (Kandpal et al., 2017) and Uformer adopt such TF-attention mechanisms within Conformer-style blocks.

## 3. Proposed System

Our system builds on the generator of CMGAN. Its encoder consists of an initial convolutional layer, a dilated DenseNet, and a final convolutional layer that
halves the frequency dimension. Each convolutional layer is followed by instance normalization (Shi et al., 2017) and PReLU (Chen et al., 2017). The dilated DenseNet has dilation factors of 1, 2, 4, and 8 for each convolutional layer. Based on our observations, we found that incorporating a residual connection between the encoder's output and the decoder's input results in faster convergence. Therefore, we added this residual connection to the original CMGAN generator's encoder to enhance its performance.
The CMGAN generator consists of two decoders: the magnitude mask decoder and the complex residual decoder. The mask decoder estimates a mask that is multiplied element-wise with the input low-quality magnitude spectrogram. Meanwhile, the complex decoder computes real and imaginary residuals, which are added to the complex spectrogram formed from the masked magnitude and the original input phase, producing the final complex spectrogram output. Both decoders employ dilated DenseNets and a subpixel convolution layer (Wang et al., 2017) to upsample the frequency dimension. Subsequent convolutional layers adjust the number of output channels to 1 and 2, respectively, followed by instance normalization and an additional convolutional layer. The PReLU activation function (Chen et al., 2017) is applied exclusively to the mask decoder.
The final output \((\hat{X}_{r},\hat{X}_{i})\) of the CMGAN generator is calculated as follows:
\[\hat{X}_{r}=\hat{X}_{m}^{\prime}\cos{(\angle Y_{o})}+\hat{X}_{r}^{\prime}, \quad\hat{X}_{i}=\hat{X}_{m}^{\prime}\sin{(\angle Y_{o})}+\hat{X}_{i}^{\prime} \tag{2}\]
where \(\hat{X}_{m}^{\prime}\) is the input magnitude spectrogram masked by the output of the mask decoder and \(\hat{X}_{r}^{\prime}\), \(\hat{X}_{i}^{\prime}\) denote the outputs of the complex decoder.
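As a concrete illustration of Eq. (2), the recombination of the two decoder outputs can be written in a few lines; this is a sketch based only on the equations above (the function and variable names are ours, not from the paper's code):

```python
import torch

def combine_decoder_outputs(Y, mask, res_r, res_i):
    """Recombine mask- and complex-decoder outputs as in Eq. (2).

    Y:    complex compressed spectrogram of the noisy input, shape (B, T, F)
    mask: output of the magnitude mask decoder, real, same shape
    res_r, res_i: real/imaginary residuals from the complex decoder
    """
    X_m_masked = mask * Y.abs()        # masked magnitude \hat{X}'_m
    phase = torch.angle(Y)             # noisy input phase, angle(Y_o)
    X_r = X_m_masked * torch.cos(phase) + res_r
    X_i = X_m_masked * torch.sin(phase) + res_i
    return X_r, X_i
```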
### TF-Conformer
TF-self-attention mechanisms have been employed in numerous speech enhancement tasks (Shi et al., 2017; Wang et al., 2017; Wang et al., 2018). We explore various attention mechanisms in the context of Conformers to examine which method is optimal for music enhancement.
We introduce our proposed _TF-Conformers_ (TFC) in Fig. 3, which are based on the Conformers depicted in Fig. 2-(b). The _T-Conformer_ and _F-Conformer_ reshape the input to \(\mathbb{R}^{(BF)\times T\times C}\) and \(\mathbb{R}^{(BT)\times F\times C}\), respectively, and the final reshape block restores the output to its original shape, matching the input dimension. These Conformers are designed to model the sequential features along the time and frequency axes of spectrograms, respectively. The T-Conformer focuses on capturing the sequential features along the time axis, treating the frequency axis as a global feature for each time frame.
Figure 2. Overall enhancement process of the proposed system based on the CMGAN generator (Chen et al., 2017). The model takes a compressed spectrogram of noisy music, \(Y\in\mathbb{C}^{B\times T\times F}\), as input and outputs the real and imaginary parts of the compressed spectrogram \(\hat{X}\) for the enhanced music, denoted by \(\hat{X}_{r}\) and \(\hat{X}_{i}\). These outputs are then compared to the ground truth clean compressed spectrogram \(X\) using various loss functions. \(Y_{m}\), \(Y_{r}\), and \(Y_{i}\) represent the magnitude spectrogram, and the real and imaginary parts of \(Y\), respectively. The mask decoder estimates a mask for the noisy magnitude spectrogram, while the complex decoder estimates the residuals of the real and imaginary parts to derive the enhanced spectrogram.
This enables the T-Conformer to effectively model temporal dependencies and dynamics present in the spectrogram. Similarly, the F-Conformer is tailored to model sequential features along the frequency axis, considering the time axis as a global feature for each frequency frame. This approach allows the F-Conformer to capture spectral patterns and dependencies within the spectrogram. By separately modeling time and frequency features, the T- and F-Conformers can better comprehend and represent the complex structure of spectrograms, ultimately leading to enhanced performance in music enhancement tasks.
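The reshaping convention behind the T- and F-Conformers can be made explicit with a short sketch; the tensor layout \((B,T,F,C)\) and the helper names below are our reading of the text, not code from the paper:

```python
import torch

def to_time_sequence(x):
    # (B, T, F, C) -> (B*F, T, C): the T-Conformer attends along time,
    # treating each frequency bin of each batch item as its own sequence
    B, T, F, C = x.shape
    return x.permute(0, 2, 1, 3).reshape(B * F, T, C)

def to_freq_sequence(x):
    # (B, T, F, C) -> (B*T, F, C): the F-Conformer attends along frequency,
    # treating each time frame of each batch item as its own sequence
    B, T, F, C = x.shape
    return x.reshape(B * T, F, C)
```

The final reshape block of each module simply inverts the corresponding operation to restore the \((B,T,F,C)\) layout.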
It should be noted that the Cascade TF-Conformer (TFC-C) is similar to the two-stage Conformer block (Beng et al., 2017), except that it applies the F-Conformer before the T-Conformer. The Parallel TF-Conformer (TFC-P) processes input through two parallel branches, similar to the Adaptive Time-Frequency Attention (ATFA) module in (Wang et al., 2018). The Parallel-Cascade TF-Conformer (TFC-PC) takes inspiration from the Axial Self-Attention (ASA) module of (Wang et al., 2019). We have applied the methodology of ASA to the multi-head attention (MHA) module within the Conformer module. In this module, the MHA module of the T-Conformer receives the value input from the output of the F-Conformer, as depicted in Fig. 3-(c). Furthermore, we propose the Cascade-Parallel modules, comprising the Cascade-Parallel-value (TFC-CPv) module and the Cascade-Parallel-query (TFC-CPq) module. The TFC-CPv processes input sequentially through the F- and T-Conformers, where the MHA of the T-Conformer receives the value directly from the input of the TFC-CPv module. The TFC-CPq architecture is designed akin to the TFC-CPv; however, it acquires the query input for the MHA directly from the input of the TFC module, while key and value inputs are obtained from the output of the F-Conformer. This configuration can align more closely with the conceptual underpinnings of key, value, and query in the Transformer (Wang et al., 2019) model, in comparison to the TFC-CPv and TFC-PC cases. To the best of our knowledge, these methods have not been proposed in the context of TF-self-attention-based models. It is worth noting that all of the TF-Conformers have the same number of parameters.
### Loss Functions
We define the magnitude, real, and imaginary parts \(X_{m}(t,f)\), \(X_{r}(t,f)\), \(X_{i}(t,f)\) of the compressed spectrogram \(X\) using the clean music source spectrogram \(X_{o}\in\mathbb{C}^{T\times F}\), as in equation (1):
\[X=|X_{o}|^{c}e^{j\angle X_{o}}=X_{m}e^{j\angle X_{o}}=X_{r}+jX_{i}. \tag{3}\]
Meanwhile, we compute the estimates of the clean spectrogram and waveform as follows:
\[\hat{X}_{m}=\sqrt{\hat{X}_{r}^{2}+\hat{X}_{i}^{2}}, \tag{4}\]
\[\hat{x}=\mathrm{iSTFT}(\hat{X}_{m}^{(1-c)/c}(\hat{X}_{r}+j\hat{X}_{i})) \tag{5}\]
where \(\mathrm{iSTFT}(\cdot)\) denotes the inverse short-time Fourier transform. Subsequently, we employ a linear combination of magnitude loss,
Figure 3. Description of the proposed TF-Conformer modules. We evaluate the enhancement performance of each module. \(Q\),\(K\), and \(V\) denote the query, key, and value of the multi-head attention, respectively. T- and F-Conformers correspond to the structures depicted in Fig. 2-(b), with the distinction that in (c), (d), and (e), the multi-head self-attention is substituted with multi-head attention.
complex loss, and time loss, following the approach in (Beng et al., 2017):
\[\begin{split}&\mathcal{L}_{\text{Mag}}=\mathbb{E}_{X_{m},\hat{X}_{m} }[\|X_{m}-\hat{X}_{m}\|^{2}]\\ &\mathcal{L}_{\text{RI}}=\mathbb{E}_{X_{r},\hat{X}_{r}}[\|X_{r}- \hat{X}_{r}\|^{2}]+\mathbb{E}_{X_{i},\hat{X}_{i}}[\|X_{i}-\hat{X}_{i}\|^{2}]\\ &\mathcal{L}_{\text{Time}}=\mathbb{E}_{x,\hat{x}}[\|x-\hat{x}\|_ {1}]\end{split} \tag{6}\]
where \(x\), \(\hat{x}\) denote the clean music waveform and the estimated waveform, respectively. Finally, the total loss \(\mathcal{L}\) is defined by:
\[\mathcal{L}=\gamma_{1}\mathcal{L}_{\text{Mag}}+\gamma_{2}\mathcal{L}_{\text{ RI}}+\gamma_{3}\mathcal{L}_{\text{Time}}. \tag{7}\]
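Putting Eqs. (6) and (7) together, the training objective can be sketched as follows; the weights are the \(\gamma_{1},\gamma_{2},\gamma_{3}=0.15,0.85,0.1\) reported in Section 4.2, while the function name and the small stabilizing epsilon are our additions:

```python
import torch
import torch.nn.functional as F

def enhancement_loss(X_r, X_i, X_r_hat, X_i_hat, x, x_hat,
                     g1=0.15, g2=0.85, g3=0.1, eps=1e-12):
    # magnitude loss on |X| = sqrt(X_r^2 + X_i^2)
    X_m = torch.sqrt(X_r ** 2 + X_i ** 2 + eps)
    X_m_hat = torch.sqrt(X_r_hat ** 2 + X_i_hat ** 2 + eps)
    loss_mag = F.mse_loss(X_m_hat, X_m)
    # complex (real/imaginary) loss
    loss_ri = F.mse_loss(X_r_hat, X_r) + F.mse_loss(X_i_hat, X_i)
    # time-domain L1 loss between clean and estimated waveforms
    loss_time = F.l1_loss(x_hat, x)
    return g1 * loss_mag + g2 * loss_ri + g3 * loss_time
```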
## 4. Experiments
### Dataset
Our models are trained on two distinct datasets: the Medley-solos-DB dataset (Miyi et al., 2017) (_solo_) for comparison with previous models, and MUSDB18 (Miyi et al., 2018) for general multi-track music enhancement. Initially, we use the _solo_ dataset excluding the distorted electric guitar class, which served as the training data for _Mel2Mel + Diffwave_ (Miyi et al., 2017), the prior state-of-the-art model. The dataset comprises 19,717 single-stem recordings of approximately 3 seconds each and is divided into training, validation, and test subsets containing 5,437, 2,999, and 11,281 samples, respectively. Additionally, we compare the performance of models using a piano-only _solo_ dataset, which includes a total of 5,672 piano samples: 2,041 for the training set, 1,022 for validation, and 2,609 for the test set. Furthermore, we employ the MUSDB18 (Miyi et al., 2018) dataset, consisting of 150 multi-track songs, for general multi-track music enhancement. Excluding the 50 songs designated for the test set, we randomly select 90 songs for the training set and 10 songs for the validation set. We then partitioned the MUSDB18 data into 3-second segments, resulting in 6,800, 847, and 4,166 samples for the training, validation, and test sets, respectively. We use only the mixture files from each song and segment them to match the length of samples in the _solo_ dataset.
To fairly compare our approach with the _Mel2Mel + Diffwave_ model, we adopt the same data simulation schemes as described in (Miyi et al., 2017). In order to create aligned pairs of clean high-quality and corrupted low-quality music, we convolve the clean music source with room impulse responses from the DNS Challenge dataset (Dong et al., 2018), simulating a reverberant environment. Additionally, we introduce realistic noise from the ACE challenge dataset (Dong et al., 2018), scaled according to randomly sampled signal-to-noise ratios (SNR) between 5dB and 30dB. We simulate various frequency responses of low-quality microphones by applying random gain between [-15dB, 15dB] to four different frequency bands ([0, 200], [200, 1000], [1000, 4000], and [4000, 8000] Hz). In accordance with (Miyi et al., 2017), we also implement a low-cut filter to remove nearly imperceptible frequencies below 35 Hz. Additionally, we normalize the waveforms by dividing each segment by its maximum absolute value and scaling them by a factor of 0.95.
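The corruption pipeline described above can be summarized in code; this is a simplified sketch of the simulation (the four-band EQ step is omitted for brevity, and `rir` and `noise` stand for a sampled room impulse response and noise clip — names and structure are ours, not the paper's exact implementation):

```python
import numpy as np
from scipy.signal import fftconvolve, butter, sosfilt

def corrupt(clean, rir, noise, snr_db, sr=16000):
    x = fftconvolve(clean, rir)[: len(clean)]     # simulate reverberation
    nz = noise[: len(x)]                          # noise clip, same length
    # scale the noise so the signal-to-noise ratio equals snr_db
    g = np.sqrt((x ** 2).sum() / ((nz ** 2).sum() * 10 ** (snr_db / 10)))
    x = x + g * nz
    # (random per-band gains in [-15, 15] dB would be applied here)
    sos = butter(4, 35, btype="highpass", fs=sr, output="sos")
    x = sosfilt(sos, x)                           # 35 Hz low-cut filter
    return 0.95 * x / np.abs(x).max()             # peak normalization
```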
### Implementation Details
We perform STFT with a Hamming window using an FFT size of 1024 samples and 75% overlap. All waveforms are processed at a 16 kHz sampling rate. For _Mel2Mel + Diffwave_, we follow the configuration described in (Miyi et al., 2017). Although (Miyi et al., 2017) proposes several training schemes for _Mel2Mel + Diffwave_, we employ the independent training method, which trains Mel2Mel and Diffwave separately. We use pre-trained weights from the official repository1 to estimate enhancement performance on samples from the _solo_ dataset. For the CMGAN generator bottleneck, we apply two TF-Conformer modules to each of our proposed models. The compression exponent, denoted as \(c\), is set to 0.3 for magnitude spectrograms. Meanwhile, the values for \(\gamma_{1}\), \(\gamma_{2}\), and \(\gamma_{3}\) in the loss function are set to 0.15, 0.85, and 0.1, respectively, based on a grid search. All the proposed models are trained for 50 epochs on both the _solo_ and MUSDB18 datasets. For the piano-only _solo_ dataset, the training epoch is set to 250. The optimization process employs the AdamW optimizer with a learning rate of 0.00005 and a batch size of 8.
Footnote 1: [https://github.com/nkandpa2/music_enhancement](https://github.com/nkandpa2/music_enhancement)
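For reference, the compressed spectrogram front end with these settings can be sketched as follows (FFT size 1024, 75% overlap, Hamming window, compression exponent \(c=0.3\)); the helper name is ours:

```python
import torch

def compressed_stft(x, n_fft=1024, c=0.3):
    hop = n_fft // 4                        # 75% overlap -> hop of 256
    win = torch.hamming_window(n_fft)
    Y = torch.stft(x, n_fft, hop_length=hop, window=win,
                   return_complex=True)     # complex spectrogram
    # power-law compress the magnitude, keep the phase: |Y|^c e^{j angle(Y)}
    return Y.abs() ** c * torch.exp(1j * torch.angle(Y))
```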
### Evaluation Metrics
We employ objective metrics in line with the baseline system, including frequency-weighted segmental SNR (fwSNR) (Miyi et al., 2017), multi-resolution spectrogram loss (MRS) (Shen et al., 2018), the \(l_{1}\)-spectrogram distance, and Fréchet Audio Distance (Fried et al., 2018). Moreover, we evaluate our models using the standard signal-to-distortion ratio (SDR) metric with
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Medley-solos-DB (_solo_)} & \multicolumn{4}{c}{Medley-solos-DB (piano-only)} & \multicolumn{4}{c}{MUSDB18} \\ \cline{2-13} & fwSNR \(\uparrow\) & MRS \(\downarrow\) & L1 \(\downarrow\) & SDR \(\uparrow\) & FAD \(\downarrow\) & fwSNR \(\uparrow\) & MRS \(\downarrow\) & L1 \(\downarrow\) & SDR \(\uparrow\) & FAD \(\downarrow\) & fwSNR \(\uparrow\) & MRS \(\downarrow\) & L1 \(\downarrow\) & SDR \(\uparrow\) & FAD \(\downarrow\) \\ \hline LQ & 5.68 & 1.75 & 2.69 & -2.69 & 5.70 & 6.88 & 1.98 & 3.09 & -2.90 & 6.87 & 6.72 & 1.64 & 2.36 & -3.39 & 11.30 \\ _M+D_(Miyi et al., 2017) & 8.03 & 1.45 & 2.24 & -2.31 & 3.68 & 8.96 & 1.44 & 2.56 & -1.33 & 4.05 & - & - & - & - & - \\ \hline TFC-C & 10.48 & 1.05 & 1.74 & 3.91 & 0.77 & 11.30 & 1.06 & 1.65 & 3.93 & 0.86 & 12.49 & 0.83 & 1.28 & 4.48 & 0.65 \\ TFC-P & 10.49 & 1.04 & 1.72 & 3.77 & 0.65 & 11.03 & 1.03 & 1.61 & 4.09 & 0.47 & **12.73** & **0.81** & 1.26 & 4.52 & 0.77 \\ TFC-PC & 10.10 & 1.05 & 1.71 & 3.69 & 1.02 & **11.37** & **0.99** & **1.57** & 4.27 & 0.50 & 12.22 & 0.83 & 1.28 & 4.43 & 0.77 \\ TFC-CPv & **10.69** & 1.01 & 1.65 & 3.91 & 0.68 & 11.21 & 1.03 & 1.65 & 4.02 & **0.30** & 12.57 & 0.84 & 1.30 & 4.21 & 0.66 \\ TFC-CPq & 10.51 & **0.99** & **1.63** & **4.13** & **0.62** & 11.07 & 1.03 & 1.62 & **4.43** & 0.45 & 12.68 & **0.81** & **1.25** & **4.66** & **0.53** \\ \hline TFC-CPq (M) & 9.73 & 1.16 & 1.93 & 3.90 & 2.38 & 10.99 & 1.36 & 2.28 & 4.10 & 3.26 & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1. Objective evaluation results for _solo_ and MUSDB18 dataset, where LQ and _M+D_ denotes low-quality musical recordings and _Mel2Mel + Diffwave_, respectively. On the Medley-solos-DB (with only piano) columns, we reported the evaluations trained only on piano samples of the _solo_ dataset. In the TFC-CPq (M) row, we report the evaluations of enhanced samples of the _solo_ dataset from proposed models trained on the MUSDB18 dataset.
museval2(Rendle et al., 2017), which is widely adopted in music source separation tasks (Beng et al., 2017; Chen et al., 2017; Chen et al., 2018).
Footnote 2: [https://github.com/sigsep/sigsep-mus-eval](https://github.com/sigsep/sigsep-mus-eval)
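A typical invocation of the museval package for the SDR computation looks as follows; the placeholder signals and the one-second evaluation windows are our assumptions, not the paper's stated settings:

```python
import numpy as np
import museval

sr = 16000
clean = np.random.randn(3 * sr)                 # placeholder 3 s reference
enhanced = clean + 0.01 * np.random.randn(3 * sr)

ref = clean[np.newaxis, :, np.newaxis]          # (nsrc, nsamples, nchannels)
est = enhanced[np.newaxis, :, np.newaxis]
sdr, isr, sir, sar = museval.evaluate(ref, est, win=sr, hop=sr)
print("median SDR (dB):", np.nanmedian(sdr))
```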
In addition to objective metrics, we assess our models through a subjective evaluation by conducting a Mean Opinion Score (MOS) test on Amazon Mechanical Turk. Listeners are asked to rate the acoustic quality of the samples on a scale from 1 to 5 based on unpleasant distortions, such as noise and reverberation. Each sample presented falls into one of the following: 1) clean music recordings or 2) clean music corrupted with random noise, reverberation, and four-frequency band equalization, as described in Section 4.1. To evaluate the system's performance at different noise levels, we organize the listening test by using corrupted samples with SNRs of 5, 10, and 15 dB. For reference ratings, we provided listeners with uncorrupted clean music recordings as an example of a score of 5, and identical recordings corrupted with 0dB Gaussian noise as an example of a score of 1. In total, we collect 1,950 responses from 65 participants for the solo dataset and 1,911 responses from 91 participants for the MUSDB18 dataset.
## 5. Results
### Objective Evaluation
Table 1 presents the objective measures of _Mel2Mel + Diffwave_ and our proposed models on the _solo_ dataset, the piano-only _solo_ dataset, and the MUSDB18 dataset. For the _solo_ dataset, our proposed models outperform the previous state-of-the-art model by a considerable margin. The TFC-CPq model achieves significant gains on all metrics except fwSNR, with SDR gains of 6.82dB and 6.44dB over the corrupted low-quality musical recordings and their enhanced outputs from _Mel2Mel + Diffwave_, respectively. The TFC-CPv model achieves the largest gain on fwSNR, 2.66 higher than _Mel2Mel + Diffwave_. Additionally, our models perform well on the FAD metric, which is closely aligned with perceptual quality in music enhancement tasks, according to (Beng et al., 2017). Among the TF-Conformer modules, TFC-PC has the worst performance on all metrics except the \(l_{1}\)-spectrogram distance, where the outcome is poorest for the TFC-C model.
We also train and evaluate our models on the piano-only _solo_ dataset, since there are officially available pre-trained parameters
Figure 4. Comparison between the _Mel2Mel+Diffwave_(Beng et al., 2017) and the proposed system with four different individual instruments in the Medley-solos-DB dataset. _Ground truth_ refers to the clean audio within the dataset, while _Low-quality_ denotes the synthesized noisy audio containing noise and reverberation. Our system more effectively denoises background noise, recovers high-frequency harmonics, and exhibits a more precise reconstruction of spectral features compared to the _Mel2Mel+Diffwave_.
of _Mel2Mel + Diffwave_. In this scenario, TFC-PC achieves the best results in fwSNR, \(l_{1}\)-spectrogram distance, and MRS. From the perspective of SDR, TFC-CPq performs best, demonstrating performance gains of 7.33dB and 5.76dB compared to low-quality samples and _Mel2Mel + Diffwave_-enhanced samples, respectively.
When it comes to enhancing the mixtures of MUSDB18, which is a multi-track dataset, the Diffwave vocoder is unable to learn how to transform mel-spectrograms to waveforms, generating only Gaussian noise. Compared with noisy inputs, we observe an 8.05dB gain in SDR for TFC-CPq. As in the case of the _solo_ dataset, TFC-CPq achieves the most significant gain on all metrics except fwSNR. The worst performances among the TF-Conformer modules are those of TFC-CPv on MRS, \(l_{1}\)-spectrogram distance, and SDR, and TFC-PC on fwSNR and FAD.
Additionally, we reported the evaluation results for _solo_ dataset samples enhanced by models trained on the MUSDB18 dataset in the last row of Table 1. One can observe that the enhanced results of the single-stem _solo_ dataset are comparable to those of _Mel2Mel + Diffwave_, even though the proposed models were trained on the multi-track mixtures of the MUSDB18 dataset, which has a different data distribution. This implies that our proposed models can enhance more general musical recordings.
### Analysis of Spectrograms for Enhanced Samples
Fig. 4 and Fig. 5 display the spectrograms of low-quality, enhanced, and ground truth musical recordings, including the outputs of the TFC-CPq module, which demonstrates the best performance as shown in Section 5.1. In Fig. 4, we present the outcomes for various instruments in the Medley-solos-DB dataset. It is evident that both _Mel2Mel + Diffwave_ and our proposed model effectively enhance low-quality music. In the flute example, we can observe that our model effectively removes the background clutter noise, which impacts a wide range of frequency bands in narrow time bins, particularly in the early frames. The piano example demonstrates that
Figure 5. Magnitude spectrogram results of MUSDB18 dataset enhanced by the proposed system. \(\Delta(GT,LQ)\) refers to the difference between the ground truth and low-quality music spectrograms, while \(\Delta(GT,Proposed)\) denotes the difference between the ground truth and the enhanced music spectrogram. In other words, within the \(\Delta(GT,\cdot)\) representation, red bins signify the absence of expected components, whereas blue bins denote the presence of undesired additional components.
our proposed model recovers harmonics in higher frequencies more precisely than the previous state-of-the-art model. In the violin and clarinet examples, both models effectively eliminate noise across most frequency bands; however, our proposed model demonstrates more precise restoration of spectral features. This could be attributed to the benefits of using a mask-based model over a generative model.
Figure 5 displays the results for the MUSDB18 dataset. As the Diffwave vocoder failed to converge during training on the MUSDB18 dataset, we only report the results for our proposed model. The performance is comparable to that observed on the Medley-solos-DB dataset. Example 1 demonstrates effective restoration of the low-frequency band, where the bass sound is primarily located. It is noteworthy that examples 2 and 3 exhibit proficient enhancement performance for different types of corruption within a similar frequency band. The low-quality audio in example 2 is corrupted by equalization, simulating the effect of a low-quality microphone. Our proposed model successfully recovers the corrupted spectral components. Example 3 presents audio corruption in a similar frequency band, in this case with an unexpectedly higher gain. Our model effectively suppresses this corruption as well, as evidenced by the difference in spectrograms between the low-quality and enhanced audio compared to the clean ground truth audio. On the other hand, example 4 presents a distinct scenario in which the corrupted frequency bands lie within a lower range compared to those in examples 2 and 3; here the spectral features are also successfully recovered. Furthermore, as depicted in example 5, our model is capable of enhancing music corrupted across both low- and high-frequency bands.
### Subjective Evaluation
Table 2 displays the Mean Opinion Scores (MOS) of the corrupted samples of the _solo_ dataset and the MUSDB18 dataset enhanced by the TFC-CPq model, which achieved the highest scores in FAD. For the _solo_ dataset, the MOS results indicate that our TFC-CPq model may have better perceptual quality than the previous _Mel2Mel + Diffwave_ model for all SNRs. Notably, for samples with 5dB SNR, TFC-CPq achieves significant MOS improvements of 1.15 for the _solo_ dataset and 0.40 for the MUSDB18 dataset compared to the low-quality samples. When compared to the _Mel2Mel + Diffwave_ model, TFC-CPq has an MOS gain of 0.06, 0.17, and 0.05 for SNRs of 5dB, 10dB, and 15dB, suggesting that TFC-CPq may tend to produce more perceptually pleasing samples.
Even for samples with 15dB SNR, where the presence of additive noise is relatively low, TFC-CPq scored 0.5 and 0.23 higher MOS than the low-quality samples on the two datasets, implying that our models can perceptually improve recordings degraded by reverberation and equalization. These results suggest that our proposed TFC-CPq model could potentially be effective in enhancing the subjective quality of music recordings across various SNRs. For the MUSDB18 dataset, the MOS results also show that our TFC-CPq model outperforms the low-quality baseline. Specifically, for samples with 5dB SNR, TFC-CPq achieves an MOS improvement of 0.40 compared to low-quality samples.
Despite these encouraging results, our models have some limitations. They often fail to denoise impulses, such as the sound of objects colliding. Additionally, since they recognize babble noise as vocals, our models tend to sharpen it rather than remove it. We will address these issues in future work.
In summary, the subjective evaluation results support the findings of the objective evaluation, demonstrating that our proposed TFC-CPq model achieves superior perceptual quality compared to the _Mel2Mel + Diffwave_ model and the low-quality baseline across various SNRs and datasets. The consistent performance of our model on both the _solo_ and MUSDB18 datasets suggests that our proposed methods can generalize well to different types of musical recordings.
## 6. Conclusions
In this study, we proposed the use of TF-Conformers to perform music enhancement tasks, which demonstrated significant improvements in performance compared to existing methods. Furthermore, we expanded our investigation to multi-track musical recordings, an area previously unexplored in the literature. In future works, our research will explore additional applications, such as music source separation and speech enhancement. Additionally, we plan to apply our method to larger real-world datasets, such as YouTube recordings, by utilizing unsupervised techniques to further advance the field of music enhancement.
## 7. Acknowledgements
This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT)[No. 2022-0-00641, XVoice: Multi-Modal Voice Meta Learning, 50%], [NO.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University), 10%], and Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2022 [No.R2022020066, 40%].
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & SNR 5 & SNR 10 & SNR 15 \\ \hline HQ & \multicolumn{3}{c}{\(4.05\pm 0.12\) (no noise)} \\ \hline LQ & \(2.61\pm 0.16\) & \(3.06\pm 0.17\) & \(3.30\pm 0.17\) \\ _M+D_ & \(3.62\pm 0.16\) & \(3.56\pm 0.16\) & \(3.75\pm 0.15\) \\ TFC-CPq & \(\mathbf{3.76\pm 0.16}\) & \(\mathbf{3.73\pm 0.16}\) & \(\mathbf{3.80\pm 0.15}\) \\ \hline \multicolumn{4}{c}{**(a) MOS results of _solo_ dataset.**} \\ \hline Model & SNR 5 & SNR 10 & SNR 15 \\ \hline HQ & \multicolumn{3}{c}{\(4.02\pm 0.09\) (no noise)} \\ \hline LQ & \(3.22\pm 0.12\) & \(3.50\pm 0.14\) & \(3.47\pm 0.14\) \\ TFC-CPq & \(\mathbf{3.62\pm 0.13}\) & \(\mathbf{3.64\pm 0.11}\) & \(\mathbf{3.70\pm 0.12}\) \\ \hline \hline \multicolumn{4}{c}{**(b) MOS results of MUSDB18 dataset.**} \\ \end{tabular}
\end{table}
Table 2. Mean Opinion Score (MOS) results for the _solo_ and MUSDB18 datasets.
2308.02946 | Solving a Random Asymmetric TSP Exactly in Quasi-Polynomial Time w.h.p | Let the costs $C(i,j)$ for an instance of the Asymmetric Traveling
Salesperson Problem (ATSP) be independent copies of an random variable $C$ that
(i) satisfies $\Pr(C\geq x)=1-x+O(x^2)$ as $x\to 0$ and (ii) has an exponential
tail. We describe an algorithm that solves ATSP exactly in time
$e^{\log^{2+o(1)}n}$, w.h.p. | Tolson Bell, Alan Frieze | 2023-08-05T20:02:20Z | http://arxiv.org/abs/2308.02946v12 | # On the expected efficiency of branch and bound for the asymmetric TSP
###### Abstract
Let the costs \(C(i,j)\) for an instance of the asymmetric traveling salesperson problem be independent uniform \([0,1]\) random variables. We consider the efficiency of branch and bound algorithms that use the assignment relaxation as a lower bound. We show that w.h.p. the number of steps taken in any such branch and bound algorithm is \(e^{\Omega(n^{a})}\) for some small absolute constant \(a>0\).
## 1 Introduction
Given an \(n\times n\) matrix \(C=(C(i,j))\) we can define two discrete optimization problems. Let \(S_{n}\) denote the set of permutations of \([n]=\{1,2,\ldots,n\}\). Let \(T_{n}\subseteq S_{n}\) denote the set of _cyclic_ permutations i.e. those permutations whose cycle structure consists of a single cycle. The _Assignment Problem_ (AP) is the problem of minimising \(C(\pi)=\sum_{i=1}^{n}C(i,\pi(i))\) over all permutations \(\pi\in S_{n}\). We let \(Z_{\mathrm{AP}}=Z_{\mathrm{AP}}^{(C)}\) denote the optimal cost for AP. The _Asymmetric Traveling-Salesperson Problem_ (ATSP) is the problem of minimising \(C(\pi)=\sum_{i=1}^{n}C(i,\pi(i))\) over all permutations \(\pi\in T_{n}\). We let \(Z_{\mathrm{ATSP}}=Z_{\mathrm{ATSP}}^{(C)}\) denote the optimal cost for ATSP.
Alternatively, the assignment problem is that of finding a minimum cost perfect matching in the complete bipartite graph \(K_{A,B}\) where \(A=\{a_{1},a_{2},\ldots,a_{n}\}\) and \(B=\{b_{1},b_{2},\ldots,b_{n}\}\) and the cost of edge \((a_{i},b_{j})\) is \(C(i,j)\). The Asymmetric Traveling-Salesperson Problem is that of finding a minimum cost Hamilton cycle in the complete digraph \(\vec{K}_{n}\) where the cost of edge \((i,j)\) is \(C(i,j)\).
It is evident that \(Z_{\mathrm{AP}}^{(C)}\leq Z_{\mathrm{ATSP}}^{(C)}\). The ATSP is NP-hard, whereas the AP is solvable in time \(O(n^{3})\). Several authors, e.g. Balas and Toth [3], Kalczynski [17], Miller and Pekny [22], Zhang [27] have investigated whether the AP can be used effectively in a branch-and-bound
algorithm to solve the ATSP and have observed that the AP gives extremely good bounds on random instances. Experiments suggest that if the costs \(C(i,j)\) are independently and uniformly generated as integers in the range \([0,L]\) then as \(L\) gets larger the problem gets harder to solve. Rigorous analysis supporting this thesis was given by Frieze, Karp and Reed [13]. They showed that if \(L(n)=o(n)\) then \(Z_{\rm ATSP}=Z_{\rm AP}\) w.h.p.1 and that w.h.p. \(Z_{\rm ATSP}>Z_{\rm AP}\) if \(L(n)/n\to\infty\). In some sense this shows why branch and bound is effective for small \(L\).
Footnote 1: with high probability, i.e., with probability 1-o(1) as \(n\to\infty\)
We implicitly study a case where \(L(n)/n\to\infty\). We will assume that the costs \(C(i,j)\) are now independent copies of the uniform \([0,1]\) random variable \(U[0,1]\). This model was first considered by Karp [18]. He proved the surprising result that
\[Z_{\rm ATSP}-Z_{\rm AP}=o(1)\ {\rm w.h.p.} \tag{1}\]
Since w.h.p. \(Z_{\rm AP}>1\) we see that this rigorously explained the observed quality of the assignment bound. Karp [18] proved (1) constructively, analysing an \(O(n^{3})\)_patching_ heuristic that transformed an optimal AP solution into a good ATSP solution. Karp and Steele [19] simplified and sharpened this analysis, and Dyer and Frieze [9] improved the error bound of a related more elaborate algorithm to \(O\left(\frac{(\ln n)^{4}}{n\ln\ln n}\right)\). Frieze and Sorkin [15] reduced the error bound to \(O\left(\frac{(\ln n)^{2}}{n}\right)\) w.h.p. One might think that, with such a small gap between \(Z_{\rm AP}\) and \(Z_{\rm ATSP}\), branch and bound might run in polynomial time w.h.p. Indeed one is encouraged by the recent results of Dey, Dubey and Molinaro [8] and Borst, Dadush, Huiberts and Tiwari [6] that with a similar integrality gap, branch and bound with LP based bounds solves random multi-dimensional knapsack problems in polynomial time w.h.p.
The algorithm with the best known worst-case time for solving the ATSP exactly is the \(O(n^{2}2^{n})\) dynamic programming algorithm of Held and Karp [16]. Given that \(Z_{\rm ATSP}-Z_{\rm AP}\) is usually so small, it is clearly of interest to see if using \(Z_{\rm AP}\) as a bound in a branch and bound algorithm will produce an optimum solution in polynomial time w.h.p. Frieze and Sorkin showed the following improvement over the worst-case:
W.h.p., a random instance of the ATSP can be solved exactly in time \(e^{\tilde{\rm O}(\sqrt{n})}\).
Bellmore and Malone [4] claimed that the expected running time of a branch and bound algorithm for this problem was bounded by \(O(n^{4})\). Lenstra and Rinnooy Kan [20] and Zhang [26] pointed out the failure to account for conditioning in this analysis.
Our main result shows that w.h.p. branch and bound does not run in polynomial time. Let \(D=\{(i,i):\ i\in[n]\}\). These edges will be excluded from consideration i.e. we make \(C(i,i)=\infty\) for \(i\in[n]\). In any use of \(Z_{\rm AP}\) as a lower bound to ATSP, it is natural to avoid using loops \((i,i)\). Unfortunately it is NP-hard to avoid cycles of length two.
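To make the setup concrete, the assignment lower bound on a random instance, with the loop edges \(D\) excluded, can be computed with an off-the-shelf solver; this sketch (the names and the large finite stand-in for \(\infty\) are ours) also counts the cycles of the optimal permutation, since \(Z_{\rm AP}=Z_{\rm ATSP}\) when the optimal permutation happens to be a single \(n\)-cycle:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 500
C = rng.random((n, n))        # independent U[0,1] costs C(i, j)
np.fill_diagonal(C, 1e9)      # exclude loops D: large finite cost for C(i, i)

row, col = linear_sum_assignment(C)   # optimal permutation: pi(i) = col[i]
Z_AP = C[row, col].sum()              # assignment lower bound

seen = np.zeros(n, dtype=bool)        # count the cycles of pi
cycles = 0
for s in range(n):
    if not seen[s]:
        cycles += 1
        j = s
        while not seen[j]:
            seen[j] = True
            j = col[j]
print(f"Z_AP = {Z_AP:.4f}, optimal assignment has {cycles} cycles")
```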
We now set up some notation for what we mean by a branch and bound algorithm. It is a rooted binary tree \(\mathcal{T}\) and each node \(\nu\) is labeled with a triple of disjoint sets \(F(\nu)=(F_{0}(\nu),F_{1}(\nu),G(\nu))\). \(F_{0}(\nu)\) is the set of edges which have been chosen to be excluded from
tours corresponding to this node. \(F_{1}(\nu)\) is the set of edges which must be included in all tours corresponding to this node and \(G(\nu)=(A\times B)\setminus(F_{0}(\nu)\cup F_{1}(\nu)\cup D)\). AP\((F)\) is the assignment problem with the additional restrictions imposed by \(F=F(\nu)\). The children \(\nu+,\nu-\) of \(\nu\) must be such that for some edge \(e\) we have \((F_{0}(\nu+),F_{1}(\nu+))=(F_{0}(\nu),F_{1}(\nu)\cup\{e\})\) and \((F_{0}(\nu-),F_{1}(\nu-))=(F_{0}(\nu)\cup\{e\}\,,F_{1}(\nu))\). In this way, one child adds \(e\) to every solution and the other excludes \(e\). There are some obvious restrictions placed on the set of edges in \(F_{1}\) that make them extendable to a tour. The digraph induced by \(F_{1}\neq\emptyset\) must have maximum in- and out- degree one and cannot contain a cycle \(C,|C|<n\). So, sometimes the child \(\nu+\) is not created. We say that the associated edge \(e\) is _inadmissible_ and we let \(\widehat{F}_{0}\) denote the set of inadmissible edges.
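The admissibility test for a candidate edge can be sketched directly from this description (the function and variable names are ours; \(F_{1}\) itself is assumed to be admissible already):

```python
def admissible(F1, e, n):
    """Can arc e = (u, v) be forced in, given the forced set F1?

    Admissible means: in- and out-degrees stay at most one, and no cycle
    of length < n is closed (a length-n cycle is a full tour).
    """
    u, v = e
    succ = {i: j for (i, j) in F1}   # forced successor of each tail
    pred = {j: i for (i, j) in F1}   # forced predecessor of each head
    if u == v or u in succ or v in pred:
        return False                 # loop, or a degree would exceed one
    arcs, j = 1, v                   # follow the forced path leaving v;
    while j in succ:                 # 'arcs' counts e plus followed arcs
        j = succ[j]
        arcs += 1
    return not (j == u and arcs < n) # returning to u closes a short cycle
```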
Making the branch and bound tree binary does not lose generality, as we may replace any rooted tree by a binary tree in a natural way, see Figure 1.
In summary
* \(F_{1}\) denotes the edges that are forced to be in the solution to \(AP(F)\) by branching.
* \(F_{0}\) denotes the edges that are forced to be out of the solution to \(AP(F)\) by branching.
* \(\widehat{F}_{0}\) denotes the edges that forced to be out of the solution to \(AP(F)\) through the inclusion of \(F_{1}\).
* \(D=\{(i,i):i\in[n]\}\).
* The optimal value for AP\((F)\) will be denoted by \(Z_{\rm AP}(F)\).
**Theorem 1**: _If \(\xi>0\) is a sufficiently small absolute constant then w.h.p. any branch and bound algorithm for the ATSP that uses \(Z_{\rm AP}\) as a lower bound and branches by including and excluding edges from the solution, must explore at least \(e^{\Omega(n^{\xi})}\) sub-problems._
(In the following analysis, one should equate \(\xi\) with \(\epsilon/3\).)
## 2 Outline Proof of Theorem 1
The result rests on two lemmas. The first lemma proves a high probability lower bound on \(Z_{\rm ATSP}-Z_{\rm AP}\). Throughout the paper \(\epsilon\) is a sufficiently small positive constant. We prove the following lemma in Section 4.
Figure 1: Removing a cycle
**Lemma 2**: \[\mathbf{Pr}\left(Z_{\mathrm{ATSP}}-Z_{\mathrm{AP}}\leq\frac{1}{n^{3/2}}\right)=o( 1).\]
\(\Box\)
We will find that w.h.p. there are many nodes of the branch and bound tree with restrictions \(F\), for which the optimal solution to \(\mathrm{AP}(F)\) will have cost less than \(Z_{\mathrm{AP}}+\frac{1}{n^{3/2}}\). These nodes will not be eliminated by bounding.
We let \(M_{F}\) denote the set of \(n\) edges in the perfect matching that solves \(\mathrm{AP}(F)\), this is unique with probability one. In the subsequent analysis, we concentrate on the case where
\[|F_{1}|\leq n^{3\epsilon/8}\text{ and }|F_{0}|\leq n^{3\epsilon/4}. \tag{2}\]
We prove the following lemma in Section 3.
**Lemma 3**: _W.h.p., simultaneously for every \(F\) that satisfies (2), there exist distinct perfect matchings \(M_{i}=M_{i}(F),\,i=1,2,\ldots,d=\left\lceil n^{\epsilon/3}\right\rceil\) which are feasible for \(\mathrm{AP}(F)\) and whose costs \(C(M_{i}),\,i=1,2,\ldots,d\) satisfy_
\[C(M_{i})-C(M_{F})\leq\frac{1}{n^{3/2+2\epsilon}}.\]
_Furthermore, there exist edges \(e_{1},e_{2},\ldots,e_{d}\notin F\) such that \(e_{i}\in M_{i}\setminus M_{j}\) for all \(1\leq i\neq j\leq d\)._
\(\Box\)
Assume the truth of Lemmas 2, 3 for the moment and let us see how to prove Theorem 1. W.h.p. we can generate a large number of distinct perfect matchings that have a lower cost than \(Z_{\mathrm{ATSP}}\). We construct a \(d\)-ary tree \(\mathcal{T}_{*}\) of depth at most \(d\), where each node \(\nu\) is labelled by a set of restrictions \(F^{(\nu)}=(F_{1}^{(\nu)},F_{0}^{(\nu)},\widehat{F}_{0}^{(\nu)})\) and a perfect matching \(M^{(\nu)}\). Our construction will be such that if node \(\mu\) is a child of node \(\nu\) then
\[F_{1}^{(\mu)}\supseteq F_{1}^{(\nu)}\text{ and }|F_{1}^{(\mu)}|=|F_{1}^{(\nu)}| +1\text{ and }F_{0}^{(\mu)}\supseteq F_{0}^{(\nu)}\text{ and }|F_{0}^{(\mu)}|=|F_{0}^{(\nu)}|+d-1. \tag{3}\]
Let \(e_{i},M_{i}(\emptyset),i=1,2,\ldots d\) be the edges and matchings promised by Lemma 3. (Here we are using the notation of Lemma 3, with \(F=\emptyset\) at the root.) Then for \(i=1,2,\ldots,d\), let \(F_{1}^{(i)}=\{e_{i}\}\) and \(F_{0}^{(i)}=\{e_{j}:j\neq i\}\). The child \(\rho_{i}\) of the root \(\rho\) will be labelled with \((F_{1}^{(i)},F_{0}^{(i)},\widehat{F}_{0}^{(i)})\) and \(M_{i}(\emptyset)\) for \(i=1,2,\ldots d\).
Suppose now that we have constructed \(1\leq k<d\) levels of \(\mathcal{T}_{*}\) and that \(\nu\) is a node at level \(k\). We see, assuming (3), that \(F^{(\nu)}\) satisfies (2). Let now \(e_{i},M_{i}(F^{(\nu)}),i=1,2,\ldots d\) be the edges and matchings promised by Lemma 3. The children of \(\nu\) will be denoted \(\mu_{1},\mu_{2},\ldots,\mu_{d}\). Then for \(i=1,2,\ldots,d\), let \(F_{1}^{(\mu_{i})}=F_{1}^{(\nu)}\cup\{e_{i}\}\) and \(F_{0}^{(\mu_{i})}=F_{0}^{(\nu)}\cup\{e_{j}:j\neq i\}\) and \(M^{(\mu_{i})}=M_{i}(F^{(\nu)})\), verifying (3).
The tree \(\mathcal{T}_{*}\) will have \(\lambda=d^{d}\) leaves \(L_{*}\). Furthermore, if \(\nu_{1},\nu_{2}\) are leaves of \(\mathcal{T}_{*}\) then \(M^{(\nu_{1})}\neq M^{(\nu_{2})}\). Let \(\mathcal{M}=\left\{M^{(\nu)}:\nu\in L_{*}\right\}\) and note that if \(\nu\in L_{*}\) then
\[Z_{\mathrm{AP}}(F^{(\nu)})\leq C(M^{(\nu)})\leq Z_{\mathrm{AP}}+\frac{d}{n^{3 /2+2\epsilon}}<Z_{\mathrm{ATSP}} \tag{4}\]
Next let \(X_{+}=\bigcap_{M\in\mathcal{M}}M\) and \(X_{-}=(A\times B)\setminus\bigcup_{M\in\mathcal{M}}M\) be the sets of edges in all or none of the matchings of \(\mathcal{M}\) respectively.
Now consider the actual branch and bound tree \(\mathcal{T}\). We first construct a smaller tree \(\widehat{\mathcal{T}}\subseteq\mathcal{T}\). If at a node \(\nu\) of \(\mathcal{T}\) we branch on \(e\in X_{-}\) then we remove \(\nu_{+}\) and its descendants and replace \(\nu\) by \(\nu_{-}\). If at a node \(\nu\) of \(\mathcal{T}\) we branch on \(e\in X_{+}\) then we remove \(\nu_{-}\) and its descendants and replace \(\nu\) by \(\nu_{+}\). We claim that \(|\widehat{\mathcal{T}}|\geq|\mathcal{M}|\), arguing by induction on depth. For each node \(\nu\) of \(\widehat{\mathcal{T}}\) we let \(\mathcal{M}_{\nu}\) denote the matchings in \(\mathcal{M}\) that satisfy the constraints of \(\nu\). Suppose that the children of the root of \(\mathcal{T}\) include/exclude the edge \(e\). Let \(\mathcal{M}_{+}=\{M\in\mathcal{M}:e\in M\}\) and \(\mathcal{M}_{-}=\mathcal{M}\setminus\mathcal{M}_{+}\). Both are non-empty and induction on depth tells us that \(|\widehat{\mathcal{T}}|\geq|\mathcal{M}_{+}|+|\mathcal{M}_{-}|=|\mathcal{M}|\). The basis of the induction is at the leaves of \(\widehat{\mathcal{T}}\), where there are nodes \(\nu\) for which \(\mathcal{M}_{\nu}=\emptyset\). A node for which \(\mathcal{M}_{\nu}\neq\emptyset\) would cause a branch. This proves Theorem 1.
## 3 Analysis of the Assignment Problem
Let \(K_{A,B:F}\) be obtained from the complete bipartite graph \(K_{A,B}\) by deleting the edges in \(D\cup F_{0}\cup\widehat{F}_{0}\). Recall that \(\mathrm{AP}(F)\) is the problem of finding a minimum cost perfect matching from \(A\) to \(B\) with restrictions defined by \(F\). Our immediate aim is to show that w.h.p. the optimal matching \(M_{F}^{*}\) does not use expensive edges.
Given the bipartite graph \(K_{A,B:F}\), any permutation \(\pi:A\to B\) has an associated matching \(M_{\pi}=\{(x,y):\;x\in A,\,y\in B,\,y=\pi(x)\}\), assuming that \(M_{\pi}\cap(D\cup F_{0}\cup\widehat{F}_{0})=\emptyset\). Define the \(k\)-neighborhood of a vertex \(v\in A\cup B\) to be the \(k\) closest neighbors of \(v\), where distance is given by the matrix \(C\); let the \(k\)-neighborhood of a set be the union of the \(k\)-neighborhoods of its vertices. In particular, for the bipartite graph \(K_{A,B:F}\) and any \(S\subseteq A\), \(T\subseteq B\) and any permutation \(\pi\),
\[N_{k}(S) =\{y\in B:\;\exists s\in S\mbox{ s.t. }(s,y)\mbox{ is one of the $k$ shortest edges of }K_{A,B:F}\mbox{ out of }s\}, \tag{5}\] \[N_{k}(T) =\{x\in A:\;\exists t\in T\mbox{ s.t. }(x,t)\mbox{ is one of the $k$ shortest edges of }K_{A,B:F}\mbox{ into }t\}. \tag{6}\]
All of the edges in \(N_{k}(S),N_{k}(T)\) are oriented from \(A\) to \(B\) and do not belong to \(M_{\pi}\). Given a cost matrix \(C\) and permutation \(\pi\) (perfect matching \(M_{\pi}=\{(i,\pi(i)):i\in[n]\}\)), define the digraph
\[\vec{D}_{F}=\vec{D}_{F}(C,\pi)=(A\cup B,\,\vec{E}_{\pi}) \tag{7}\]
consisting of _backwards_ matching edges and forward "short" edges: Let
\[\zeta=\lceil n^{\epsilon}\rceil\]
and
\[\vec{E}_{\pi}=\{(y,x):\;y\in B,\,x\in A,\,y=\pi(x)\}\cup\;\{(x,y) \notin F_{0}\cup F_{1}\cup\widehat{F}_{0}:\;x\in A,\,y\in N_{\zeta}(x)\}\] \[\cup\;\{(x,y)\notin F_{0}\cup F_{1}\cup\widehat{F}_{0}:\;y\in B, \,x\in N_{\zeta}(y)\}. \tag{8}\]
The edges of directed paths in \(\vec{D}_{F}\) are alternately forwards \(A\to B\) and backwards \(B\to A\) and so they correspond to alternating paths with respect to the perfect matching \(M_{\pi}\). The forward edges will replace the backward ones and so it helps to know (Lemma 4, next) that given \(x\in A,y\in B\) we can find an alternating path from \(x\) to \(y\) with \(O(1)\) edges. Let the unweighted/weighted \(A:B\) diameter denote the maximum over \(a\in A,b\in B\) of the minimum number/weight of edges in a path from \(a\) to \(b\) in \(\vec{D}_{F}\).
**Lemma 4**: _Suppose that \(F\) satisfies (2). Then over random cost matrices \(C\), for every permutation \(\pi\),_
\[{\bf Pr}(\mbox{the unweighted $A:B$ diameter of $\vec{D}_{F}\geq 3/\epsilon$})\leq e^{-\zeta/4}.\]
**Proof.** Let \(\beta=\zeta/10\). We first estimate the probability that for all \(S\subseteq A\) with \(|S|\leq\left\lceil\frac{2n}{3\beta}\right\rceil\), \(|N_{\zeta}(S)|\geq\beta|S|\). Note that only the cheap edges out of \(S\), and not the backward matching edges into it, will be involved here. Note also that, because \(|F_{1}|+|F_{0}|\ll\zeta\), at most \(\zeta\) edges of \(K_{A,B}\) are excluded in \(K_{A,B:F}\) from those incident with a fixed vertex. (At most \(o(\zeta)\) from \(F_{0}\) and at most \(o(\zeta)\) from \(\widehat{F}_{0}\).)
\[{\bf Pr}(\exists S:\;|S|\leq\lceil 2n/(3\beta)\rceil\,,\,|N_{\zeta}(S)|<\beta|S|) \leq\sum_{s=1}^{\lceil 2n/(3\beta)\rceil}{n\choose s}{n\choose\beta s}\left(\frac{\beta s}{n-\zeta}\right)^{\zeta s}\leq\sum_{s=1}^{\lceil 2n/(3\beta)\rceil}\left(\frac{ne}{s}\right)^{s}\left(\frac{ne}{\beta s}\right)^{\beta s}\left(\frac{\beta s}{n-\zeta}\right)^{\zeta s}\leq\sum_{s=1}^{\lceil 2n/(3\beta)\rceil}\left(\left(\frac{\beta s}{n}\right)^{\zeta-\beta-1}e^{\beta+1}e^{2\zeta^{2}/n}\beta\right)^{s}\leq e^{-\zeta/4}. \tag{9}\]
Similarly, with probability at least \(1-e^{-\zeta/4}\), for all \(T\subseteq B\) with \(|T|\leq\lceil 2n/(3\beta)\rceil\), \(|N_{\zeta}(T)|\geq\beta|T|\). (Again only non-matching edges are involved.)
In the remainder of this proof, assume that we are in the high-probability "good" case, in which all small sets \(S\) and \(T\) have large vertex expansion.
Now, choose an arbitrary \(x\in A\), and define \(S_{0},S_{1},S_{2},\dots\), by
\[S_{0}=\{x\}\mbox{ and }S_{i}=\pi^{-1}(N_{\zeta}(S_{i-1})).\]
Since we are in the good case, \(|S_{i}|\geq\beta|S_{i-1}|\) provided \(|S_{i-1}|\leq 2n/(3\beta)\), and so there exists a smallest index \(i_{S}-1\leq\log_{\beta}(2n/(3\beta))\leq\log_{\beta}n-1\) such that \(|S_{i_{S}-1}|>2n/(3\beta)\). Arbitrarily discard vertices from \(S_{i_{S}-1}\) to create a smaller set \(S^{\prime}_{i_{S}-1}\) with \(|S^{\prime}_{i_{S}-1}|=\lceil 2n/(3\beta)\rceil\), so that \(S^{\prime}_{i_{S}}=N_{\zeta}(S^{\prime}_{i_{S}-1})\) has cardinality \(|S^{\prime}_{i_{S}}|\geq\beta|S^{\prime}_{i_{S}-1}|\geq 2n/3\).
Similarly, for an arbitrary \(y\in B\), define \(T_{0},T_{1},\dots\), by
\[T_{0}=\{y\}\mbox{ and }T_{i}=\pi(N_{\zeta}(T_{i-1})).\]
Again, we will find an index \(i_{T}\leq\log_{\beta}n\) whose modified set has cardinality \(|T^{\prime}_{i_{T}}|\geq 2n/3\).
With both \(|S^{\prime}_{i_{S}}|\) and \(|T^{\prime}_{i_{T}}|\) larger than \(n/2\), there must be some \(x^{\prime}\in S^{\prime}_{i_{S}}\) for which \(y^{\prime}=\pi(x^{\prime})\in T^{\prime}_{i_{T}}\). This establishes the existence of a walk and hence a path of length at most \(2(i_{S}+i_{T})\leq 2\log_{\beta}n\approx 2/\epsilon\) from \(x\) to \(y\) in \(\vec{D}_{F}\). \(\Box\)
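The expansion argument is easy to simulate. The following numpy sketch (my own illustration; it ignores the \(F\)-restrictions and takes \(N_{\zeta}(i)\) to be the \(\zeta\) cheapest edges out of \(i\)) runs the iteration \(S_{0}=\{x\}\), \(S_{i}=\pi^{-1}(N_{\zeta}(S_{i-1}))\) and records the set sizes:

```python
import numpy as np

def expansion_sizes(C, pi, x, zeta):
    """C: n x n cost matrix; pi: matching as an array (row i matched to pi[i]).
    Returns |S_0|, |S_1|, ... until the set reaches size 2n/3."""
    n = C.shape[0]
    nbrs = np.argsort(C, axis=1)[:, :zeta]   # zeta cheapest edges out of each i
    inv = np.empty(n, dtype=int)
    inv[pi] = np.arange(n)                   # the inverse permutation pi^{-1}
    S, sizes = {x}, [1]
    while sizes[-1] < 2 * n / 3 and len(sizes) <= n:
        T = {int(j) for i in S for j in nbrs[i]}  # N_zeta(S), a subset of B
        S = {int(inv[j]) for j in T}              # pull back through the matching
        sizes.append(len(S))
    return sizes
```

For uniform random costs and \(\zeta=\lceil n^{\epsilon}\rceil\) the recorded sizes grow geometrically with ratio about \(\beta\), which is the content of the estimate (9).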
Let
\[\gamma_{\epsilon}=\frac{30\zeta}{\epsilon n}\]
**Corollary 5**: _Suppose that \(F\) satisfies (2). Then over random cost matrices \(C\), for every permutation \(\pi\),_
\[{\bf Pr}\left(\mbox{the weighted diameter of }\vec{D}_{F}\geq\gamma_{ \epsilon}\right)\leq e^{-\zeta/5}.\]
**Proof.** The Chernoff bounds imply that with probability at least \(1-ne^{-\zeta}\), every edge \((x,y)\) where \(y\in N_{\zeta}(x)\) satisfies \(C(x,y)\leq 10\zeta/n\). The corollary now follows from Lemma 4. \(\Box\)
It follows from this that the minimum cost matching \(M^{*}_{F}\) only contains edges of cost at most \(\gamma_{\epsilon}\).
**Lemma 6**: _Suppose that \(F\) satisfies (2). Then over random cost matrices \(C\), for every permutation \(\pi\),_
\[{\bf Pr}\left(\exists(a_{i},b_{j})\in M^{*}_{F}:C(i,j)\geq\gamma_{\epsilon} \right)\leq e^{-\zeta/5}.\]
**Proof.** If there was an edge \(e=(a_{i},b_{j})\) of cost greater than \(\gamma_{\epsilon}\) in \(M^{*}_{F}\) then we can reduce the cost of \(M^{*}_{F}\) by deleting \(e\) and using an alternating path from \(a_{i}\) to \(b_{j}\) of weight at most \(\gamma_{\epsilon}\) to find a lower cost matching. \(\Box\)
The number of choices for \(F\) satisfying (2) is at most \({n^{2}\choose n^{3\epsilon/8}}{n^{2}\choose n^{3\epsilon/4}}\) and so by the union bound
\[{\bf Pr}\left(\exists F:\exists(a_{i},b_{j})\in M^{*}_{F}:C(i,j)\geq\gamma_{ \epsilon}\right)\leq{n^{2}\choose n^{3\epsilon/8}}{n^{2}\choose n^{3 \epsilon/4}}e^{-\zeta/5}=o(1).\]
### Proof of the lower bound
The assignment problem \(AP(F)\) has a linear programming formulation \(LP_{F}\). Let \(A_{F}=\{x\in A:\not\exists y\in B\mbox{ such that }(x,y)\in F_{1}\}\) and \(B_{F}=\{y\in B:\not\exists x\in A\mbox{ such that }(x,y)\in F_{1}\}\). Let \(\Omega_{F}=(A_{F}\times B_{F})\setminus(F_{0}\cup\widehat{F}_{0}\cup D)\). In the following \(z_{i,j}\) indicates whether or not \((a_{i},b_{j})\) is an edge of the optimal solution.
\[\begin{array}{ll}{\cal LP}_{F}&\mbox{ Minimise }\sum_{(i,j)\in\Omega_{F}}C(i,j)z_{i,j}\\ &\mbox{ subject to }\sum_{j:(i,j)\in\Omega_{F}}z_{i,j}=1,\forall i\in A_{F}.\\ &\sum_{i:(i,j)\in\Omega_{F}}z_{i,j}=1,\forall j\in B_{F}.\\ &0\leq z_{i,j}\leq 1,\forall(i,j)\in\Omega_{F}.\end{array} \tag{10}\]
This has the dual linear program:
\[\begin{array}{ll}{\cal DLP}_{F}&\mbox{ Maximise }\sum_{i\in A_{F}}u_{i}+\sum_{j\in B_{F}}v_{j}\\ &\mbox{ subject to }u_{i}+v_{j}\leq C(i,j),\forall(i,j)\in\Omega_{F}.\end{array} \tag{11}\]
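For experimentation, \(\mathrm{AP}(F)\) is easy to solve numerically. The sketch below is my own encoding, not the paper's: the excluded pairs (\(D\), \(F_{0}\) and \(\widehat{F}_{0}\), passed together as one set) receive a prohibitive cost, while each pair in \(F_{1}\) receives a large reward, so that every optimal assignment uses all forced edges whenever \(F\) is feasible.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_AP(C, F1=(), F0=()):
    """C: n x n cost matrix; F1: forced edges; F0: all excluded edges.
    Returns the optimal restricted matching and its true cost."""
    n = C.shape[0]
    BIG = n * float(C.max()) + 1.0           # exceeds the cost of any matching
    cost = C.astype(float).copy()
    cost[np.arange(n), np.arange(n)] = BIG   # the diagonal D is excluded
    for (i, j) in F0:
        cost[i, j] = BIG
    for (i, j) in F1:
        cost[i, j] = -BIG                    # reward makes (i, j) mandatory
    rows, cols = linear_sum_assignment(cost)
    M = list(zip(rows.tolist(), cols.tolist()))
    assert all(cost[i, j] < BIG for (i, j) in M), "AP(F) is infeasible"
    return M, float(sum(C[i, j] for (i, j) in M))
```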
We now let
\[n_{F}=n-|F_{1}|\mbox{ and }m_{F}=|\Omega_{F}|.\]
**Remark 7**: _Condition on an optimal basis for (10). We may w.l.o.g. take \(u_{1}=0\) in (11), whereupon with probability 1 the other dual variables are uniquely determined. Furthermore, the reduced costs of the non-basic variables \(\bar{C}(i,j)=C(i,j)-u_{i}-v_{j}\) are independently and uniformly distributed, with \(\bar{C}(i,j)=U[\max\left\{0,-u_{i}-v_{j}\right\},1-u_{i}-v_{j}]\). Note also that this implies that with probability one, \(\bar{C}(i,j)>0\) for all non-basic \((i,j)\)._
**Proof.** The \(2n_{F}-1\) dual variables are unique with probability 1 because they satisfy \(2n_{F}-1\) full rank linear equations. The only conditions on the non-basic edge costs are that \(C(i,j)\in[0,1]\) (equivalently \(\bar{C}(i,j)\in[-u_{i}-v_{j},1-u_{i}-v_{j}]\)) and \(\bar{C}(i,j)\geq 0\); intersecting these intervals yields the last claim. \(\Box\)
### Trees and bases
An optimal basis of \({\cal LP}_{F}\) can be represented by a spanning tree \(T_{F}^{*}\) of \(K_{A,B}\) that contains the perfect matching \(M^{*}\), see for example Ahuja, Magnanti and Orlin [1], Chapter 11. We have that for every optimal basis \(T_{F}^{*}\),
\[C(i,j)=u_{i}+v_{j}\mbox{ for }(a_{i},b_{j})\in E(T_{F}^{*}) \tag{12}\]
\[C(i,j)\geq u_{i}+v_{j}\mbox{ for }(a_{i},b_{j})\notin E(T_{F}^{*}). \tag{13}\]
Note that if \(\lambda\) is arbitrary then replacing \(u_{i}\) by \(\widehat{u}_{i}=u_{i}-\lambda,i=1,2,\ldots,n\) and \(v_{i}\) by \(\widehat{v}_{i}=v_{i}+\lambda,i=1,2,\ldots,n\) has no effect on these constraints. We say that \({\bf u},{\bf v}\) and \(\widehat{\bf u},\widehat{\bf v}\) are equivalent. It follows that we can always fix the value of one component of \({\bf u},{\bf v}\). In the following, we fix \(u_{imin}=0\) where \(imin=\min\left\{l:a_{l}\in A_{F}\right\}\).
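Computationally, (12) determines the dual variables by a single traversal of the basis tree: fix the value at one root vertex and propagate \(u_{i}+v_{j}=C(i,j)\) along tree edges. A small Python sketch of this step (my own helper; the root is passed in rather than chosen as \(u_{imin}\)):

```python
from collections import deque, defaultdict

def duals_from_tree(tree_edges, C, root=0):
    """tree_edges: list of pairs (i, j) meaning (a_i, b_j) in the basis tree;
    C[i][j] is the edge cost. Returns dicts u (on A) and v (on B)."""
    adj = defaultdict(list)
    for (i, j) in tree_edges:
        adj[('A', i)].append(('B', j))
        adj[('B', j)].append(('A', i))
    u, v = {root: 0.0}, {}
    queue, seen = deque([('A', root)]), {('A', root)}
    while queue:
        side, x = queue.popleft()
        for (side2, y) in adj[(side, x)]:
            if (side2, y) in seen:
                continue
            seen.add((side2, y))
            if side == 'A':
                v[y] = C[x][y] - u[x]   # edge (a_x, b_y): u_x + v_y = C(x, y)
            else:
                u[y] = C[y][x] - v[x]   # edge (a_y, b_x): u_y + v_x = C(y, x)
            queue.append((side2, y))
    return u, v
```

The reduced cost of any non-tree pair is then \(\bar{C}(i,j)=C(i,j)-u_{i}-v_{j}\), and (13) says these are all non-negative at an optimal basis.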
**Lemma 8**: _Let \({\cal E}_{1}=\{\max_{i,j}\left\{|u_{i}|,|v_{j}|\right\}\leq 2\gamma_{\epsilon}\}\). Then,_
\[{\bf Pr}({\cal E}_{1})\geq 1-ne^{-\zeta}. \tag{14}\]
**Proof.** For each \(a_{i}\in A_{F}\) there is some \(b_{j}\in B_{F}\) such that \(u_{i}+v_{j}=C(i,j)\). This is because of the fact that \(a_{i}\) meets at least one edge of \(T\) and we assume that (12) holds. We also know that if (13) holds then \(u_{i^{\prime}}+v_{j}\leq C(i^{\prime},j)\) for all \(i^{\prime}\neq i\). It follows that \(u_{i}-u_{i^{\prime}}\geq C(i,j)-C(i^{\prime},j)\geq-\gamma_{\epsilon}\) for all \(i^{\prime}\neq i\). Since \(i\) is arbitrary, we deduce that \(|u_{i}-u_{i^{\prime}}|\leq\gamma_{\epsilon}\) for all \(i,i^{\prime}\in A_{F}\). Because \(u_{imin}=0\), this implies that \(|u_{i}|\leq\gamma_{\epsilon}\) for \(i\in A_{F}\). We deduce by a similar argument that \(|v_{j}-v_{j^{\prime}}|\leq\gamma_{\epsilon}\) for all \(j,j^{\prime}\in B_{F}\). Now because for the optimal matching edges \((i,\phi(i)),i\in A_{F}\) we have \(u_{i}+v_{\phi(i)}=C(i,\phi(i))\), we see that \(|v_{j}|\leq 2\gamma_{\epsilon}\) for \(j\in B_{F}\). The probability bound follows from Lemma 6. \(\Box\)
Fix \(M_{F}^{*}\) and let \(K({\bf u},{\bf v})\) be the subgraph of \(K_{A,B:F}\) induced by the edges \((a_{i},b_{j})\) for which \(u_{i}+v_{j}\geq 0\). We need to know that w.h.p. each vertex \(a_{i}\in A_{F}\) is connected in \(K_{A,B:F}\) to many \(b_{j}\in B_{F}\) for which \(u_{i}+v_{j}\geq 0\).
We fix a tree \(T\) and condition on \(T_{F}^{*}=T\). Let an edge \((a_{i},b_{j})\notin E(T)\) be _non-degenerate_ if a simplex pivot on this edge leads to a change in perfect matching.
For \(i\in A_{F}\) let \(L_{i,+}=\left\{j:(i,j)\mbox{ is non-degenerate}\right\}\), \(L_{j,-}=\left\{i:(i,j)\mbox{ is non-degenerate}\right\}\). Then for \(i=1,2,\ldots,n\) let \({\cal A}_{i,+}\) be the event that \(|\left\{j\in L_{i,+}:u_{i}+v_{j}>0\right\}|\leq\eta n\) and let \({\cal A}_{j,-}\) be the event that \(|\left\{i\in L_{j,-}:u_{i}+v_{j}>0\right\}|\leq\eta n\) where \(\eta\) will be some small positive constant.
**Lemma 9**: _Fix a spanning tree \(T\) of \(K_{A,B:F}\)._
\[{\bf Pr}({\cal A}_{i,+}\vee{\cal A}_{j,-}\mid T_{F}^{*}=T)=O(ne^{-\zeta}) \mbox{ for }i,j=1,2,\ldots,n.\]
**Proof.** In the following analysis \(T\) is fixed. Throughout the proof we assume that the costs \(C(i,j)\) for \((a_{i},b_{j})\in T\) are distributed as independent \(U[0,\gamma_{\epsilon}]\). Lemma 6 is the justification for this in that we can solve the assignment problem, only using edges of cost at most \(\gamma_{\epsilon}\). Furthermore, in \(K_{A,B:F}\), the number of edges of cost at most \(\gamma_{\epsilon}\) incident with a fixed vertex is dominated by \(Bin(n,\gamma_{\epsilon})\) and so by the Chernoff bound and by the union bound over choices for \(F\), we see that w.h.p. the maximum degree of the trees we consider can be bounded by \(2n^{2\epsilon}\).
We fix \(s\) and put \(u_{s}=0\). The remaining values \(u_{i},i\neq s,v_{j}\) are then determined by the costs of the edges of the tree \(T\). Let \({\cal B}\) be the event that \(C(i,j)\geq u_{i}+v_{j}\) for \((a_{i},b_{j})\notin E(T)\). Note that if \({\cal B}\) occurs then \(T_{F}^{*}=T\).
Let \(\mathcal{E}_{1}\) be as defined prior to (14). It follows from Lemma 8 that \(\mathcal{B}\subseteq\mathcal{E}_{1}\) and that \(\operatorname{\mathbf{Pr}}(\mathcal{E}_{1})\geq 1-ne^{-\zeta}\).
We now condition on the set \(E_{T}\) of edges (and the associated costs) of \(\{(a_{i},b_{j})\notin E(T)\}\) such that \(C(i,j)\geq 2\gamma_{\epsilon}\). Let \(X_{T}=\{(a_{i},b_{j})\notin E(T)\}\setminus E_{T}\). Note that \(|X_{T}|\) is dominated by \(Bin(n^{2},2\gamma_{\epsilon})\) and so \(|X_{T}|\leq 3n^{2}\gamma_{\epsilon}\) with probability \(1-e^{-\Omega(n^{2}\gamma_{\epsilon})}\).
Let \(Y=\{C(i,j):(a_{i},b_{j})\in E(T)\}\) and let \(\delta_{1}(Y)\) be the indicator for \(\mathcal{A}_{s,+}\wedge\mathcal{E}_{1}\). We write
\[\operatorname{\mathbf{Pr}}(\mathcal{A}_{s,+}\mid\mathcal{B})=\operatorname{ \mathbf{Pr}}(\mathcal{A}_{s,+}\wedge\mathcal{E}_{1}\mid\mathcal{B})=\frac{ \int\delta_{1}(Y)\operatorname{\mathbf{Pr}}(\mathcal{B}\mid Y)d\operatorname {\mathbf{Pr}}}{\int\operatorname{\mathbf{Pr}}(\mathcal{B}\mid Y)d\operatorname {\mathbf{Pr}}} \tag{15}\]
Then we note that since \((a_{i},b_{j})\notin X_{T}\cup E(T)\) satisfies the condition (13),
\[\operatorname{\mathbf{Pr}}(\mathcal{B}\mid Y) =\prod_{(a_{i},b_{j})\in X_{T}}\left(1-(u_{i}(Y)+v_{j}(Y))^{+}\right)\] \[\leq\prod_{(a_{i},b_{j})\in X_{T}}\exp\left\{-(u_{i}(Y)+v_{j}(Y) )^{+}\right\}=e^{-W}, \tag{16}\]
where \(W=W(Y)=\sum_{(a_{i},b_{j})\in X_{T}}(u_{i}(Y)+v_{j}(Y))^{+}\leq 12n^{2}\gamma_{ \epsilon}^{2}=O(n^{2\epsilon})\). Then we have
\[\int_{Y}\delta_{1}(Y)\operatorname{\mathbf{Pr}}(\mathcal{B}\mid Y )\;d\operatorname{\mathbf{Pr}} =\int_{Y}e^{-W}\delta_{1}(Y)\;d\operatorname{\mathbf{Pr}}\] \[\leq\left(\int_{Y}e^{-2W}\;d\operatorname{\mathbf{Pr}}\right)^{1 /2}\times\left(\int_{Y}\delta_{1}(Y)^{2}\;d\operatorname{\mathbf{Pr}}\right) ^{1/2}\] \[=e^{-\operatorname{\mathbf{E}}(W)}\left(\int_{Y}e^{-2(W- \operatorname{\mathbf{E}}(W))}d\operatorname{\mathbf{Pr}}\right)^{1/2}\times \operatorname{\mathbf{Pr}}(\mathcal{A}_{s,+}\mid\mathcal{E}_{1})^{1/2}\] \[\leq e^{-\operatorname{\mathbf{E}}(W)}e^{O(n^{2\epsilon})} \operatorname{\mathbf{Pr}}(\mathcal{A}_{s,+}\mid\mathcal{E}_{1})^{1/2}. \tag{17}\] \[\int\operatorname{\mathbf{Pr}}(\mathcal{B}\mid Y)d\operatorname{ \mathbf{Pr}} =\operatorname{\mathbf{E}}\left(\prod_{(a_{i},b_{j})\in X_{T}} \left(1-(u_{i}(Y)+v_{j}(Y))^{+}\right)\right)\] \[\geq\operatorname{\mathbf{E}}\left(\prod_{(a_{i},b_{j})\in X_{T} }e^{-(u_{i}(Y)+v_{j}(Y))/(1-2\gamma_{\epsilon})}\right)\] \[\geq\operatorname{\mathbf{E}}(e^{-(1+3\gamma_{\epsilon})W})\] \[\geq e^{-(1+3\gamma_{\epsilon})\operatorname{\mathbf{E}}(W)}. \tag{18}\]
It then follows from (15),(17) and (18) that
\[\operatorname{\mathbf{Pr}}(\mathcal{A}_{s,+}\mid\mathcal{B})\leq e^{O(n^{2 \epsilon})}\operatorname{\mathbf{Pr}}(\mathcal{A}_{s,+}\mid\mathcal{E}_{1}) \tag{19}\]
Let \(b_{j}\) be a neighbor of \(a_{s}\) in \(K_{A,B:F}\) and let \(P_{j}=(i_{1}=s,j_{1},i_{2},j_{2},\ldots,i_{k},j_{k}=j)\) define the path from \(a_{s}\) to \(b_{j}\) in \(T\). Then it follows from (12) that \(v_{j_{l}}=v_{j_{l-1}}-C(i_{l},j_{l-1})+C(i_{l},j_{l})\). Thus \(v_{j}\) is the final value \(S_{k}\) of a random walk \(S_{t}=X_{0}+X_{1}+\cdots+X_{t},t=0,1,\ldots,k\), where
\(X_{0}\geq 0\) and each \(X_{t},t\geq 1\) is the difference between two independent copies of \(U[0,\gamma_{\epsilon}]\). Given \({\cal E}_{1}\) we can assume that the partial sums \(S_{i}\) satisfy \(|S_{i}|\leq 2\gamma_{\epsilon}\) for \(i=1,2,\ldots,k-1\). Assume for the moment that \(k\geq 4\) and let \(x=u_{i_{k-3}}\in[-2\gamma_{\epsilon},2\gamma_{\epsilon}]\). Given \(x\) we see that there is some positive probability \(p_{0}=p_{0}(x)\) that \(S_{k}>0\). Indeed,
\[p_{0}={\bf Pr}(S_{k}>0\mid{\cal E}_{1})={\bf Pr}(x+Z_{1}-Z_{2}>0\mid{\cal E}_{1 })\geq{\bf Pr}(x+Z_{1}-Z_{2}>0)-ne^{-\zeta}, \tag{20}\]
where \(Z_{1}=Z_{1,1}+Z_{1,2}+Z_{1,3}\) and \(Z_{2}=Z_{2,1}+Z_{2,2}\) are the sums of independent \(U[0,\gamma_{\epsilon}]\) random variables, each conditioned on being bounded above by \(\gamma_{\epsilon}\) and such that \(|x+\sum_{j=1}^{t}(Z_{1,j}-Z_{2,j})|\leq 2\gamma_{\epsilon}\) for \(t=1,2\) and that \(|x+Z_{1}-Z_{2}|\leq 2\gamma_{\epsilon}\). The absolute constant \(\eta_{0}=p_{0}(-2\gamma_{\epsilon})>0\) is such that \(\min_{x\geq-2\gamma_{\epsilon}}p_{0}(x)\geq\eta_{0}\).
We have so far demonstrated that if \((a_{i},b_{j})\) is non-basic then there is a probability of at least \(\eta_{0}\) that \(u_{i}+v_{j}>0\). We need to be careful, because some pivots are degenerate. We avoid this problem as follows: if we delete \((a_{i},b_{\phi(i)})\in M^{*}\) from \(T_{F}^{*}\) then we obtain two trees, one of which, \(T_{i}\) say, will have at least \(n_{F}\) vertices. Assume without loss of generality that \(a_{i}\in T_{i}\) and let \(B_{i}=V(T_{i})\cap B\) and note that \(|B_{i}|\geq n_{F}/2\). The point of this partition is that adding the edge \((a_{i},b)\), \(b\in B_{i}\), to \(T_{F}^{*}\) creates a cycle that alternates between edges in \(M^{*}\) and edges that are not in \(M^{*}\). So, if \(u_{i}+v_{j}>0\) then \((i,j)\) is non-degenerate and the corresponding pivot produces a new perfect matching with an increase in cost of \(\bar{C}(i,j)\).
We partition (most of) the \(B_{i}\)-neighbors of \(a_{s}\) into \(N_{0},N_{1},N_{2}\), \(N_{t}=\{b_{j}:k\geq 3,k\mod 3=t\}\), \(k\) being the number of edges in the path \(P_{j}\) from \(a_{s}\) to \(b_{j}\). Now because \(T\) has maximum degree \(2n^{2\epsilon}\), as observed at the beginning of the proof of this lemma, we know that there exists \(t\) such that \(|N_{t}|\geq(n_{F}/2-(2n^{2\epsilon})^{3})/3-|F_{1}|\geq n/7\). It then follows from (20) that \(|L_{s,+}|\) dominates \(Bin(n/7,\eta_{0})\) and then \({\bf Pr}(|L_{s,+}|\leq\eta_{0}n/10)=O(e^{-\Omega(n)})\) follows from the Chernoff bounds. Similarly for \(L_{s,-}\). Applying the union bound over the \(n\) choices for \(s\) and applying (19) gives the lemma with \(\eta=\eta_{0}/10\). \(\Box\)
Conditional on events with probability of failure \(O(ne^{-\zeta})\), the number of non-degenerate non-basic edges dominates \(Bin(n^{2}/2-o(n^{2}),\eta n^{-(3/2+\epsilon)})\) and each such edge yields an \(M_{i}\). Furthermore, the number of choices for \(F\) satisfying (2) is at most \(\binom{n^{2}}{n^{3\epsilon/8}}\binom{n^{2}}{n^{3\epsilon/4}}=o(n^{-1}e^{\zeta})\) and so we take a union bound over \(F\). Lemma 3 follows immediately.
## 4 Lower bound on TSP/Assignment gap - Proof of Lemma 2
This section deals with AP, ATSP without any restrictions, other than \(z_{i,i}=0,i\in[n]\). For notational reasons, we will assume for this section that \(A=[n]\) and that \(B\) is a disjoint copy of \(A\). This is so that we can refer to the edges of \(K_{A,B}\) as \((i,j)\) where \(i,j\in[n]\). We hope that this does not cause confusion. \(i\) will always refer to the \(A\)-side and \(j\) will always refer to the \(B\)-side.
Having solved \(LP_{\emptyset}\), we will have \(n\) basic variables \(z_{i,j}\), \((i,j)\in I_{1}\), with value \(1\) and \(n-1\) basic variables \(z_{i,j}\), \((i,j)\in I_{2}\), with value \(0\). The edges \((i,j)\in I=I_{1}\cup I_{2}\) form a tree \(T^{*}=T^{*}_{\emptyset}\) in
\(K_{A,B}\). Let \(\pi\) be the permutation of \([n]\) associated with \(I_{1}\) i.e. \(I_{1}=\{(i,\pi(i))\}\), \(i=1,2,\ldots,n\). Given \(T^{*}\) we obtain another tree \(T=\phi(T^{*})\) on vertex set \([n]\) by contracting the \(n\) edges \((i,j)\in I_{1}\). \(T\) has an edge \((i_{1},i_{2})\) for every pair \((i_{1},\pi(i_{2}))\in I_{2}\). We orient this edge from \(i_{1}\) to \(i_{2}\) and let vertex \(i\) have out-degree \(d_{i}\) in \(T\) so that \(d_{1}+d_{2}+\cdots+d_{n}=n-1\).
**Lemma 10**: _Given \(T=\phi(T^{*})\), the distribution of \(I=E(T^{*})\) is_
\[\{(i,\rho\pi_{0}(i)):\;i\in[n]\}\cup\bigcup_{i\in[n]}\{(i,\rho(\xi(i,t))):\;t= 1,2,\ldots,d_{i}\} \tag{21}\]
_where \(\pi_{0}\) is a fixed permutation of \([n]\), \(\rho\) is a random permutation of \([n]\) and the edges \(\{(i,\pi_{0}(i)):\;i\in[n]\}\cup\bigcup_{i\in[n]}\{(i,\xi(i,t)):\;t=1,2,\ldots, d_{i}\}\) are those of some fixed tree \(T^{*}_{0}\) for which \(\phi(T^{*}_{0})=T\)._
**Proof.** Given \(T^{*}\) and a permutation \(\rho\) of \(B\) we obtain a new spanning tree \(\rho T^{*}\) of \(A\cup B\) by replacing each \((i,j)\in I\) by \((i,\rho(j))\). If \(T^{*}_{1},T^{*}_{2}\) are spanning trees of \(K_{A,B}\) containing perfect matchings defined by \(\pi_{1},\pi_{2}\) respectively then
\[\phi(T^{*}_{1})=\phi(T^{*}_{2})\Longleftrightarrow T^{*}_{2}=\rho T^{*}_{1} \tag{22}\]
where \(\rho=\pi_{2}\pi_{1}^{-1}\).
We will condition on \(\phi(T^{*})\) and consider the conditional distribution of the edges in \(I_{2}\). Now because replacing \(C(i,j)\) by \(C(i,\rho(j))\) for all \((i,j)\) does not change the distribution of \(C\), we have
\[{\bf Pr}(T^{*}=T^{*}_{1})={\bf Pr}(\rho T^{*}=\rho T^{*}_{1})={\bf Pr}(T^{*}= \rho T^{*}_{1}). \tag{23}\]
Let \(T^{*}_{1},T\) be fixed trees such that \(\phi(T^{*}_{1})=T\) and let \(T^{*}\) be the random optimal basic tree and \(\rho=\pi_{2}\pi_{1}^{-1}\) where \(\pi_{2}\) is a random permutation of \([n]\) and \(\pi_{1}\) is defined by \(T^{*}_{1}\). From (22) and (23) we have
\[{\bf Pr}(T^{*}=T^{*}_{1}\mid\phi(T^{*})=T)=\frac{{\bf Pr}(T^{*}=T_ {1})}{{\bf Pr}(\phi(T^{*})=T)}\\ =\frac{{\bf Pr}(T^{*}=\rho T^{*}_{1})}{{\bf Pr}(\phi(T^{*})=T)}={ \bf Pr}(T^{*}=\rho T^{*}_{1}\mid\phi(T^{*})=T).\]
The lemma follows. \(\Box\)
It will be convenient to condition on the number of cycles of length \(i\) in the optimal assignment. Let \(\Pi\) denote the set of permutations of \(A\) with \(k_{i}\) cycles of length \(i\), for \(i=2,3,\ldots,n\). Let \(\pi\) be any fixed permutation with the given cycle structure. (For example, if \(t_{1}=0\), \(t_{\sigma+1}=n\), and the multi-sets \(\{t_{j+1}-t_{j}:\;j\in[\sigma]\}\) and \(\{k_{i}\times i:\;i\in[n]\}\) coincide, then we may define \(\pi\) by: if \(x,y\in C_{j}\) and \(y=x+1\mod t_{j+1}-t_{j}\) then \(\pi(x)=y\).) Then given a bijection \(f:A\to A\) we define a permutation \(\pi_{f}\) on \(A\) by \(\pi_{f}=f^{-1}\pi f\). Each permutation \(\sigma\in\Pi\) appears precisely \(\prod_{i=2}^{n}i^{k_{i}}k_{i}!\) times as \(\pi_{f}\), this being the size of the centralizer of \(\pi\). Thus choosing a random bijection \(f\) chooses a random \(\pi_{f}\) from \(\Pi\).
A natural way to look at this is to think of having oriented cycles on the plane whose vertices are at points \(A_{1},A_{2},\ldots,A_{n}\) and then randomly labelling these points with \(A\). Then if \(P^{\prime}\) follows \(P\) on one of the cycles and \(P,P^{\prime}\) are labelled \(x,x^{\prime}\) by \(f\) then \(\pi_{f}(x)=x^{\prime}\).
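A sketch of the conjugation construction in code (my own; the paper conjugates as \(f^{-1}\pi f\), and the conjugate below has the same distribution):

```python
import random

def random_perm_with_cycle_type(k):
    """k: dict {cycle length i: number k_i of i-cycles}. Returns a uniformly
    random permutation with this cycle type, as a list pi with pi[x] = image of x."""
    n = sum(i * ki for i, ki in k.items())
    pi0, start = list(range(n)), 0
    for i, ki in sorted(k.items()):
        for _ in range(ki):                    # lay down one canonical i-cycle
            for t in range(i):
                pi0[start + t] = start + (t + 1) % i
            start += i
    f = list(range(n))
    random.shuffle(f)                          # a uniformly random bijection f
    pi = [0] * n
    for x in range(n):
        pi[f[x]] = f[pi0[x]]                   # pi is the conjugate of pi0 by f
    return pi
```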
We now look at the probability that the gap \(\pi_{C}=Z_{\text{ATSP}}^{(C)}-Z_{\text{AP}}^{(C)}\) is at most \(\frac{1}{n^{3/2}}\). To go from the optimal assignment to a tour, we will, for some \(2\leq k\leq n\) have to:
1. Delete \(k\) edges from the optimal cycle cover, deleting at least one edge from each cycle.
2. Order the paths \(P_{1},P_{2},\ldots,P_{k}\) produced.
3. Add \(k\) edges to make a tour.
This must be done in such a way that the increased cost is at most \(\frac{1}{n^{3/2}}\). Let us call this a \(k\)-_substitution_.
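In code, the net cost of a \(k\)-substitution is a one-line computation once the deleted edges and the path order are chosen (a sketch in my own notation):

```python
def k_substitution_cost(deleted, order, C):
    """deleted: the k removed edges (i, pi(i)), at least one per cycle;
    order: the paths P_1..P_k as pairs (start y_t, end z_t).
    Returns the change in cost produced by this patching."""
    k = len(order)
    removed = sum(C[i][j] for (i, j) in deleted)
    added = sum(C[order[t][1]][order[(t + 1) % k][0]] for t in range(k))
    return added - removed
```

A \(k\)-substitution in the sense above exists exactly when some choice of `deleted` and `order` gives a return value of at most \(1/n^{3/2}\).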
**Lemma 11**: \[\mathbf{Pr}(\exists k\text{-substitution})\leq e^{2k/n^{1/2}}\frac{(k-1)!}{(n )_{k}}\sum_{\begin{subarray}{c}S=\{i_{1},i_{2},\ldots,i_{k}\}\\ S\text{ covers all cycles}\end{subarray}}\prod_{t=1}^{k}d_{i_{t}}\] (24)
_The statement "covers all cycles" refers to having at least one \(i_{j}\) in each cycle of the permutation \(\pi\). The \(d_{i}\) are the degrees as in (21)._
**Proof.** Suppose that the path \(P_{t}\) joins \(y_{t}\) to \(z_{t}\) for \(t=1,2,\ldots,k\). We must add edges \((z_{t},y_{t+1})\) for \(t=1,2,\ldots,k\) (indices taken modulo \(k\), so that \(y_{k+1}=y_{1}\)) and for \(\pi_{C}\) to be less than \(\frac{1}{n^{3/2}}\) these edges will have to be either (i) basic (with value zero) or (ii) non-basic with reduced cost less than \(\frac{1}{n^{3/2}}\). Here basic/non-basic refers to the optimal solution to \(LP_{F}\) of Section 3.1.
Conditional on the event \(\mathcal{E}_{1}\) described prior to (14) we have
\[\mathbf{Pr}\left(\bar{C}(i,j)\leq\frac{1}{n^{3/2}}\bigg{|}(i,j)\text{ is non-basic}\right)\leq\frac{1}{n^{3/2}(1-|u_{i}|-|v_{j}|)}\leq\frac{2}{n^{3/2}}. \tag{25}\]
Now let us consider the probability that we can join \(P_{t}\) to \(P_{t+1}\), given that we have joined up \(P_{1},P_{2},\ldots,P_{t}\). We need to estimate the probability that \((u,v)=(z_{t},y_{t+1})\) is a basic edge since (25) deals with the possibility of a short non-basic edge. Having exposed the status of the edges \((z_{i},y_{i+1}),1\leq i\leq t\) we see from (21) that
\[\mathbf{Pr}((z_{t},y_{t+1})\text{ is a basic edge }|\text{ we have joined up }P_{1},P_{2},\ldots,P_{t})\leq\frac{d_{z_{t}}}{n-t+1}. \tag{26}\]
To have a positive probability of creating a tour, the previous edge exposures must not concern edges with tail \(z_{t}\) or with head equal to an out-neighbor of \(z_{t}\). There are \(d_{z_{t}}\) random choices of out-neighbor \(\xi\) of \(z_{t}\) and at this point \(\rho(x_{i})\) is random, subject only to \(t-1\) previous selections.
Putting (25) and (26) together we see that
\[{\bf Pr}(\mbox{we can join }P_{1},\ldots,P_{k})\leq\prod_{t=1}^{k}\left(\frac{2}{n^{3/2 }}+\frac{d_{z_{t}}}{n-t+1}\right)\leq e^{2k/n^{1/2}}\prod_{t=1}^{k}\frac{d_{z_{ t}}}{n-t+1}.\]
Thus,
\[\Pi_{k}={\bf Pr}(\exists k\mbox{-substitution})\leq e^{2k/n^{1/2}}\frac{(k-1)!}{(n)_{k}}\sum_{\begin{subarray}{c}S=\{i_{1},i_{2},\ldots,i_{k}\}\\ S\ covers\ all\ cycles\end{subarray}}\prod_{t=1}^{k}d_{i_{t}}.\]
\(\Box\)
Suppose now that there are \(a_{r}\) vertices for which \(d_{i}=r\), \(0\leq r\leq n\). We need to argue that \(a_{0}\geq\eta n\) w.h.p. for some small positive constant \(\eta\). Each leaf of \(T\) has out-degree zero and so we only need to show that w.h.p. \(T\) has at least \(\eta n\) leaves.
**Lemma 12**: _There exists a small positive constant \(\eta\) such that w.h.p. \(T\) has at least \(\eta n\) leaves._
**Proof.** Note that each \(T\) arises from exactly \(2^{n-1}\) distinct \(T^{*}\)'s. This is because we have two choices as to how to configure each edge that is not part of the matching. (An edge \((i,j)\) in \(T\) can in \(T^{*}\) be expanded to \((x_{i},y_{j})\) or to \((x_{j},y_{i})\).) Let \(b(T)=b(T^{*})\) denote the number of branching nodes (degree \(\geq 3\)) of \(T\) and \(T^{*}\). A tree \(T\) is \(\eta\)-bushy if \(b(T)\leq\eta n\). Bohman and Frieze used this concept in [5] and showed that the number of \(\eta\)-bushy trees is at most \(n!e^{\theta(\eta)n}\) where \(\theta(\eta)\to 0\) as \(\eta\to 0\). It follows that the number of \(\eta\)-bushy trees of \(K_{A,B}\) which have a perfect matching is at most \(e^{\theta(\eta)n}2^{n-1}n!\). Observe that the number of leaves in \(T\) is at least \(b(T)\). We show that, for a sufficiently small constant \(\eta\),
\[{\bf Pr}(T^{*}\mbox{ is $\eta$-bushy})=o(1). \tag{27}\]
This will prove the lemma. For any tree \(T^{*}\) with a perfect matching, we can put \(u_{1}=0\) and then solve the equations \(u_{i}+v_{j}=C(i,j)\) for \((x_{i},y_{j})\in T^{*}\) to obtain the associated dual variables. \(T^{*}\) is optimal if \(\bar{C}(i,j)=C(i,j)-u_{i}-v_{j}\geq 0\) for all \((x_{i},y_{j})\notin T^{*}\). Let \(Z_{T^{*}}=\sum_{i}u_{i}+\sum_{j}v_{j}\). Now w.h.p. the optimal tree \(T^{*}\) satisfies \(Z_{T^{*}}\in[1.6,1.7]\), because \(Z_{T^{*}}\) is the optimal assignment cost. We know both that the expectation of \(Z_{T^{*}}\) is in the stated range and that the actual value is concentrated about the expectation, see Talagrand [25]. Then if \({\cal E}\) denotes the event \(\{{\cal E}_{1}\) and \(Z_{T^{*}}\in[1.6,1.7]\}\), for any tree \(T^{*}\), over random matrices
\(C(i,j)\),
\[\begin{split}&\mathbf{Pr}(Z_{T^{*}}\in[1.6,1.7]\text{ and }\mathcal{E}_{1}\text{ and }\bar{C}(i,j)\geq 0,\ \forall\text{non-basic }(i,j))\\ \leq&\mathbf{Pr}(\bar{C}(i,j)\geq 0,\ \forall(i,j) \notin T^{*}\mid\mathcal{E})\times\mathbf{Pr}(Z_{T^{*}}\in[1.6,1.7])\\ \leq&\frac{1.7^{n}}{n!}\,\mathbf{E}\left(\prod_{(x_ {i},y_{j})\notin T}(1-(u_{i}+v_{j})^{+})\ \bigg{|}\ \mathcal{E}\right)\\ \leq&\frac{1.7^{n}}{n!}\,\mathbf{E}\left(\exp\left\{- \sum_{(x_{i},y_{j})\notin T}(u_{i}+v_{j})\right\}\ \bigg{|}\ \mathcal{E}\right)\\ \leq&\frac{1.7^{n}}{n!}\,\mathbf{E}\left(e^{-nZ_{T^ {*}}}\exp\left\{\sum_{(x_{i},y_{j})\in T}(u_{i}+v_{j})\right\}\bigg{|}\ \mathcal{E}\right)\\ \leq&\frac{1.7^{n}}{n!}e^{-1.6n+O(\gamma_{\epsilon}n )}.\end{split} \tag{28}\]
**Explanation for (28).** The factor \(\frac{1.7^{n}}{n!}\) bounds the probability that the sum of the lengths of the edges in the perfect matching of \(T\) is at most \(1.7\). The product term is the probability that each non-basic reduced cost is non-negative.
Thus
\[\begin{split}&\mathbf{Pr}(\exists\text{ an }\eta\text{-bushy tree }T^{*}:Z_{T^{*}}\in[1.6,1.7]\text{ and }(14)\text{ and }\bar{C}(i,j)\geq 0\ \forall(i,j)\notin I)\\ &\quad\leq n!2^{n}e^{\theta(\eta)n}\times\frac{1.7^{n}}{n!}e^{-1. 6n+O(\gamma_{\epsilon}n)}\\ &\quad=o(1),\end{split}\]
for \(\eta\) sufficiently small. This implies (27). \(\Box\)
### Small \(k\)
In this section we assume that \(k_{0}\leq k\leq k_{1}\) where \(k_{0}=\frac{1}{2}\log n\) and \(k_{1}=n^{2/3}\). The lower bound on \(k\) is justified because a \(k\)-substitution deletes at least one edge from each cycle and w.h.p. a random permutation has at least \(\frac{1}{2}\log n\) cycles. Then, from (24),
\[\sum_{k=k_{0}}^{k_{1}}\Pi_{k} \leq \sum_{k=k_{0}}^{k_{1}}e^{2k/n^{1/2}}\frac{(k-1)!}{(n)_{k}}\sum_{ k_{1}+\cdots+k_{n}=k}\prod_{r=1}^{n}\binom{a_{r}}{k_{r}}r^{k_{r}}\] \[\leq \sum_{k=k_{0}}^{k_{1}}e^{2k/n^{1/2}+2k^{2}/n}\frac{(k-1)!}{n^{k} }\sum_{k_{1}+\cdots+k_{n}=k}\prod_{r=1}^{n}\frac{(ra_{r})^{k_{r}}}{k_{r}!}\] \[\leq \sum_{k=k_{0}}^{k_{1}}e^{2k/n^{1/2}+2k^{2}/n}\frac{(k-1)!}{n^{k} }\frac{1}{k!}\left(\sum_{r=1}^{n}ra_{r}\right)^{k}\] \[\leq \sum_{k=k_{0}}^{k_{1}}e^{2k/n^{1/2}+2k^{2}/n}\frac{(1-\eta)^{k}}{ k}=o(1).\]
### Large \(k\)
In this section we assume that \(k>n^{2/3}\). Write
\[\Pi_{k}\leq e^{2k/n^{1/2}}\frac{1}{k\binom{n}{k}}\sum_{S=\{i_{1},i_{2},\ldots,i_{ k}\}}\prod_{t=1}^{k}d_{i_{t}}. \tag{30}\]
Suppose now that we have \(n\) bins, where bin \(i\) contains \(d_{i}\) distinguishable balls. The RHS of (30) (ignoring the term \(e^{2k/n^{1/2}}\)) is \(k^{-1}\) times the probability \(\tilde{P}_{k}\) that if we choose \(k\) of the balls at random, we never choose two balls from the same bin. (The sum is the number of allowed choices and \(\binom{n}{k}\) is the total number of choices.) Because at least \(\eta n\) of the bins are empty, the \(n-1\) balls lie in at most \((1-\eta)n\) bins, and so we can find at least \(\eta n/2\) disjoint pairs of balls \(\alpha_{i},\beta_{i}\) such that \(\alpha_{i}\) and \(\beta_{i}\) are in the same bin. Now choose the balls in two sets of size \(k/2\) each. The probability that fewer than \(\eta k/4\) of the \(\alpha_{i}\) are chosen is, by the Chernoff bound, at most \(e^{-\eta k^{2}/(32n)}\) and given that at least \(\eta k/4\) are chosen, the probability that a corresponding \(\beta_{i}\) is never chosen is at most \(\left(1-\frac{\eta k}{4n}\right)^{k/2}\leq e^{-\eta k^{2}/(8n)}\). Thus,
\[\sum_{k\geq k_{1}}\Pi_{k} \leq\sum_{k\geq k_{1}}e^{2k/n^{1/2}}k^{-1}(e^{-\eta k^{2}/(32n)}+e ^{-\eta k^{2}/(8n)})\] \[\leq\frac{1}{k_{1}}\sum_{k\geq k_{1}}e^{-\eta k^{2}/(40n)}\] \[\leq e^{-\eta^{2}\log^{2}n/50}.\]
This completes the proof of Lemma 2. \(\Box\)
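The balls-in-bins step in this proof is easy to check by simulation. A small sketch (my own; the degree profile is made up, with an \(\eta\)-fraction of empty bins) estimates the probability that \(k\) randomly chosen balls all come from distinct bins:

```python
import random

def all_bins_distinct(d, k, trials=20000):
    """Bin i holds d[i] distinguishable balls; choose k balls uniformly
    without replacement and test that no two share a bin."""
    balls = [i for i, di in enumerate(d) for _ in range(di)]
    hits = sum(len(set(random.sample(balls, k))) == k for _ in range(trials))
    return hits / trials

n = 300
d = [0] * (n // 5) + [1] * (3 * n // 5) + [2] * (n // 5)  # eta = 1/5 empty bins
for k in (10, 20, 40):
    print(k, all_bins_distinct(d, k))   # decays roughly like exp(-c k^2 / n)
```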
## 5 Summary and open questions
Theorem 1 answers the question as to whether or not the assignment problem is a good enough bound for branch and bound to run in expected polynomial time. It is not. One can strengthen this bound by replacing AP by the subtour elimination LP of Dantzig, Fulkerson and Johnson [7]. Perhaps this leads to a branch and bound algorithm that runs in polynomial time w.h.p.
Less is known probabilistically about the symmetric TSP. Frieze [10] proved that if the costs \(C(i,j)=C(j,i)\) are independent uniform \([0,1]\) then the asymptotic cost of the TSP and the cost \(2F\) of the related 2-factor relaxation are asymptotically the same. The probabilistic bounds on \(|TSP-2F|\) are inferior to those given in [15]. Still, it is conceivable that the 2-factor relaxation or the subtour elimination constraints are sufficient for branch and bound to run in polynomial time w.h.p. Frieze and Pegden [14] and Pegden and Severaki [24] have studied branch and bound in the context of random instances of the Euclidean TSP. They show that adding sub-tour elimination inequalities does not make branch and bound run in polynomial expected time. Indeed branch and bound runs in exponential time w.h.p. The latter paper [24] even allows the addition of comb inequalities. |
2306.14450 | Eigenvalues of regular symmetric Hall-plates | I discuss uniform, isotropic, plane, singly connected, electrically linear,
regular symmetric Hall-plates with an arbitrary number of N peripheral contacts
exposed to a uniform perpendicular magnetic field of arbitrary strength. In
practice, the regular symmetry is the most common one. If the Hall-plates are
mapped conformally to the unit disk, regular symmetry means that all contacts
are equally large and all contacts spacings are equally large, yet the contacts
spacings may have a different size than the contacts. Such Hall-plates do not
change when they are rotated by 360{\deg}/N. Their indefinite conductance
matrices are circulant matrices, whose complex eigenvalues are computable in
closed form. These eigenvalues are used to discuss the Hall-output voltage, the
maximum noise-efficiency, and Van-der-Pauw's method for measuring sheet
resistances. For practical use, I report simple approximations for Hall-plates
with four contacts and 90{\deg} symmetry with popular shapes like disks,
rectangles, octagons, squares, and Greek crosses with and without rounded
corners. | Udo Ausserlechner | 2023-06-26T06:43:30Z | http://arxiv.org/abs/2306.14450v1 | # Eigenvalues of regular symmetric Hall-plates
###### Abstract
I discuss uniform, isotropic, plane, singly connected, electrically linear, regular symmetric Hall-plates with an arbitrary number of \(N\) peripheral contacts exposed to a uniform perpendicular magnetic field of arbitrary strength. In practice, the regular symmetry is the most common one. If the Hall-plates are mapped conformally to the unit disk, regular symmetry means that all contacts are equally large and all contacts spacings are equally large, yet the contacts spacings may have a different size than the contacts. Such Hall-plates do not change when they are rotated by \(360^{o}/N\). Their indefinite conductance matrices are circulant matrices, whose complex eigenvalues are computable in closed form. These eigenvalues are used to discuss the Hall-output voltage, the maximum noise-efficiency, and Van-der-Pauw's method for measuring sheet resistances. For practical use, I report simple approximations for Hall-plates with four contacts and \(90^{o}\) symmetry with popular shapes like disks, rectangles, octagons, squares, and Greek crosses with and without rounded corners.
+
Footnote †: preprint: AIP/123-QED
## I Introduction
Hall-plates are thin flat pieces of (semi-)conductors with a large mobility of the majority charge carriers. If a magnetic field acts on the carriers, the Lorentz force diverts the current streamlines, and this builds up a Hall-electric field \(\mathbf{E}_{H}\). The exact solution of the electric field problem in closed analytical form is not trivial, all the more if the Hall-plates have a non-symmetric geometry with extended contacts. Even if the potential in response to a single input current is known everywhere in the Hall-plate, it is still much work to compute the conductance matrix, which relates the voltages at the contacts of the plate to the currents into all contacts. Luckily, all shapes which are equivalent by conformal transformation have the same conductance matrix [1]. Therefore, it is sufficient to study simple geometries, like circular disk Hall-plates, because from Riemann's mapping theorem [2] we know that there always exists a conformal transformation that maps the disk to any other singly connected domain. Recently, Homentcovschi and Murray found a general method to compute the conductance matrix of circular disk Hall-plates without conformal transformation [3]. For singly-connected 2D-domains with \(N\) extended peripheral contacts, they defined two matrices, whose entries are numerical integrals that depend on the locations of the vertices of the contacts and on the Hall-angle. One of the matrices needs to be inverted and multiplied by the other one to get the resistance matrix normalized by the sheet resistance. The method is very general, but it does not readily give closed formulae for the resistance matrix of Hall-plates. Yet, for _symmetric_ Hall-plates with \(N\) equal contacts the equations can be simplified and one gets closed formulae. This has been done for _strictly regular symmetric_ Hall-plates [4]--they have the highest possible degree of symmetry (see Fig. 1). Instead of computing the resistance matrix directly, it turned out to be simpler to compute the eigenvalues of the indefinite conductance matrix. In this work I apply the same techniques from Ref. [4] to less symmetric Hall-plates -- _regular symmetric_ instead of strictly regular symmetric Hall-plates.
This paper starts with the definitions of the indefinite conductance matrix, Ohm's law, and the stream function. The regular symmetry leads to a symmetry in the indefinite conductance matrix--it is a circulant matrix, whose eigenvalues and eigenvectors are computed in closed form in Section III. In Section IV I apply these findings to the theory of Hometcovschi and Murray, to get a closed formula for the complex eigenvalues. Section V discusses some basic properties of these eigenvalues. Section VI shows how to compute the resistance matrix from the eigenvalues. Section VII computes the output voltage of regular symmetric Hall-plates with four contacts. It also gives a very good approximation for the Hall-geometry factor of this most common type of Hall-plate and compares it to formulae given in the literature. Section VIII compares the maximum noise efficiency of regular symmetric versus strictly regular symmetric Hall-plates, and Section IX explains how to generalize van-der-Pauw's method for regular symmetric Hall-plates with four contacts with or without applied magnetic field. The appendix specifies how Hall-plates with popular shapes (Greek crosses, octagons, and rectangles) are equivalent to disk shaped Hall-plates.
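Since the indefinite conductance matrix of a regular symmetric Hall-plate is circulant, its eigenvalues are simply the discrete Fourier transform of its first row, with the Fourier modes as eigenvectors. A minimal numpy illustration of this general fact (the numbers below are hypothetical, not values from the paper):

```python
import numpy as np

def circulant_eigenvalues(c):
    """Eigenvalues lambda_m = sum_k c_k exp(-2*pi*1j*m*k/N) of the circulant
    matrix whose first row is c."""
    return np.fft.fft(c)

c = np.array([2.0, -1.0, 0.0, -1.0])             # hypothetical first row, N = 4
G = np.array([np.roll(c, s) for s in range(4)])  # the full circulant matrix
assert np.allclose(np.sort_complex(np.linalg.eigvals(G)),
                   np.sort_complex(circulant_eigenvalues(c)))
```

At non-zero magnetic field the matrix is circulant but not symmetric, and the eigenvalues of a real circulant matrix satisfy \(\lambda_{N-m}=\overline{\lambda_{m}}\), which is why the closed-form eigenvalues discussed in this paper are complex.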
## II Definitions
In electrically linear Hall-plates the currents into the terminals are linear combinations of the potentials at the terminals. Suppose that the Hall-plate has \(N\) terminals. Let us group all \(N\) currents to a vector \(\mathbf{I}\) and all \(N\) voltages to a vector \(\mathbf{V}\). Then we can express the linear combination as a matrix multiplication \(\mathbf{I}={}^{i}\mathbf{G}\,\mathbf{V}\), with the _indefinite_ conductance matrix \({}^{i}\mathbf{G}\). Hereby I use the historic nomenclature in circuit theory [5; 6; 7; 8], where 'indefinite' denotes _undefined_ reference potential (ground node). In a mathematical sense the indefinite conductance matrix is not indefinite, but positive semi-definite. Both indefinite and semi-definite matrices have zero determinants, and therefore \({}^{i}\mathbf{G}\) cannot be inverted. If we ground the \(\ell\)-th terminal, we write \(\mathbf{I}=\mathbf{G}\mathbf{V}\), where we delete the \(\ell\)-th current and voltage in \(\mathbf{I},\mathbf{V}\), respectively, and we delete the \(\ell\)-th row and column in \({}^{i}\mathbf{G}\) to get \(\mathbf{G}\). The definite conductance matrix of any passive (= dissipative) system is positive definite, its determinant is positive, and an inverse exists. |
2310.09481 | Coupled metamaterial-phonon terahertz range polaritons in a topological
insulator | We report terahertz time-domain spectroscopy (TDTS) experiments demonstrating
strong light-matter coupling in a terahertz (THz) LC-metamaterial in which the
phonon resonance of a topological insulator (TI) thin film is coupled to the
photonic modes of an array of electronic split-ring resonators. As we tune the
metamaterial resonance frequency through the frequency of the low frequency
$\alpha$ mode of (Bi$_x$Sb$_{1-x}$)$_2$Te$_3$ (BST), we observe strong mixing
and level repulsion between phonon and metamaterial resonance. This hybrid
resonance is a phonon polariton. We observe a normalized coupling strength,
$\eta$ = $\Omega_R$/$\omega_c$ $\approx$ 0.09, using the measured vacuum Rabi
frequency and cavity resonance. Our results demonstrate that one can tune the
mechanical properties of materials by changing their electromagnetic
environment and therefore modify their magnetic and topological degrees of
freedom via coupling to the lattice in this fashion. | Sirak M. Mekonen, Deepti Jain, Seongshik Oh, N. P. Armitage | 2023-10-14T03:52:54Z | http://arxiv.org/abs/2310.09481v3 | # Coupled metamaterial-phonon terahertz range polaritons in a topological insulator
###### Abstract
**We report terahertz time-domain spectroscopy (TDTS) experiments demonstrating strong light-matter coupling in a terahertz (THz) LC-metamaterial in which the phonon resonance of a topological insulator (TI) thin film is coupled to the photonic modes of an array of electronic split-ring resonators. As we tune the metamaterial resonance frequency through the frequency of the low frequency \(\alpha\) mode of (Bi\({}_{x}\)Sb\({}_{1-x}\))\({}_{2}\)Te\({}_{3}\) (BST), we observe strong mixing and level repulsion between phonon and metamaterial resonance. This hybrid resonance is a phonon polariton. We observe a normalized coupling strength, \(\eta=\Omega_{R}/\omega_{c}\approx\) 0.09, using the measured vacuum Rabi frequency and cavity resonance. Our results demonstrate that one can tune the mechanical properties of materials by changing their electromagnetic environment and therefore modify their magnetic and topological degrees of freedom via coupling to the lattice in this fashion.**
Metamaterials (MMs) are artificial composite materials that offer exceptional control of the electromagnetic properties due to the capability to engineer their electric and magnetic resonances by controlling the geometry and size of the individual subwavelength constituents. They offer the possibility to achieve strong coupling between highly confined electromagnetic fields and localized or propagating quasiparticles such as surface plasmon polaritons in metals and superconductors (Gramotnev and Bozhevolnyi, 2010), phonon polaritons in polar dielectrics (Kim et al., 2020; Shelton et al., 2011), and exciton polaritons in organic molecules and transition metal dichalcogenides (As'ham et al., 2022; Dintinger et al., 2005; Ramezani et al., 2017). Recently, MMs have been used to control the electron-phonon interaction of topological insulators via their surface states (In et al., 2018). Topological insulators (TIs), a class of quantum materials with robust metallic surface states protected by the topological properties of the bulk wavefunctions (Ando, 2013; Autore et al., 2017; Hasan and Kane, 2010; Wu et al., 2013), have attracted growing interest due both to their interesting fundamental physics and to potential applications in terahertz (THz) detectors (Zhang et al., 2010) and spintronic devices (Chen et al., 2009). These applications can potentially be realized through the polariton interaction, which arises from strong light-matter interactions between a confined electromagnetic field and a cavity resonance. A strong light-matter interaction between lattice vibrations and a confined electromagnetic field can reach the strong coupling regime where coherent exchange of energy between light and matter becomes reversible. In this regime, coupled light-matter polaritons form hybrid states where they can coherently exchange energy at the characteristic rate of the vacuum Rabi frequency \(\Omega_{R}\), which is dominant with respect to other loss mechanisms in the system (Benz et al., 2015; Dovzhenko et al., 2018). A polariton system based on novel functional materials could offer an efficient quantum level system with tunable sources and detectors, optical filters and qubits operating in the far-infrared frequency range (Bakker et al., 1994; Jin et al., 2019; Kojima et al., 2003; Ohtani et al., 2019; Tanabe et al., 2003). They may also afford the possibility of tuning the mechanical properties of materials (and therefore their electronic or magnetic properties through phonon coupling) by changing their electromagnetic environment. Previous investigations have shown phonon-polariton coupled systems with metamaterials in the mid-infrared range (Pons-Valencia et al., 2019; Shelton et al., 2011) as well as THz range surface-plasmon polaritons (Liang et al., 2015; Liu et al., 2015; Maier et al., 2006).
Fig. 1(a) is a schematic of the unit cell of our BST metasurface, which is composed of an array of SRRs deposited on a TI film. The gap in the SRR serves as a capacitor whereas the ring serves as an inductor, giving an _LC_ resonance. Generally, the resonance frequency of SRRs can be given as \(f_{0}\approx 1/(2\pi\sqrt{L_{c}C})\), where the inductance \(L_{c}\) and the capacitance \(C\) are determined by the SRR dimensions and the effective refractive index of the environment. At the _LC_ resonance, the incident electric field induces a large accumulation of surface charges at the ends of the metal strips, resulting in a strong electric field confinement in the capacitive gaps (Chen et al., 2006; Kim et al., 2020, 2018; Pendry et al., 1999; Zhang et al., 2021). The resonance frequency of an SRR generally scales inversely with its dimension. Fig. 1(c) shows an image of one of the fabricated composite BST-SRR arrays, which has an SRR resonance frequency of 1.5 THz. We used a spin-coated layer of Poly(methyl methacrylate) (PMMA) as a spacer to achieve the desired coupling strength of the resonators to the thin film.
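As a quick numerical illustration of the \(f_{0}\approx 1/(2\pi\sqrt{L_{c}C})\) scaling (the inductance and capacitance values below are invented for the example, not extracted from the devices):

```python
import math

def srr_resonance(L_henry, C_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# halving both L and C, roughly what shrinking the ring does, doubles f0
print(srr_resonance(2e-12, 5.0e-15) / 1e12, "THz")   # ~1.6 THz
print(srr_resonance(1e-12, 2.5e-15) / 1e12, "THz")   # ~3.2 THz
```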
In Fig. 2(a), we show a finite element method (FEM) simulation of transmission spectra for SRRs on an Al\({}_{2}\)O\({}_{3}\) substrate. By tuning the lateral dimension \(l\) of the SRR, we expect to tune their resonant frequencies from 1.1 to 1.7 THz (4.55 to 7.03 meV). As the dimension of the SRRs decreases, the absorption exhibits a blueshift. Thus, it is possible to match the uncoupled resonant frequency of an SRR to a resonance of the material system. The 5K transmission spectrum of a bare BST film (with no SRR) is shown in Fig. 2(b). The absorption peak at \(\approx 1.5\) THz is the transverse optical (TO) \(\alpha\) phonon mode, which in the binary compounds Bi\({}_{2}\)Se\({}_{3}\), Bi\({}_{2}\)Te\({}_{3}\), Sb\({}_{2}\)Te\({}_{3}\) and Sb\({}_{2}\)Se\({}_{3}\) is attributed to an \(E_{u}^{1}\) mode corresponding to a sliding motion of atomic planes past each other (Richter and Becker, 1977). In the non-stoichiometric compounds, phonons in this spectral range were found to extrapolate smoothly from Bi\({}_{2}\)Te\({}_{3}\) to Sb\({}_{2}\)Te\({}_{3}\) and from Bi\({}_{2}\)Se\({}_{3}\) to In\({}_{2}\)Te\({}_{3}\).
In order to adequately sweep the SRR frequency through \(\omega_{Ph}\), we fabricated 7 different SRRs on 20 nm thick BST films (See Supplementary Material). That we can predict the frequencies of the SRRs uncoupled to phonons but deposited on BST is evidenced by the fact that at high temperatures where the phonons are extremely damped, our simulations predict the SRR frequencies accurately (as shown in the Supplementary Material). Going forward, we label the different SRRs in terms of these predicted SRR "bare frequencies".
At low temperatures the behavior is very different. Here the phonon resonance is strong and when the SRR frequency is tuned to it, the effects of mixing and level repulsion are prominent. In Fig. 3, we show the transmission data for the BST-SRR hybrid systems at 5K for \(\omega_{SRR}\) ranging from 1.1 (4.55) to 1.7 (7.03) THz (meV).
Figure 1: **Design of multiscale topological insulator metasurfaces.** a) Schematic of the unit cell showing thin film interface between metallic SRR and Al\({}_{2}\)O\({}_{3}\) substrate with the corresponding electromagnetic excitation configuration. b) Top-view schematic of the unit cell with the relevant geometrical dimensions: \(p_{x}=p_{y}=44\ \mu\)m, \(l=34\ \mu\)m, \(w=3\)\(\mu\)m \(g=1.5\ \mu\)m, and \(t=100\) nm. c) An optical microscope image (20x) of a fabricated BST-SRR array.
Figure 3: **TDTS data on BST-SRR at 5K** a) TDTS transmission spectrum of the seven SRR metamaterial arrays deposited on the BST films.
Figure 2: **Numerical simulations of SRRs and TDTS data on BST at 5K.** a) Simulated THz transmission spectra of SRRs at different resonance frequencies for different \(l\). b) TDTS transmission spectrum of the phonon resonance of BST.
One can see two notable transmission dips for all samples, indicating two resonances. When their frequencies are far from each other, we can assign each a clear local character. Judging from the data in Fig. 2, the more prominent feature has largely SRR character and a higher Q-factor. We note though that as the SRR resonance is swept across \(\omega_{Ph}\), the two peaks always maintain a separation and their intensities become similar. As the resonances are tuned through each other, the lower peak gets further damped and the upper peak sharpens, indicating that the local character of the excitations changes as they are tuned through each other.
We fit the data of Fig. 3 to a double-Lorentzian model to extract the eigenfrequencies \(\omega_{-}\) and \(\omega_{+}\) and damping rates \(\Gamma_{\pm}\) for all SRRs. Representative fits to these spectra can be found in the supplementary material. Our BST-SRR hybrid system can be considered as two coupled oscillators, one of which has a fixed frequency (BST) while its electromagnetic environment is tuned by the SRR frequency. When both oscillators reach similar frequencies, they form a coupled system and an anticrossing phenomenon is observed. This results in a periodic transfer of energy between the phonon and SRR through vacuum Rabi oscillations, whose rate is proportional to the splitting at the anticrossing point (Pal et al., 2015).
In Fig. 4(a), we plot the measured eigenfrequencies of the BST-SRR hybrid systems at 5K versus the uncoupled resonance frequencies obtained at 297K. The experimentally obtained peak positions \(\omega_{+}\) and \(\omega_{-}\) are shown as circles. One sees a classic signature of level repulsion and mixing of the two excitation branches. As the uncoupled resonances approach each other, their distinct local characters are lost and new coupled excitations are formed that are symmetric and anti-symmetric combinations of the bare excitations. Our observation is evidence for the formation of a phonon-polariton hybrid from the coupling of the SRR resonance and the \(\alpha\) phonon mode.
The observed level repulsion behavior can be understood classically and be described using a coupled oscillators model (Novotny, 2010):
\[\omega_{\pm}^{2}=\frac{1}{2}\left[\omega_{SRR}^{2}+\omega_{Ph}^{2}\pm\sqrt{( \omega_{SRR}^{2}-\omega_{Ph}^{2})^{2}+\Omega_{R}^{2}\omega_{SRR}\omega_{Ph}}\right] \tag{1}\]
By applying Eq. (1), we fit the observed level repulsion as indicated in Fig. 4(a) to obtain the coupling parameters. The coupling strength \(\Omega_{R}\) when \(\omega_{SRR}\approx\omega_{Ph}\) is found to be 0.27 THz (1.12 meV). The observed splitting is a significant fraction of the \(\alpha\) phonon mode resonance, which indicates a strong light-matter interaction at the avoided crossing. The normalized coupling strength \(\eta=\frac{\Omega_{R}}{\omega_{c}}\), the ratio between the Rabi frequency and the BST-SRR resonance frequency \(\omega_{c}\) at the crossing, is found to be \(\eta\approx 0.09\).
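Evaluating Eq. (1) with the fitted parameters reproduces the two branches of Fig. 4(a). A short sketch (the sweep grid is mine; \(\omega_{Ph}=1.5\) THz and \(\Omega_{R}=0.27\) THz are the values quoted above):

```python
import numpy as np

def branches(w_srr, w_ph=1.5, omega_R=0.27):
    s = np.sqrt((w_srr**2 - w_ph**2) ** 2 + omega_R**2 * w_srr * w_ph)
    return (np.sqrt(0.5 * (w_srr**2 + w_ph**2 - s)),   # omega_-
            np.sqrt(0.5 * (w_srr**2 + w_ph**2 + s)))   # omega_+

w = np.linspace(1.1, 1.7, 7)       # bare SRR frequencies in THz
lo, hi = branches(w)
print(np.round(hi - lo, 3))        # splitting is smallest near the 1.5 THz crossing
```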
It is also interesting to note the behavior of the peak widths as the bare SRR frequency is swept. Fig. 4(b) shows the damping rates as a function of \(\omega_{SRR}\). One can see that when \(\omega_{SRR}\) is small, \(\omega_{-}\) has a lower damping than \(\omega_{+}\), showing its principal SRR character. Near the crossing, the lifetimes are equal, showing the mixed character. For large SRR frequency, \(\omega_{+}\) has the smallest damping, showing that it now has largely SRR character (and \(\omega_{-}\) has largely phonon character).
We have demonstrated the ability to achieve a strong light-matter coupling between the \(\alpha\) phonon mode of (Bi\({}_{x}\)Sb\({}_{1-x}\))\({}_{2}\)Te\({}_{3}\) and cavity resonances of planar THz range metamaterials made using standard photolithography techniques. We have given spectroscopic evidence of strong coupling with a normalized coupling strength of \(\eta\approx 0.09\). Consequently, we have observed the formation of a THz phonon-polariton resonance emerging from the integration of metamaterials with topological insulators. Our results can be potentially beneficial for TI-based electronics and plasmonic applications. By varying the metamaterial resonance, we have demonstrated the ability to manipulate the mechanical properties of a material by tuning its electromagnetic environment. Via their coupling to phonons this may be used to control magnetic and topological degrees of freedom.
**Acknowledgements:**
Work at JHU was supported by NSF DMR-1905519 and an AGEP supplement. Work at Rutgers was supported by ARO-W911NF2010108 and MURI
Figure 4: **Dispersion and inverse lifetime of the coupled metamaterial-phonon system.** a) Energy levels of the hybridized phonon modes of BST and the LC mode of the SRRs. b) Inverse lifetimes of the resonances.
W911NF2020166. The work reported here was partially carried out in the Nanofabrication Facility at the University of Delaware (UDNF). We would like to thank A. Jackson, K. Katsumi, and L.Y. Shi for helpful discussions.
**Author Contributions:**
SMM performed the simulation, fabrication and TDTS measurements. DJ grew the thin films. SO and NPA supervised the project. SMM and NPA wrote the manuscript with input from the other authors.
The authors declare no competing interests.
**Data Availability Statement:**
Source data are available for this paper. All other data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
|
2301.10691 | A weakly universal weighted cellular automaton in the heptagrid with 6
states | In this paper we prove that there is a weakly universal weighted cellular
automaton in the heptagrid, the tessellation {7,3} of the hyperbolic plane,
with 6 states. The present paper improves the result deposited on
arXiv:2301.10691v1 and arXiv:2301.10691v2. In the deposited papers, the
result is proved with 7 states. In the present replacement, the number of states
is reduced to 6. Such a reduction is not trivial and requires substantial
changes in the implementation. The maximal weight is now 34, a very strong
reduction compared with the best result with 7 states. Also, the table has 137 entries,
significantly less than the 160 entries of the paper with 7 states. The
reduction is obtained by a new implementation of the tracks which play a key
role as far as without tracks there is no computational universality result. | Maurice Margenstern | 2023-01-25T16:52:55Z | http://arxiv.org/abs/2301.10691v4 | # A weakly universal weighted cellular automaton in the heptagrid with 6 states
###### Abstract
In this paper, we prove that there is a weakly universal weighted cellular automaton in the heptagrid, the tessellation \(\{7,3\}\) of the hyperbolic plane, with 6 states. The present paper improves a previous result with 7 states deposited on arXiv:2301.10691v1, arXiv:2301.10691v2 and arXiv:2301.10691v3.
## 1 Introduction
In the present paper, see also [6], the author considers weighted cellular automata in the heptagrid, the tessellation \(\{7,3\}\) of the hyperbolic plane. He proves a theorem about weak universality in that context, following a model used by the same author in many papers about cellular automata in hyperbolic spaces. By _weakly universal_, it is meant that the automaton is able to simulate a universal device starting from an infinite initial configuration. However, the initial configuration is not arbitrary: it is periodic outside a large enough circle, as in the previously mentioned papers; in fact, it is periodic outside such a circle in two different directions, as the simulated device is a two-register machine. By a result of Minsky, [8], such a machine is enough to simulate any Turing machine. In the heptagrid, the tessellation is based on a regular convex heptagon with the angle \(\dfrac{2\pi}{3}\) between consecutive sides. The other tiles are obtained as copies of that heptagon by reflection in its sides and, recursively, by reflections of the images in their sides. The heptagrid is defined and explained in Subsection 1.1. In Subsection 1.2, we indicate what weighted cellular automata are.
### The heptagrid
As already mentioned, the heptagrid is a tessellation of the hyperbolic plane whose signature is defined as \(\{7,3\}\). In that signature, 7 is the number of sides of a tile and 3 is the number of edges which meet at a vertex. It also means that 3 is
the number of tiles around a vertex. We call that tessellation the **heptagrid**. The signature also means that the tiles are always copies of a **heptagon**, a regular convex polygon of the hyperbolic plane with seven sides and with the angle \(\dfrac{2\pi}{3}\) between consecutive sides, as already said.
The left-hand side part of Figure 1 gives us a representation of the heptagrid. A **path joining** \(A\) to \(B\), where \(A\) and \(B\) are two tiles of the heptagrid, is a sequence \(\{T_{i}\}_{i\in\{0..n\}}\) of tiles such that \(T_{0}=A\), \(T_{n}=B\), \(T_{i}\) and \(T_{i+1}\) share a common side for \(0\leq i<n\), but \(T_{i}\) and \(T_{j}\) are not adjacent if \(|i-j|>1\). We say that \(n\) is the length of that path joining \(A\) to \(B\). We call **distance** from \(A\) to \(B\), denoted by \(\operatorname{dist}(A,B)\), the shortest length of the paths joining \(A\) to \(B\). Clearly, \(\operatorname{dist}(A,B)=0\) if and only if \(A=B\); clearly too, that distance satisfies the triangle inequality. A **circle** of radius \(r\) around a tile \(T\) of the heptagrid is the set of tiles of the heptagrid whose distance from \(T\) is \(r\). A **disc** of radius \(r\) around \(T\) is the union of the circles of radius \(\rho\) around \(T\) with \(\rho\leq r\). Figure 1 also illustrates the main features which are used to navigate in the tiling. From the point \(M\) of the figure, we draw two rays \(u\) and \(v\) which pass through the mid-points of the sides of the tiles they cross, a characteristic property. The tiles whose centre lies within the angle defined by the rays \(u\) and \(v\) constitute, by definition, a **sector**. The tile of the sector which is closest to \(M\) is called its **head**. The figure also illustrates the tree structure which allows us to navigate in the tiling. As illustrated by the right-hand side part of Figure 1, the heptagrid is the union of a central tile \(\tau\) and seven sectors whose heads are the neighbours of \(\tau\).
**Figure 1**: _Representation of the heptagrid. To left, the rays \(u\) and \(v\) define a sector; inside that sector, the tree structure. To right, the decomposition of the heptagrid. Each sector is numbered in its_ **head**_. Accordingly, \(\mu\) in the left-hand side picture is the head of sector \(4\), defined by \(u\) and \(v\); \(\mu\)'s coordinate is \((4,1)\)._
Indeed, if we denote the colours blue, orange, yellow and green by the letters **B**, **O**, **Y** and **G**, respectively, the tiling in a sector is defined by the rules:
\[\begin{array}{ccccc}\mathbf{B}&\rightarrow&\mathbf{BO}\\ \mathbf{O}&\rightarrow&\mathbf{BYO}\\ \mathbf{Y}&\rightarrow&\mathbf{BYG}\\ \mathbf{G}&\rightarrow&\mathbf{BYG}\end{array} \tag{1}\]
In a rule of (1), we say that the tiles on the right-hand side of the arrow are **produced** by the rule from the tile on the left-hand side. The tree \(\mathcal{T}\) generated by those rules, applied in the sector from the tile \(\tau\) closest to \(M\), is called a **Fibonacci tree**, as there are \(f_{2n+1}\) tiles at distance \(n\) from \(\tau\), where \(f_{n}\) is the sequence defined by
\[f_{0}=f_{1}=1\text{ and }f_{n+2}=f_{n+1}+f_{n}\text{ for }n\in\mathbb{N} \tag{2}\]
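To make the rule system concrete, here is a minimal Python sketch, not part of the paper, that expands the substitution rules (1) level by level and checks that level \(n\) of the tree contains \(f_{2n+1}\) tiles; the function and variable names are ours.

```python
# Substitution rules (1): each colour produces the colours of its sons.
RULES = {"B": "BO", "O": "BYO", "Y": "BYG", "G": "BYG"}

def fib(n):
    """Return f_n with f_0 = f_1 = 1, as in sequence (2)."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

level = "G"  # the head of a sector is a G-tile (node 1 in Table 1)
for n in range(10):
    assert len(level) == fib(2 * n + 1), (n, len(level))
    level = "".join(RULES[c] for c in level)   # produce the next level
print("levels 0..9 match f_{2n+1}")
```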
The navigation tool in a sector consists in numbering the tiles: the tile \(\tau\) of the sector which is closest to \(M\) receives number 1 and the others are numbered one by one from a level to the next one, a level of the sector being the set of tiles at the same distance from \(\tau\), and, on a level, from the leftmost tile to the rightmost one. That numbering possesses interesting properties; we refer the reader to [2, 4].
It is not difficult to see that a tile \(A\) of \(\mathcal{T}\) is at distance \(n\) from \(\tau\) if and only if there is a sequence \(\tau_{i}\), with \(i\in[0..n]\), such that \(\tau_{0}=\tau\), \(\tau_{n}=A\) and, for \(0\leq i<n\), \(\tau_{i+1}\) is a son of \(\tau_{i}\), which means that \(\tau_{i+1}\) is produced from \(\tau_{i}\) by one of the rules of (1). Such a sequence is called the **path from \(\tau\) to \(A\) in the sector**. In [4, 2], there are algorithms which compute the path from \(\tau\) to \(A\) from the number attached to \(A\) in the sector. As can be seen in the right-hand side picture of Figure 1, the heptagrid can be split into a central tile \(O\) and seven disjoint sectors whose union is the complement of \(O\) in the heptagrid. So, if a tile \(A\) of the heptagrid is distinct from \(O\), it can be given two numbers \((s,n)\), where \(s\in[1..7]\) tells in which sector \(A\) lies and \(n\) is the number of the tile in its sector. Tile \(O\) is given number 0. The tile \(\mu\) in the left-hand side part of Figure 1 is given the coordinate (4,1).
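The algorithms of [2, 4] are linear in the size of the representation of the number; purely as an illustration, and reusing `RULES` from the sketch above, the following brute-force version (our own naming, not the paper's algorithm) recovers the path from \(\tau\) to \(A\) by generating the tree level by level while keeping parent links.

```python
def path_to(number):
    """Numbers of the tiles on the branch from the head (tile 1) to `number`."""
    colour = {1: "G"}          # the head of a sector is a G-tile
    parent = {1: None}
    level, nxt = [1], 2
    while nxt <= number:       # grow the tree level by level, numbering the
        new_level = []         # sons from left to right, as in Table 1
        for node in level:
            for c in RULES[colour[node]]:
                colour[nxt], parent[nxt] = c, node
                new_level.append(nxt)
                nxt += 1
        level = new_level
    path, node = [], number
    while node is not None:    # climb back to the root along parent links
        path.append(node)
        node = parent[node]
    return path[::-1]

print(path_to(34))   # -> [1, 2, 5, 13, 34], consistent with Table 1
```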
The cellular automaton we construct in Section 2 evolves in the heptagrid. We shall use figures similar to Figure 1 to illustrate key configurations while more complex ones will be illustrated by diagrams.
The grids of Figure 1 can be reused without problem, as there is no central tile in the heptagrid. The representation we consider is based on the Poincaré disc model of the hyperbolic plane. There is no central point in the hyperbolic plane. We can see the disc model as a window over the hyperbolic plane, as if we were flying over that plane in an abstract spacecraft. The centre of the circle is the point on which our attention is focused, while the circle itself is our horizon. Accordingly, the central tile is the tile which is central with respect to the area under our consideration. It is also the reason to number the central tile by 0.
In a sector, the tile \(\mu\) which is closest to \(M\) is numbered by 1; it is the head of the sector, which is sector 4 in Figure 1, to left. The other tiles of that sector are numbered as already mentioned. Table 1 indicates the correspondence between the numbers of the tiles and their relations in the tree. A tile also has a **level**, which is its distance in the sector from the head. In most representations, we deal with levels 1, 2 and 3 and sometimes with level 4 too. Note that in Figure 1, level 4 is hardly visible.
Tiles which share a side are called **neighbours** of each other. Sometimes, we say that they can see each other. Accordingly, a path from one tile to another tile \(A\) of the sector consists of a sequence of tiles of the sector, each of which can see the previous one. When the path goes from \(\mu\) to \(A\) and each tile of the path is a son of the previous one, we say that the path belongs to the **branch** of the tree which passes through \(A\); indeed, for each tile \(A\) of the sector, there is a single branch of the tree passing through \(A\). Tile \(\mu\) is the head of the sector it defines; we also call it the **root** of the tree which spans that sector.
### Weighted cellular automata
Cellular automata are a model of massive parallelism. The base of a cellular automaton is a cell. The set of cells is supposed to be homogeneous in several aspects: the neighbours of each cell constitute a subset which has the same structure for all cells; the cell changes its state at each tip of a discrete clock according
\begin{table}
\begin{tabular}{rrrrrr|rrrrrr}
level & node & c. & \(\ell\)-s. & \(m\)-s. & \(r\)-s. & level & node & c. & \(\ell\)-s. & \(m\)-s. & \(r\)-s. \\
0 & 1 & G & 2 & 3 & 4 & 3 & 13 & B & 34 & 35 & \\
1 & 2 & B & 5 & 6 & & & 14 & O & 36 & 37 & 38 \\
& 3 & Y & 7 & 8 & 9 & & 15 & B & 39 & 40 & \\
& 4 & G & 10 & 11 & 12 & & 16 & Y & 41 & 42 & 43 \\
2 & 5 & B & 13 & 14 & & & 17 & G & 44 & 45 & 46 \\
& 6 & O & 15 & 16 & 17 & & 18 & B & 47 & 48 & \\
& 7 & B & 18 & 19 & & & 19 & O & 49 & 50 & 51 \\
& 8 & Y & 20 & 21 & 22 & & 20 & B & 52 & 53 & \\
& 9 & G & 23 & 24 & 25 & & 21 & Y & 54 & 55 & 56 \\
& 10 & B & 26 & 27 & & & 22 & G & 57 & 58 & 59 \\
& 11 & Y & 28 & 29 & 30 & & 23 & B & 60 & 61 & \\
& 12 & G & 31 & 32 & 33 & & 24 & Y & 62 & 63 & 64 \\
& & & & & & & 25 & G & 65 & 66 & 67 \\
& & & & & & & 26 & B & 68 & 69 & \\
& & & & & & & 27 & O & 70 & 71 & 72 \\
& & & & & & & 28 & B & 73 & 74 & \\
& & & & & & & 29 & Y & 75 & 76 & 77 \\
& & & & & & & 30 & G & 78 & 79 & 80 \\
& & & & & & & 31 & B & 81 & 82 & \\
& & & & & & & 32 & Y & 83 & 84 & 85 \\
& & & & & & & 33 & G & 86 & 87 & 88 \\
\end{tabular}
\end{table}
Table 1: _Table of correspondence between the numbers of the nodes and their relations in the tree. Nodes of colour \(\mathbf{B}\) have two sons, while nodes of colours \(\mathbf{O}\), \(\mathbf{Y}\) and \(\mathbf{G}\) have three of them. Here, \(\ell\)-s., \(m\)-s. and \(r\)-s. mean left-hand side son, middle one and right-hand side one respectively; by 'c.' we mean the colour of the node._
to the states of its neighbours and to its own state. The change is dictated by a finite automaton which is the same for each cell. A regular tiling is an appropriate space for implementing cellular automata: a cell is the combination of a tile together with the finite automaton ruling the change of states. The tile is called the **support** of the cell. Let \(T\) be a tile and let \(N(T)\) be the set of its neighbours. By regular, we mean that the number of elements of \(N(T)\) is the same for every \(T\). The heptagrid satisfies that requirement. Moreover, there is an algorithm to locate the tiles which is linear in time in the size of the code attached to each tile, see [3] for instance. From now on, we indifferently say tile or cell for a heptagon of the heptagrid, confusing the cell with its support. Sometimes, we also refer to a tile or to a cell by the number of its support in the sector it lies in, in the figure illustrating the situation in which the cell is considered.
The way the automaton manages the change of states is defined by what is called a **transition function**, which is often implemented as a table. That function is called the **program** of the automaton and we organise it in a **table** which is displayed in Section 3. In the present paper, we append a constraint to the transition function: states are given **weights**, which are non-negative integers. Consider a cell \(c\) together with its neighbours \(c_{i}\), with \(i\in\{1..7\}\). Let \(s_{i}\) be the state of neighbour \(c_{i}\) and let \(w_{i}\) be the weight of \(s_{i}\). Call the **neighbourhood weight** of \(c\) the sum \(s=\sum_{i=1}^{7}w_{i}\). In our paper, the new state of \(c\) is defined by its current state together with \(s\), its neighbourhood weight. In particular, the transition from the current state to the new one does not depend on the positions of the neighbours but only on the sum of their weights. That entails another constraint on the program, as our cellular automaton is deterministic: a current state with a given neighbourhood weight gives rise to a single new state. A cellular automaton whose transitions obey such a constraint is called **weighted**.
In Section 3, we define tables on which the transitions of the cellular automaton are based. The tables have two entries: the current state of the cell and the neighbourhood weight for that cell. The tables are gathered in Table 7 which provides us with the new state of the cell.
The alphabet \(\mathbb{A}\) of the automaton attached to each cell is the set of the possible states taken by the cell. To each state, we attach a **weight**, as already indicated. The function giving the weight of each state can be represented by the sequence of those weights, which gives an order on the states. So, writing \(\mathbb{A}=\{e_{0},...,e_{n}\}\), the weights are \(\{w_{0},...,w_{n}\}\). As we have already defined the neighbourhood weight of a cell, we presently turn to the construction of the table.
Now that the global setting is given, we proceed as follows: Section 2 indicates the main lines of the implementation, which is precisely described in Subsection 2.2. At last, Section 3 gives the tables ruling the transitions followed by the automaton. Subsection 2.2 also contains a few figures which illustrate the application of the transition function. Those figures were established from pieces of figures drawn by a computer program which applied the transition function of the automaton to an appropriate window in each of the configurations described in Subsection 2.2. The computer program also computed the neighbourhood weight of each cell; it established the tables displayed in Section 3.
That allows us to prove the following property:
**Theorem 1**: _There is a weakly universal weighted cellular automaton in the heptagrid which has six states. The highest weight of the states is \(34\) and the maximal neighbourhood weight is \(156\). The table contains \(137\) entries._
The states and their weights used in the simulation proving the theorem are the following ones:
\begin{tabular}{l c c c c c c} states & W & Y & B & R & M & V \\ weights & 0 & 1 & 4 & 12 & 29 & 34 \\ \end{tabular}
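As an illustration of how such a table is consulted, here is a minimal Python sketch of one update step; the weights are those listed above, but the single table entry below is a toy of our own invention, not taken from the paper's 137-entry table of Section 3. We also assume that couples absent from the table leave the state unchanged, as for the blank cells kept quiescent by line 0 of the table.

```python
# Weights of the six states, as listed above.
WEIGHT = {"W": 0, "Y": 1, "B": 4, "R": 12, "M": 29, "V": 34}

def neighbourhood_weight(neighbours):
    """Sum of the weights of the states of the 7 neighbours of a cell."""
    assert len(neighbours) == 7
    return sum(WEIGHT[s] for s in neighbours)

def step(state, neighbours, table):
    """One deterministic update: lookup keyed by (state, neighbourhood weight)."""
    return table.get((state, neighbourhood_weight(neighbours)), state)

# Toy entry: a W-cell seeing two Y-track cells and a blue front (1 + 1 + 4 = 6)
# becomes the new front, in the spirit of scheme (3) below.
toy_table = {("W", 6): "B"}
print(step("W", ["Y", "Y", "B", "W", "W", "W", "W"], toy_table))  # -> 'B'
```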
The construction of the table had to satisfy several constraints. The obvious one is that a deterministic cellular automaton requires that a single new state be defined for any couple consisting of the state of a cell and of its neighbourhood weight. To satisfy that constraint, the choice of the weights is not arbitrary. In particular, it was not possible to reduce the highest weight.
Note that, since apart from the current state the transition function depends only on the neighbourhood weight, the new state does not depend on the positions of the states in the neighbourhood. In particular, the cellular automaton is automatically rotation invariant.
It is the place here to discuss the choices I made in order to obtain the result stated in Theorem 1.
First, I deal with the number of states. That number is dictated by the necessity to get two types of locomotives in order to implement the crossings, as the working of that structure requires being able to discriminate between the two types. In previous papers, I did that by introducing the BR-pattern as a simple locomotive and the BRR-pattern as a double locomotive. I tried to do that here too and it occurred that, even with the weight 500 for the heaviest state, it does not work. So, I decided to introduce the pattern MR as another colour of locomotive. Front and rear have to be different, and also different from the blank state W, the colour of the immense majority of cells, which remain in that state thanks to line 0 of the table. Accordingly, that entails at least four states. But it is not enough. To signalise the path, we need at least one more state: either it is used to mark the track on which the locomotive moves or it is placed around the path in a way which signalises it non-ambiguously. This leads us to five states. One more state is the minimum to introduce the distinction required by the different mechanisms used to implement the switches. Indeed, it is not possible to rely on the neighbourhood alone: a cell of the track has at least two neighbours belonging to the tracks, so that sometimes four to five cells remain free around a cell of the track. Moreover, the track crosses the structures needed to implement the switches.
That discussion leads us to the weights. Their values were chosen in order to distinguish between the states as much as possible. I postpone that discussion to Section 3, as the choice of the weights was motivated by a scrutiny of Tables 2 up to 6.
## 2 Main lines of the computation
In the present paper, as we go back to weak universality, we take the general frame of previous papers of the author; see [5] for references to those papers. Also, we refer the reader to [6, 7] for detailed explanations of the implementation.
### The railway model
The simulation is based on the railway model devised in [9], which lives in the Euclidean plane. It consists of **tracks** and **switches**, and the configuration of all switches at time \(t\) defines the configuration of the computation at that time. There are three kinds of switches, illustrated by Figure 2. The changes of the switch configurations are performed by a locomotive which runs over the circuit defined by the tracks and their connections organised by the switches.
A switch gathers three tracks \(a\), \(b\) and \(c\) at a point. In an active crossing, the locomotive goes from \(a\) either to \(b\) or to \(c\). In a passive crossing, it goes to \(a\) either from \(b\) or from \(c\).
In the fixed switch, the locomotive always goes from \(a\) to the same track, either \(b\) or \(c\), which is called the **selected track**. The passive crossing of the fixed switch is possible and does not change the selected track. The flip-flop switch is crossed actively only: if the locomotive was sent from \(a\) to \(b\), to \(c\) by the switch, it will be sent to \(c\), to \(b\) respectively at the next passage. The memory switch can be crossed actively or passively; the track taken by the locomotive in an active passage is the track taken by the locomotive during the last passive crossing. In the initial configuration, the selected track of each memory switch is fixed by the configuration.
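The three switch behaviours can be captured in a few lines; the following Python sketch is our own abstraction of the semantics just described, not anything taken from the paper.

```python
class FixedSwitch:
    """An active crossing always leaves on the same branch; a passive
    crossing changes nothing."""
    def __init__(self, selected):       # selected is 'b' or 'c'
        self.selected = selected
    def cross_active(self):
        return self.selected
    def cross_passive(self, branch):    # arriving from 'b' or 'c', towards 'a'
        return "a"

class FlipFlopSwitch:
    """Crossed actively only; the selected branch toggles after each passage."""
    def __init__(self, selected):
        self.selected = selected
    def cross_active(self):
        out, self.selected = self.selected, ("c" if self.selected == "b" else "b")
        return out

class MemorySwitch:
    """Active crossings leave through the branch of the last passive crossing."""
    def __init__(self, selected):
        self.selected = selected
    def cross_active(self):
        return self.selected
    def cross_passive(self, branch):
        self.selected = branch
        return "a"
```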
Figure 3 illustrates the circuit which stores one bit of information. The locomotive may enter the circuit either through the gate \(R\) or through the gate \(W\).
If it enters through the gate \(R\), where a memory switch sits, it goes either through the track marked with 1 or through the track marked with 0. When it crosses the switch through track 1, 0, it leaves the unit through the gate \(B_{1}\), \(B_{0}\) respectively. Note that on both ways there are fixed switches sending the locomotive to the appropriate gate \(B_{i}\), and that when the locomotive leaves the unit, no switch was changed. If the locomotive enters the unit through the gate \(W\), it is sent to the gate \(R\), either through track 0 or through track 1 from \(W\). Accordingly, the locomotive arrives at \(R\), where it crosses the switch passively,
Figure 2: _The switches used in the railway circuit of the model. To left, the fixed switch, in the middle, the flip-flop switch, to right the memory switch. In the flip-flop switch, the bullet indicates which track has to be taken._
leaving the unit through the gate \(E\) thanks to a fixed switch leading to that latter gate. When the locomotive took track 0, 1 from \(W\), the flip-flop afterwards indicates track 1, 0 respectively and the locomotive arrives at \(R\) through track 1, 0 of \(R\). The tracks are numbered according to the value stored in the unit. Note that when the locomotive leaves the unit, two switches were changed: the flip-flop at \(W\) and the memory switch at \(R\).
**Figure 3**: _The basic element containing one bit of information._
By definition, the unit contains 0, 1 when both tracks from \(W\) and from \(R\) are 0, 1 respectively. So, as seen from that study, the entry through \(R\) performs a reading of the unit while the entry through \(W\) changes the unit from 0 to 1 or from 1 to 0: the entry through \(W\) should be used when it is needed to change the content of the unit and only in that case. The structure works like a memory which can be read or rewritten. It is the reason why we call it the **one-bit memory**.
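Continuing the sketch above, under the same assumptions and with our own naming, the one-bit memory of Figure 3 combines a flip-flop at \(W\) with a memory switch at \(R\); here `'b'` stands for track 1 and `'c'` for track 0.

```python
class OneBitMemory:
    """Abstract behaviour of the circuit of Figure 3."""
    def __init__(self, bit=0):
        track = "b" if bit else "c"
        self.W = FlipFlopSwitch(track)   # flip-flop at gate W
        self.R = MemorySwitch(track)     # memory switch at gate R

    def read(self):
        """Enter through R: exit through B_1 or B_0; nothing is changed."""
        return 1 if self.R.cross_active() == "b" else 0

    def write(self):
        """Enter through W: the stored bit is toggled; exit through E."""
        taken = self.W.cross_active()    # the flip-flop toggles afterwards
        other = "b" if taken == "c" else "c"
        self.R.cross_passive(other)      # arrives at R on the opposite track
        return "E"

m = OneBitMemory(0)
m.write()
print(m.read())   # -> 1
```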
We shall see how to combine one-bit memories in the next subsection, where we introduce several changes to the original setting for the reasons indicated there.
### Tuning the railway model
We first look at the implementation of the tracks in Sub-subsection 2.2.1 and at how it is possible to define the crossing of two tracks. In Sub-subsection 2.2.2, we consider preliminary structures from which we define the switches described in Sub-subsection 2.2.3. Then, in Sub-subsection 2.2.4, we see how the one-bit memory is implemented in the new context and, in Sub-subsection 2.2.5, how we use it in various places. Last but not least, we indicate how registers are implemented in Sub-subsection 2.2.6.
#### 2.2.1 The tracks
The tracks play a key role in the computation, as important as instructions and registers: indeed, they convey information without which no computation is possible. Moreover, as can be seen in many papers of the author, the present one included, their implementation is not an obvious issue and must always be addressed.
It is not useful to list the similarities and differences between the present implementation and those of my previous papers. The best is to focus on the implementation used in the present paper. If the reader is interested in the comparison with previous implementations, the references already indicated give him/her access to the corresponding papers.
In the present paper, and in the case of the heptagrid, we consider that the tracks are one-way. As the locomotive presently consists of two consecutive cells, the front one and the rear one, one-way tracks are not mandatory, but they will be more convenient. By construction, the rear is **red**, state R. The front may be either **blue**, state B, or **mauve**, state M. The reason for two kinds of locomotives will be explained later.
Here too, an element of track consists, most often, of a Y-cell having two Y-neighbours on opposite sides of the cell. Note that the weight of Y is 1. The structure is illustrated by Figure 4.
**Figure 4**: _To left, a single element of a track. To right, two examples of tracks: one of them following a line, the other an arc of a circle. To left, note the rays delimiting the sectors in order to facilitate the location of the cells._
The left-hand side picture of Figure 4 illustrates an element of a track: a Y-cell. That part of the figure also shows the sectors we use to explain the right-hand side picture, in which we show two types of tracks: one of them follows a line of the hyperbolic plane, here a mid-point line, and the other follows an arc of a circle of the hyperbolic plane. The supports of the cells constituting the track along the line are, from left to right: (3,54), (3,20), (3,7), (3,2), (2,1), (1,1), (1,2), (1,5), (1,13) and (1,34). The other cells of the track are not visible in the figure. For the arc of a circle, the tiles are, again from left to right: (4,2), (4,3), (4,4), (5,2), (5,3), (5,4) and (6,2). Later, Figure 5 presents two segments of lines joined by an arc of a circle.
The motion is organised according to the following scheme:
\[\mathtt{WWWW}\ \rightarrow\ \mathtt{FWWW}\ \rightarrow\ \mathtt{RFWW}\ \rightarrow\ \mathtt{WRFW}\ \rightarrow\ \mathtt{WWRF}\ \rightarrow\ \mathtt{WWWR} \tag{3}\]
where F denotes the front cell and R denotes the rear one, the arrows marking successive times. The motion depends on the fact that F and R receive different weights, which allows the motion to take place. As the total weight of W and F is involved, and that weight only, the motion may occur in both directions on the track. However, we require that a single locomotive moves on a given track at any considered moment and that all tracks are one-way. As both versions of F have a weight which is different from that of R, both kinds of locomotives can move. The tracks are organised along arcs of circles and along segments of lines.
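A minimal Python sketch of scheme (3), under a toy update convention of our own rather than the paper's weighted transition table: the front F advances one track cell per step and leaves the rear R one cell behind.

```python
def run(track_len, steps):
    """Print a locomotive (front F, rear R) moving right on a one-way track."""
    cells = ["W"] * track_len
    print("".join(cells))               # the idle window of scheme (3)
    cells[0] = "F"                      # the front enters at the left end
    for _ in range(steps):
        print("".join(cells))
        nxt = ["W"] * track_len
        for i, s in enumerate(cells):
            if s == "F":
                nxt[i] = "R"            # the front leaves a rear behind...
                if i + 1 < track_len:
                    nxt[i + 1] = "F"    # ...and moves to the next track cell
        cells = nxt

run(4, 5)   # prints exactly the six configurations of scheme (3)
```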
**Figure 5**: _The idle configuration of a path where an arc of a circle joins two segments of lines. Here, the radius of the circle supporting the arc of Y-tiles is \(2\)._
**Figure 6**: _Top two rows: motion of a blue locomotive on the track of Figure 5. Bottom rows: motion of a mauve locomotive on the same track._
Figure 5 illustrates a path where an arc of a circle joins two segments of lines. The arc goes from (2,2) to (5,2), the Y-cells lying on a circle of radius \(2\) around the central cell. One segment goes through (2,2), (1,1), (7,1), (7,2), (7,5), (7,13) and (7,34). The other segment goes through (5,2), (5,7), (5,20) and (5,54).
Figure 6 illustrates the motion of a blue locomotive, top two rows, and that of a mauve one, bottom two rows, on the track just defined and illustrated by Figure 5. Figure 5 also illustrates the notion of **idle configuration**: a window in which the locomotive is not present in the disc of radius 3 around cell 0.
#### 2.2.2 Auxiliary structures and crossings
In order to implement switches in our setting, we need auxiliary structures which we already introduced in other papers. Those structures are the fork, the converters and the filters, whose idle configurations are given by Figure 7.
The fork allows the duplication of a locomotive, whichever its colour. The converters change a blue locomotive into a mauve one and conversely: the converter with two B-cells changes a mauve locomotive into a blue one while the one with two M-cells performs the opposite conversion. The figures are focused on a disc of radius 3 around a central cell \(c\): it is a **window** which allows us to see what happens in the neighbourhood of \(c\).
The cells of the tracks joining at the fork are: (5,40), (5,15), (5,5) and (5,2) for the track arriving at (4,1); then (3,1), (3,2), (3,6), (3,17) and (3,46) for the left-hand side track leaving the fork; and then (5,1), (6,2), (6,6), (6,17) and (6,46) for the right-hand side track leaving that structure; see the leftmost picture in the top row of Figure 7 to locate the cells. In the figures, the cells of the track consist of Y-cells.
Figure 8 illustrates the move of a locomotive through the fork: top row, a blue one, bottom row, a mauve one. Note how two locomotives are created from the arriving one in the neighbourhood of cell 0.
Figure 7: Idle configurations. Top row, from left to right: the sectors, the fork and two converters. Bottom row: the filters. The sectors allow us to locate the cells in the configurations. The track consists of the Y-cells.
The converter changes the colour of a locomotive into the other colour. The track is defined by the cells (6,21), (6,8), (6,3), (6,1), 0, (4,1), (4,4), (4,12) and (4,33). The converter consists of two V-cells and two cells of the colour to which it changes the arriving locomotive, whose colour is assumed to be opposite to that of the converter. The V-cells are at the tiles (3,1) and (7,1) while the tiles which are both either B or M are (1,1) and (2,1). Figure 9 shows us that the change of colour of the front of the locomotive is performed at cell 0, as that cell can see the front of the locomotive when it arrives at the cell (6,1).
**Figure 9**: _Top row: the structure changes a blue locomotive into a mauve one. Bottom row: conversely, the change of a mauve locomotive into a blue one._
The filters allow the passage of a locomotive of a given colour and prevent the passage of a locomotive of the other colour. The authorised colour is represented by that of the cell (1,1). The track follows a segment of a line, from right to left: (6,21), (6,8), (6,3), (6,1), (1,1), (3,1), (3,4), (3,12) and (3,33). The central cell has three non-W and non-Y neighbours: a V-one at (7,1), an R-one at (5,1) and one of the colour of the filter, either B or M, at (1,1). Moreover, for technical reasons, the cells (1,1) and (7,1) are decorated with V-cells at (1,3), (1,4) and at (7,2), (7,3) respectively. The R-cell at (5,1) is decorated with two M-cells at (5,2) and (5,4). Last but not least, a Y-cell occurs at (1,2), a neighbour of the cell (1,1).
Figure 10 illustrates the move through a filter. The filter lets the locomotive go on its way if it is of the same colour and stops it when the colour is different. Note the configurations of cell 0 when the filter stops a locomotive: the stopping is obtained because cell 0 always remains Y when the locomotive does not have the required colour; the front vanishes and then the rear does the same.
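Abstracting away the cell-level details, the three auxiliary structures act on a locomotive's colour as follows; this is our own minimal sketch, in the same toy abstraction as before.

```python
def fork(loco):
    """Duplicate a locomotive of either colour onto both exit tracks."""
    return (loco, loco)

def convert(loco):
    """Change a blue front into a mauve one and conversely."""
    return "M" if loco == "B" else "B"

def filter_(colour, loco):
    """Let a locomotive pass only if its colour matches the filter's."""
    return loco if loco == colour else None   # None: front and rear vanish

left, right = fork("B")
print(convert(left), filter_("M", right))     # -> M None
```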
In Section 3, we give the entries of the table ruling those different motions. Many entries are used for that purpose.
#### 2.2.3 The switches
This section follows the implementation described in [3], for instance. We reproduce it here for the reader's convenience; however, it is much simplified in the case of several structures. The illustrations of the section show us a window when it is possible to do so, or a diagram which illustrates how auxiliary structures are combined to constitute a switch. When a window can be used, we give the picture illustrating the sectors and the idle configuration; another figure illustrates the motion of a locomotive across the window.
We start the study of the switches with the fixed switch. Remember that such a switch can be crossed actively or passively. As the tracks we consider are all one-way, there is no need of a switch for an active crossing of a fixed switch. When it is needed, the correspondence of the active switch with the passive one will be explained. Presently, we consider the passive switch only, whose idle configuration is illustrated by Figure 11.
Figure 10: _Top rows: the filter lets the locomotive go on its way; topmost, the blue locomotive, below, the mauve one. Bottom rows: the filter stops the locomotive of the other colour._
The left-hand side part of Figure 11 allows the reader to locate the cells of the right-hand side part of the figure, here the idle configuration of the fixed switch. Note that the two tracks arriving at cell 0 are defined by the following cells: (3,55), (3,21), (3,8), (3,3) and (3,1) from the left-hand side, together with the cells (6,54), (6,20), (6,7), (6,2) and (5,1) from the right-hand side. The exit
Figure 11: _To left, the sectors, to right, the idle configuration of a fixed passive switch._
Figure 12: _The passive crossing of the fixed switch. Top rows, the crossing through the left-hand side branch, bottom rows, the crossing through the right-hand side branch._
track is defined by the cells (4,1), (4,4), (5,5), (5,6), (5,18) and (5,48). That can be checked in the right-hand side part of Figure 11, where the cells of the track are Y-cells. Note the difference of configuration with the fork.
In the fork, there is no special cell at tile 0, which is simply a Y-cell. In the fixed switch, the central cell is a V-cell; it is decorated with another one in order to distinguish it from other V-cells which occur in other configurations. The role of the central V-cell is to allow the locomotive arriving at (4,1) from (3,1) to go to (4,4) and the further Y-cells and, at the same time, to prevent the locomotive from going on to (5,1) and the further Y-cells, which is what would happen if the central cell were Y: we would get a fork. Preventing the locomotive from going to (5,1) is obtained by a change of colour of the central cell: when a locomotive arrives at (3,1) or at (5,1), the V-cell at 0 becomes an R-cell for one step of the computation, which is enough for our purpose. From the definition of a weighted cellular automaton, it is plain that if the table works for locomotives coming from the left-hand side branch, it also works for locomotives coming from the right-hand side branch.
With the structures we have gathered up to now, we can describe how crossings are implemented. With a rather large number of states, it is possible to implement direct crossings, see for instance [1]. When the number of states is relatively small, as is the case in previous papers, auxiliary structures need to be associated in a complex way. Here, it is possible to implement an almost direct crossing, as illustrated by Figure 13.
**Figure 13**: _The structure of a crossing._
The idea is to use both the existence of two types of locomotives and the filters, which allow a locomotive of a definite colour, and only it, to go on its way. The required working is facilitated by the fact that the difference between the two types of locomotives lies in the front. The idea is to associate one track with the blue locomotives and the other with the mauve ones. By associating a fixed switch with a fork, we obtain the configuration of Figure 13. In the crossing of
the figure, a locomotive arriving from \(A\) is supposed to go onto the \(C\)-branch while a locomotive arriving from \(D\) is supposed to go onto the \(B\)-one. On the track coming from \(D\), before the fixed switch, we put a converter which changes a blue locomotive into a mauve one.
Assume for a while that one track is blue and the other is mauve. By those words, we mean that on a blue, mauve track, a B-, an M-locomotive is assumed to run, and not an otherwise coloured one. After the fork, it is enough to place a B-filter on the blue track and an M-one on the mauve track. Accordingly, only the locomotive of the appropriate colour is allowed to go further. We may extend that situation to any crossing: either the two tracks are of the same colour or of opposite colours. When the tracks have the same colour, it is enough to change the colour of the locomotive on one track by giving it the opposite colour and to restore the required colour after the crossing of the fork. As it is possible to change any locomotive into a locomotive of the opposite colour, we may perform any crossing according to what we already said. Figure 13 illustrates the case of a crossing of two blue tracks.
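Combining the small functions sketched above, the crossing of Figure 13 (two blue tracks) can be traced as follows; again a toy abstraction under our own naming, not the paper's table.

```python
def crossing(entry):
    """Trace a blue locomotive through the crossing of Figure 13.
    `entry` is 'A' or 'D'; the result tells through which branch it leaves."""
    loco = "B"
    if entry == "D":
        loco = convert(loco)          # the D-track carries a converter: B -> M
    # the fixed switch merges both tracks onto the fork, which duplicates:
    towards_C, towards_B = fork(loco)
    out_C = filter_("B", towards_C)   # a B-filter guards the C-branch
    out_B = filter_("M", towards_B)   # an M-filter guards the B-branch
    if entry == "D" and out_B:
        out_B = convert(out_B)        # restore the blue colour after the filter
    return {"C": out_C, "B": out_B}

print(crossing("A"))   # -> {'C': 'B', 'B': None}: exits through C
print(crossing("D"))   # -> {'C': None, 'B': 'B'}: exits through B
```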
For the crossing, we can use the filters as fixed structures. However, to implement the other switches, we need to be able to change the working of a filter: we need programmable structures. In fact, the filter illustrated in Figure 7 can be used for that purpose. Figure 14 illustrates how we can perform the change of a filter.
A close look at Figure 7 shows one element of track just above the cell indicating the colour selected by the filter. Accordingly, a locomotive may arrive at that point, provided a track arrives there. We assume that a mauve locomotive arrives; it creates a W-cell at (1,2), which is in contact with (1,1), the cell bearing the colour accepted by the filter. That W-cell reduces the neighbourhood weight of (1,1), so that the cell changes its colour: a B-one becomes M while an M-one becomes B. The motion is illustrated by Figure 14.
**Figure 14**: _Top, the mauve locomotive changes a blue filter into a mauve one. Bottom, a mauve locomotive again changes the mauve filter into a blue one._
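In the toy abstraction used so far, the programmable filter of Figure 14 adds a single operation (our own naming):

```python
def toggle(filter_colour):
    """A mauve signal locomotive flips the colour a filter accepts."""
    return "M" if filter_colour == "B" else "B"
```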
We are now ready to investigate the flip-flop and the memory switches.
The flip-flop switch and both parts of the memory switch require a much more involved structure. The global view of an idle configuration of the flip-flop is illustrated by Figure 15. The switch is crossed actively only and we also impose that it is crossed by a blue locomotive only. A fork operates the first action of the switch: the arriving locomotive is duplicated, with a copy on each branch leaving the fork. On one branch sits a blue filter: it lets the locomotive go on along the track; it is the selected branch. On the other branch, the locomotive is stopped by a mauve filter.
**Figure 15**: _Scheme of the implementation of a flip-flop switch. Note the filters. Note the mauve tracks: they are segments of lines, not arcs of circles._
However, once the locomotive has crossed the switch, the selected branch must change. That action is performed as follows. Before the blue filter, there is a fork \(\varphi\). One branch of it is the track defined by the switch. The other branch of the fork leads to a converter which converts the blue locomotive running through it into a mauve one. After the converter, the track on which the mauve locomotive runs arrives at a fork \(\xi\) whose branches \(\beta_{\ell}\) and \(\beta_{r}\) reach the filters. As the filters are reached by a mauve locomotive on the appropriate track, they exchange their colours: the blue filter becomes mauve and the mauve one becomes blue. Accordingly, the selected track is changed. Of course, the mauve filter is also followed by a fork which operates symmetrically with respect to \(\varphi\): on its other branch, there is a converter from blue to mauve and the mauve track arrives at \(\xi\) too, thanks to a fixed switch, as illustrated by Figure 15.
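Putting the pieces together, one active crossing of the flip-flop assembly of Figure 15 can be traced as below; this is our own sketch, with `fork`, `filter_` and `toggle` as defined earlier, the mauve signal path being collapsed into one step.

```python
class FlipFlopAssembly:
    """Figure 15: a fork, a blue and a mauve filter, and a signal path that
    toggles both filters after every crossing."""
    def __init__(self, left="B", right="M"):
        self.left, self.right = left, right    # colours of the two filters

    def cross(self):
        copy_l, copy_r = fork("B")             # the blue locomotive is duplicated
        exit_l = filter_(self.left, copy_l)
        exit_r = filter_(self.right, copy_r)
        # the surviving copy also feeds the fork phi: a converter makes a mauve
        # signal which reaches both filters via xi and exchanges their colours
        self.left, self.right = toggle(self.left), toggle(self.right)
        return "left" if exit_l else "right"

ff = FlipFlopAssembly()
print(ff.cross(), ff.cross(), ff.cross())      # -> left right left
```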
Presently, we turn to the memory switch, active and passive parts.
We first deal with the active part. It looks like the flip-flop switch, with the difference that the change of the filters is triggered from outside. Figure 16 illustrates both parts of the memory switch: to left, the active part of the switch; to right, its passive part.
The active switch looks like a simplified flip-flop switch. The action on the filters is operated from a fork \(S\), on the left-hand side part of Figure 16, which may receive a mauve locomotive sent from the structure illustrated by the right-hand side part of that figure.
Let us describe the working of the passive switch.
The locomotive, a blue one, arrives through \(P\) or through \(Q\), each point sitting on either side of \(F\), as shown in the figure. At \(P\) and at \(Q\), a fork is sitting. Assume that the locomotive comes through \(P\); if it comes through \(Q\), the argument is symmetrical. From \(P\), two locomotives are sent: one goes to \(F\) and then follows the exit track of the fixed switch sitting at \(F\), while the other locomotive is used to change the filters at both switches, if needed. If the selected track is the one where \(P\) sits, the filter met by the locomotive is mauve, so that it is stopped: the configuration of the switch, both its active and passive parts, is unchanged. If the side of \(P\) is not the selected branch of the switch, the filter met after \(P\) is blue, so that the locomotive goes on its way. It becomes mauve and it is sent to two forks, \(S\) on the active switch and \(U\) on the passive one, and each of those forks sends a mauve locomotive to the filters of its switch in order to change the selected track. Note that on the active switch the selected track bears a blue filter while on the passive switch the selected track bears a mauve one, as illustrated in Figure 16. Note the more complex configuration of the passive memory switch: we can see a crossing of mauve tracks, which means that mauve locomotives run on those tracks. Now, as already mentioned, such a crossing is dealt with like the crossing of tracks where blue locomotives run; the difference is that on one of the mauve tracks, the mauve locomotive becomes blue for a while in order to get a correct crossing. Using mauve locomotives as signals allows us to transport a distinction detected in one part of the circuit to another part, or to several other parts, where the distinction needs to be reported.
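In the same toy abstraction, a passive crossing of the memory switch pair of Figure 16 behaves as follows; our own naming, with the signal propagation collapsed into one step.

```python
class MemorySwitchPair:
    """Figure 16: the passive part remembers the last passive crossing and,
    when needed, re-programs the filters of both the active and passive parts."""
    def __init__(self, selected="P"):
        self.selected = selected        # branch of the last passive crossing

    def cross_passive(self, side):      # side is 'P' or 'Q'
        if side != self.selected:
            # the copy passes the blue filter, becomes mauve and, via the
            # forks S and U, toggles the filters of both parts of the switch
            self.selected = side
        return "F"                      # the locomotive always exits at F

    def cross_active(self):
        return self.selected            # exit through the remembered branch

ms = MemorySwitchPair("P")
ms.cross_passive("Q")
print(ms.cross_active())                # -> Q
```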
Note two other points. First, each fixed switch and each fork requires a disc of radius at least three, and a circle of radius 3 contains 56 tiles; in fact, the number of tiles of a circle grows exponentially with its radius, so that the passive memory switch requires a huge number of tiles. The second remark is that, in general, a single locomotive operates in
Figure 16: _To left, the active memory switch, to right, the passive one._
the whole circuit of the computation, with the exception of an auxiliary one behaving as a signal sent from one part of the circuit to another. We neglect here the copied locomotive which is later destroyed by the appropriate filter. That property means that between two passages of the locomotive through the same passive memory switch, there is enough time for the signal locomotive to operate the change required by the definition of the switch, when such a change has to be operated.
#### 2.2.4 The one-bit memory
It is now time to implement the one-bit memory. Figure 17 illustrates the construction.
We can see the active memory switch at \(\mathbf{R}\) and the passive one at \(\mathbf{E}\). The dark letters standing by the blue circle indicate the **gates** of the one-bit memory: \(\mathbf{W}\), \(\mathbf{R}\), \(\mathbf{E}\), \(\mathbf{0}\) and \(\mathbf{1}\). We can easily see that if the locomotive enters the unit through the gate \(\mathbf{R}\), where an active memory switch is sitting, then it leaves the memory through the gate \(\mathbf{0}\) or through the gate \(\mathbf{1}\), depending on the information stored in the memory: that information is provided to the unit by the positions of the switch at \(\mathbf{W}\) and those at \(\mathbf{R}\) and \(\mathbf{E}\). Note that the positions at \(\mathbf{R}\) and at \(\mathbf{E}\) are connected by the path from \(\mathbf{E}\) to \(\mathbf{R}\), see the figure.
**Figure 17**: _To left, the theoretical structure of a one-bit memory: Figure 3 is replicated for the reader's convenience. To right, the idle configuration of the one-bit memory in the heptagrid. Note the four crossings in the implementation. Note that the connection from \(\mathbf{E}\) to \(\mathbf{R}\) is realised by a segment of a line, a shorter path, which is a mauve track, as used for a track where a mauve locomotive is supposed to run. Note the crossings of the mauve track with blue ones, which we already discussed._
When the locomotive enters the memory through the gate \(\mathbf{W}\), where a flip-flop switch is sitting, it goes to \(\mathbf{E}\) through one of the two tracks leaving the switch. If it goes through the track marked by \(\mathbf{0}\), \(\mathbf{1}\), it arrives at \(\mathbf{E}\), a passive memory switch, by the track marked with the opposite symbol, \(\mathbf{1}\), \(\mathbf{0}\) respectively. Indeed, when the locomotive crosses \(\mathbf{W}\), the passage makes the selected track change: if it went through one track, after the passage, in particular when the locomotive arrives at \(\mathbf{E}\), the new selected track at \(\mathbf{W}\) is the track through which the locomotive did not pass. So the track marked by one symbol at \(\mathbf{W}\) should be marked by the opposite one at \(\mathbf{E}\). Note that the passive memory switch at \(\mathbf{E}\) and the active memory switch at \(\mathbf{R}\) correspond to the memory switch at \(R\) in Figure 3, which is reproduced for the convenience of the reader. It is the reason why those active and passive memory switches are connected by a mauve track in Figure 17. Note that between two consecutive visits to the same one-bit memory, it is assumed that there is enough time for the possible rewriting triggered by a previous visit to be completed.
As the one-bit memory will be used later, we introduce a simplified notation: in Figure 17, the memory structure is enclosed inside a blue circle, at whose circumference the gates are labelled by the same symbols as in the hyperbolic picture. In the next figures, when a one-bit memory is used, we indicate it by a light blue disc with, at its border, the five gates mentioned in Figure 17.
#### 2.2.5 From instructions to registers and back
As will be explained in Sub-subsection 2.2.6, the locomotive arrives at a register at a point which depends on the type of the operation to be performed.
It depends on the type only, whether it is to decrement or to increment; it does not depend on which instruction of the program required the execution of that operation. Moreover, the return path of the locomotive, once it has performed its operation, is the same in most cases.
Accordingly, when the locomotive goes back from the register to the program, it is important to define the point to which it must return. Correspondingly with what we said, the unique solution is to keep track of that information before the locomotive enters the register, as that information disappears when the locomotive goes back after performing its instruction.
To that goal, we define a structure \(\mathbb{D}_{S}\) which is illustrated by Figure 18. Each register is endowed with such a structure.
**The \(\mathbb{D}_{S}\)-structure**
There are two parts in the structure: one for the arrival of the locomotive from the program, the other for the return of the locomotive when it comes back from the register once its operation is completed. From the program, the locomotive is sent on a specific track attached to the instruction it has to perform. Before arriving at the register, the locomotive first crosses a structure \(\mathbb{D}_{I}\), \(\mathbb{D}_{D}\) if it has to increment, to decrement the register respectively. That structure, which we describe further on, remembers which instruction had to be performed. That information is recovered by the locomotive when it goes back from the register, which allows it to arrive at the appropriate point of the program. Between \(\mathbb{D}_{I}\) or \(\mathbb{D}_{D}\) and the register, the locomotive crosses \(\mathbb{D}_{S}\), which remembers the type of the instruction: to increment the register or to decrement it.
The description of \(\mathbb{D}_{S}\) allows us to introduce various features we use in the description of \(\mathbb{D}_{I}\) and of \(\mathbb{D}_{D}\). In particular, we use different colours for the tracks in order to distinguish those on which the locomotive runs when it has to increment the register from those it runs on when it has to decrement it.
**Figure 18**: _The structure which memorises which type of instruction is sent to the register. Note the mauve tracks dispatching the information about the type of the instruction to all active memory switches: at \(A\), at \(B\), at \(C\) and at \(E\). The filters sitting close to the entry are also changed but the corresponding mechanism is not illustrated in order to keep the figure as readable as possible. Note that the track from \(A\), \(B\) to \(\mathbf{W}\) crosses the track from \(\mathbf{1}\), \(\mathbf{0}\) to \(A\), \(B\) respectively._
When the locomotive arrives at \(\mathbb{D}_{S}\), it enters through the track marked with \(\mathbf{I}\) or with \(\mathbf{D}\), lighter blue, lighter green, for an incrementing, a decrementing instruction respectively. On the track, the locomotive meets a fork which sends it to a fixed switch whose exit track leads to the \(\mathbf{R}\)-entry of a one-bit memory. The fork also sends a copy of the locomotive to a filter. At initial time, the filter is mauve, blue, on the side of the \(\mathbf{I}\)-, \(\mathbf{D}\)-track respectively. The information possibly delivered by the filters is gathered by the fixed switch which sits on a track joining both filters; that possible information is conveyed through the mauve tracks of the picture to the four active memory switches displayed in the figure. At initial time, all those active memory switches select the track which the locomotive must follow when it has to increment the register.
Consider the case when the locomotive has to increment the register. The locomotive arrives at the \(\mathbf{R}\)-gate of the one-bit memory \(\mathcal{M}\) sitting in the middle of the figure. The locomotive arrives at \(\mathbf{R}\) through the blue track of the figure, the colour telling us that such a track is used in both cases, whether the register is to be incremented or decremented. If it leaves \(\mathcal{M}\) through \(\mathbf{1}\), it is the confirmation that the locomotive has to increment the register and so it goes to the active memory switch sitting at \(A\), which in its turn sends the locomotive to \(C\). If it leaves the one-bit memory through \(\mathbf{0}\), it means that the previous instruction performed on that register was to decrement it. The locomotive then goes from \(\mathbf{0}\) to \(B\). At \(B\), as the locomotive has to increment the register, the switch selects the track leading from \(B\) to the \(\mathbf{W}\)-gate. Crossing \(\mathcal{M}\) through \(\mathbf{W}\) means that the locomotive rewrites the bit contained in \(\mathcal{M}\) from \(\mathbf{0}\) to \(\mathbf{1}\). Leaving \(\mathcal{M}\) through its \(\mathbf{E}\)-gate, the locomotive goes to \(C\) through a fixed switch sitting on the segment of line \(\sigma\) joining the point \(\mathbf{j}\) to \(C\). Now, the active memory switch at \(C\) selects the \(\mathbf{I}\)-track, so that the locomotive later enters the register. If the locomotive leaves \(\mathcal{M}\) through \(\mathbf{1}\), it is led to \(A\), where the active memory switch sends the locomotive to \(\sigma\), so that it arrives at \(C\), where it goes onto the \(\mathbf{I}\)-track.
Note that at \(C\), even if the previous instruction was to decrement the register, the selected track is the \(\mathbf{I}\)-one. Indeed, if the previous instruction was to decrement the register, the filter closer to the \(\mathbf{I}\)-track entering the structure is blue, so that the information is sent to change the filters at the passive memory switch and at the active memory switches at \(A\), \(B\), \(C\) and \(E\). The information is conveyed by an auxiliary mauve locomotive running on the mauve tracks of the figure. The particularity of the tracks followed by the mauve locomotive is that they consist of segments of lines, while the tracks followed by a blue locomotive, the tracks of the figure in a different colour, consist of arcs of circles. The difference is important: the segments of lines run by the mauve locomotive are shorter than the arcs of circles followed by the blue locomotive. Accordingly, the signal sent to \(A\), \(B\), \(C\) and \(E\) reaches those switches before a blue locomotive crosses any of them.
Presently, consider the case when the locomotive has to perform an instruction which decrements the register. If the previous instruction was to increment the register, the locomotive, after entering \(\mathcal{M}\) through its \(\mathbf{R}\)-gate, leaves \(\mathcal{M}\) through \(\mathbf{1}\). It then reaches \(A\), where the memory switch now selects the track from \(A\) to \(\mathbf{W}\). Accordingly, the locomotive changes the bit contained in \(\mathcal{M}\) from \(\mathbf{1}\) to \(\mathbf{0}\). Leaving \(\mathcal{M}\) through \(\mathbf{E}\), the locomotive joins \(C\), where it is sent onto the \(\mathbf{D}\)-track, as the mauve locomotive sent from the entry arrived at \(C\) before the blue one did. Clearly, the fixed switch sitting between \(B\) and \(C\) is crossed by a possible mauve locomotive before a blue locomotive arrives there. The same observation holds for the switches at \(A\) and at \(B\). If the previous instruction was to decrement the register, no mauve locomotive is sent, as the selected tracks at the switches sitting at \(A\), at \(B\), at \(C\) and at \(E\) are already the right tracks leading to the \(\mathbf{D}\)-track from \(C\).
When the locomotive returns from the register, it arrives at the switch sitting at \(E\). As no other locomotive visited \(\mathbb{D}_{S}\) while the locomotive under consideration performed its instruction on the register, the switch at \(E\) selects the appropriate track, leading either to \(\mathbb{D}_{I}\) or to \(\mathbb{D}_{D}\).
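Behaviourally, \(\mathbb{D}_{S}\) reduces to one bit of memory storing the type of the last operation; a compact sketch in the abstraction used so far, with names of our own:

```python
class DS:
    """Remembers whether the last operation on its register was 'I' or 'D',
    routes the locomotive towards the register and routes the return."""
    def __init__(self):
        self.last_op = "I"              # initially, the switches select the I-track

    def towards_register(self, op):     # op is 'I' or 'D'
        if op != self.last_op:
            self.last_op = op           # the mauve signal re-programs A, B, C, E
        return op                       # the locomotive leaves on the op-track at C

    def back_from_register(self):
        # at E, the selected track leads to D_I or D_D according to last_op
        return "D_I" if self.last_op == "I" else "D_D"

ds = DS()
ds.towards_register("D")
print(ds.back_from_register())          # -> D_D
```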
We adapt the structure described for \(\mathbb{D}_{S}\) to the next structures we study, where it is needed to discriminate between two situations.
Now, we turn to the \(\mathbb{D}_{I}\)-structure.
**The \(\mathbb{D}_{I}\)-structure**
It is crossed by the locomotive before it arrives at \(\mathbb{D}_{S}\) on its way to the register, and it is crossed again after the locomotive has visited \(\mathbb{D}_{S}\), which knows which type of instruction was performed on the register.
The structure consists of as many units as there are instructions incrementing that register, say \(R\), in the program. Each unit is based on a one-bit memory \(\mathcal{M}\) and Figure 19 illustrates such a unit. In that figure, the arrival is illustrated by blue tracks while the way back from the register is illustrated by light purple and red tracks. The working of \(\mathbb{D}_{I}\) is the following. An instruction incrementing \(R\) is connected through a path to a specific unit of \(\mathbb{D}_{I}\); the path goes from the program to the \(\mathbf{W}\)-gate of that unit. At the initial time, the configuration of \(\mathbb{D}_{I}\) is such that all its units contain \(\mathbf{0}\): the switches of the one-bit memory are in the position which, by definition, encodes bit \(\mathbf{0}\). Accordingly, when the locomotive enters the unit, it changes the flip-flop and the memory switches so that, by definition, the memory contains bit \(\mathbf{1}\). The locomotive leaves the memory through the gate \(\mathbf{E}\) and it meets a flip-flop switch at \(A\) which, in its initial position, sends the locomotive to \(R\). Note that when the locomotive leaves \(\mathbb{D}_{I}\), a single unit of the structure contains bit \(\mathbf{1}\) and, through the flip-flop switch at \(A\), selects a path to the program. See the blue track of Figure 19.
**Figure 19**: _The configuration of a unit of the structure which memorises the right incrementing instruction. Note the three crossings in the implementation, note the colours of the tracks explained in the text. In dark purple, we have the tracks which are followed by the locomotive both when it arrives from the program, blue tracks, and when it goes from the \(\mathbf{0}\)-gate to the \(\mathbf{W}\)-one, red tracks._
When the locomotive goes back from the register after having performed its operation, it returns to \(\mathbb{D}_{S}\), which knows that it incremented the register, so that the locomotive is sent to \(\mathbb{D}_{I}\). The locomotive reads the bit stored in \(\mathcal{M}\). If it is \(\mathbf{0}\), the exit through the \(\mathbf{0}\)-gate of \(\mathcal{M}\) leads the locomotive to the next unit, see the light purple track of Figure 19. If it reads \(\mathbf{1}\), it knows that it has reached the appropriate unit. It rewrites the bit of \(\mathcal{M}\), as \(\mathbf{1}\) sends the locomotive to the \(\mathbf{W}\)-gate of \(\mathcal{M}\). From \(\mathbf{E}\), the locomotive joins \(A\), where the flip-flop sends it back to the program, see the red tracks of Figure 19.
the flip-flop sitting there again selects the track to the register. Accordingly, the unit is ready for a new possible visit by the next instruction incrementing the register, whether before it operates on the register or after the operation is completed.
**The \(\mathbb{D}_{D}\)-structure**
Similarly, that structure memorises which instruction required to decrement that register. The structure contains as many units as there are instructions of the program which decrement that register. Each unit also contains a single one-bit memory which holds \(\mathbf{0}\) before a locomotive arrives at the structure. The locomotive enters the structure before visiting \(\mathbb{D}_{S}\) on its way to the register, and it visits the structure again after leaving \(\mathbb{D}_{S}\), which remembers that the locomotive has decremented the register.
**Figure 20**: _The idle configuration of a unit of the structure which memorises the right decrementing instruction. Note that the structure is more complex than that of Figure 19. Note the sketchy representation of the one-bit memory. Note the different colours of the tracks._
We can see on Figure 20 that a unit of \(\mathbb{D}_{D}\) is more complex than that of \(\mathbb{D}_{I}\). Indeed, when a locomotive arrives at a register to decrement it, it may happen that it cannot perform the operation because the register is empty, _i.e._ its first unit already contains \(\mathbf{0}\). In that case, the locomotive leaves the register through a special track called the \(\mathbf{Z}\)-track which does not cross the \(\mathbb{D}_{S}\) attached to the register. The \(\mathbf{Z}\)-track goes directly to the \(\mathbb{D}_{D}\)-structure. The track leaving \(\mathbb{D}_{S}\) after a decrementing instruction is called a \(\mathbf{D}\)-track and it arrives at the \(\mathbb{D}_{D}\)-structure at the same unit as the \(\mathbf{Z}\)-track. Accordingly, in each unit of \(\mathbb{D}_{D}\), it is important to know which track entered the unit after the instruction was performed. That is the reason for the filters displayed on the right-hand side border of the figure. Those filters represent a passive memory switch which allows the unit to discriminate between an arrival from a \(\mathbf{Z}\)-track and one from a \(\mathbf{D}\)-track.
In Figure 20 we also use different colours in order to illustrate the different ways taken by the locomotive. The locomotive arriving from the program is on an orange track and it arrives at a fixed switch whose exit track leads to the **W**-gate of the unit. Since, before the arrival of the locomotive, the bit of the one-bit memory \({\cal M}\) of the unit is **0**, the bit has to be rewritten to **1**, which is why the locomotive arrives from the program at the **W**-gate of the \({\cal M}\) of that unit. Leaving through **E**, the locomotive is sent to a flip-flop switch **ff** at \(A\), see the red track on the figure, and **ff** selects the track leading to the register. After the crossing of \(A\), the flip-flop switch selects the other track, which goes to the active memory switch selecting the return **Z**- or **D**-track to the program.
When the locomotive comes back from the register, it arrives at \(\mathbb{D}_{D}\) either through a **Z**-track, the green track on the figure, or through a **D**-track, the purple track on the figure. The filters displayed on the figure represent a passive memory switch. The mauve locomotive, if one was sent, changes the selection of the active memory switches so that they select the track corresponding to the one observed by the passive memory switch. If the selection already corresponds to the arrival track, no change occurs; otherwise, the change is performed at the passive switch as well as at the active memory switches reached by the mauve locomotive before any blue locomotive arrives at the same switches.
Returning from the register either through the **Z**-track or through the **D**-track, the locomotive arrives at the R-gate of \({\cal M}\) through a common track, blue on the figure. If it reads **0**, it goes to the next unit: from **0**, it goes to \(B\), see the blue track from **0** to \(B\). At \(B\), the active memory switch sends the locomotive to the next unit through the **D**- or the **Z**-track, purple or green on the figure, respectively. If the locomotive reads **1**, it knows that it is at the unit which allows it to return to the right place of the program. But before going there, the locomotive has to rewrite the bit in \({\cal M}\) from **1** to **0**. That is why there is a track from the **1**-gate joining it with **W**, see the red track on the figure. To that purpose the track joins the one from the program to **W** thanks to a fixed switch whose exit track leads to **W**. The locomotive leaves \({\cal M}\) through **E**, from where the track leads it to \(A\) where a flip-flop is sitting, selecting a track to \(C\). Once the flip-flop at \(A\) is crossed, it selects the track to the register again, so that at that moment the locomotive may leave the unit, which is in its **0**-state. When it is at \(C\), the active memory switch sitting there has already been informed whether the instruction arrived from a **Z**- or a **D**-track. Accordingly, the locomotive is sent to the right place of the program: either to the instruction which stands after the just executed one or, if the **Z**-track was used, to a particular instruction, performing in that way the execution of a jump instruction. The **D**- and **Z**-tracks are in light green and darker green respectively on Figure 20.
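Abstracting away the tracks and switches, the bookkeeping performed by a \(\mathbb{D}_{I}\)- or \(\mathbb{D}_{D}\)-structure can be condensed into a short behavioural sketch. The Python fragment below only illustrates the marking and return-dispatch mechanism just described; the function names and the list encoding are ours, not part of the construction:

```python
def mark_caller(units, i):
    """On the way to the register: the unit wired to instruction i is set to 1."""
    assert all(bit == 0 for bit in units)   # the structure is idle
    units[i] = 1

def return_dispatch(units, arrival="D"):
    """On the way back: skip the 0-units, stop at the single 1-unit, reset it
    to 0, and exit towards the program. For D_D, `arrival` is "D" or "Z" and
    decides whether control resumes after the instruction or at its jump target."""
    for i, bit in enumerate(units):
        if bit == 1:
            units[i] = 0                    # rewrite 1 -> 0: the unit is idle again
            return i, arrival
    raise AssertionError("no unit was marked")

units = [0, 0, 0]
mark_caller(units, 1)
assert return_dispatch(units, "Z") == (1, "Z") and units == [0, 0, 0]
```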
#### 2.2.6 Constitution of a register and operating upon it
The implementation of the register requires a special examination. Weak universality means that the initial configuration is infinite but not arbitrary. In the present paper, it will be periodic outside a large ball containing the implementation part of the program and also the first unit of each of the two registers needed for universality, according to Minsky's theorem, see [8]. Each register, in some sense, follows a line, and the construction along each line is periodic, being infinite in one direction.
A register consists of infinitely many units which we may index by \(\mathbb{N}\). Let \(\mathcal{R}\) denote a register. By \(\mathcal{R}(n)\), we denote the \(n^{\text{th}}\) unit. We shall call \(\mathcal{R}(0)\) the first unit of the register. Each unit contains a one-bit memory \(\mathcal{M}\). The memory contains \(\mathbf{0}\) or \(\mathbf{1}\). At each time \(t\) of the computation, there is a number \(c_{t}\) such that the bit of \(\mathcal{R}(n)\) is set to \(\mathbf{0}\) when \(n\geq c_{t}\) and to \(\mathbf{1}\) when \(n<c_{t}\). We say that \(c_{t}\) is the **value** of the register. We also say that it is its **content**. When \(c_{t}=0\) we also say that the register is empty. In that case, the bit in the memory of every unit of the register is set to \(\mathbf{0}\). To the left, a passive memory switch recognises the information telling whether the operation to perform is to increment \(\mathcal{R}\) or to decrement it. That information is transferred to the active memory switches \(S0\), \(S1\) and \(So\) in order to appropriately select the tracks. An incrementing instruction changes the first \(\mathbf{0}\)-bit which it meets to \(\mathbf{1}\) and goes back to \(\mathbb{D}_{S}\). A decrementing instruction detects the first \(\mathbf{0}\)-bit which it meets, goes back to the previous unit where it changes the \(\mathbf{1}\)-bit to \(\mathbf{0}\), and then goes back to \(\mathbb{D}_{S}\). Figure 21 illustrates all those workings, including the case when \(\mathcal{R}\) is empty. In that case, the return track to the previous unit is replaced by the initial part of the \(\mathbf{Z}\)-track.
**Figure 21**: _The idle configuration of a unit of a register. We can see the one-bit memory and the memory switches devoted to the materialisation of the content of the register. The \(\mathbf{Db/Z}\) mention of the figure indicates that in the first unit of the register that track is the beginning of the \(\mathbf{Z}\)-track. In the other units, \(\mathbf{Db}\) indicates the track leading to the previous unit of the register._
Let us describe the working of an incrementing instruction. Figure 21 represents a unit of the register, colouring the tracks in the same way as in the previous figures. The locomotive arrives through the \(\mathbf{Ii}\)-track, see the figure, and reaches R. If it reads \(\mathbf{1}\), leaving \(\mathcal{M}\) through its \(\mathbf{1}\)-gate, it goes to \(So\) where the active memory switch sends it towards the next unit on the **I**-track. If it reads **0**, the locomotive knows that it is the first time it meets **0** since its arrival at the register. Accordingly, it goes to \(S0\), where the active memory switch sends it to **W**. It rewrites the bit of \(\mathcal{M}\) from **0** to **1** and then it goes to **E** to join \(\mathbb{D}_{S}\).
Now consider the working of a decrementing instruction. The locomotive arrives through the **Di**-track, see Figure 21, and again reaches R. If it reads **1**, it leaves \(\mathcal{M}\) through its **1**-gate, then it arrives at \(So\) where the active memory switch sends it onto the **D**-track towards the next unit. If the locomotive reads **0**, it knows that it reached the first cell after the **1**'s realising the value of the register, so that it must rewrite the **1**-bit of the previous unit to **0**. Accordingly, going to \(S0\), the active memory switch sends it onto the track **Db** which goes to the previous unit. In the previous unit, using the same picture of Figure 21, we can see that the **Db**-track arrives at the **W**-gate of the \(\mathcal{M}\) of that unit through a fixed switch whose exit track goes to **W**. The locomotive exits through **E**, running on the track which leads to \(\mathbb{D}_{S}\). Thus, when an operation is performed, the locomotive always returns through a single track going to \(\mathbb{D}_{S}\). There is an exceptional case: when \(\mathcal{R}\) is empty. In that case, the decrementing locomotive reads **0** in the first unit, and the **Db**-track is nothing other than the **Z**-track going into \(\mathbb{D}_{D}\).
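To fix ideas, the register semantics just described can be summarised by a minimal sketch in Python; the class, the finite window of units and the returned track names are illustrative conveniences only:

```python
class Register:
    """Unary register of this subsection: unit n holds 1 iff n < c, c the content."""

    def __init__(self, content=0, window=64):
        # a finite window of the infinite row of units; further units all hold 0
        self.bits = [1] * content + [0] * (window - content)

    @property
    def content(self):
        return self.bits.index(0)      # index of the first 0-bit

    def increment(self):
        i = self.bits.index(0)         # first unit holding 0
        self.bits[i] = 1               # rewrite it to 1
        return "D"                     # return towards D_S on the common track

    def decrement(self):
        i = self.bits.index(0)         # first 0 met by the locomotive
        if i == 0:                     # empty register: leave through the Z-track
            return "Z"
        self.bits[i - 1] = 0           # rewrite the 1 of the previous unit to 0
        return "D"

r = Register(2)
assert r.increment() == "D" and r.content == 3
assert r.decrement() == "D" and r.content == 2
assert Register(0).decrement() == "Z"  # the case triggering a jump instruction
```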
## 3 The table
From the definition of a weighted cellular automaton, the transition function is defined by two data: the current state of the cell and the neighbouring weight of that cell. Accordingly, as already mentioned, the input and the output defining the transition can be put in the form of a table. The first entry is a state, the second one is a non-negative integer.
In order to explain that table, we refer to a table which gathers the maximal amount of information. Table 7 assembles the whole transition function as a table whose columns are headed by explanatory labels: 'n\({}^{\circ}\)' stands for the number of the entry; 'c-YBRMV:n' displays the neighbourhood and the function, where 'c' stands for the current state, 'n' stands for the new one and, under each letter representing a state \(\sigma\), the number of neighbours under \(\sigma\); last but not least, '\(\Sigma\)' stands for the neighbouring weight. The representation of the neighbourhood allows us to know the neighbourhood giving rise to the corresponding weight and also to know how the sum can be decomposed into the weights of the neighbouring states. Table 7 gives the above information in the case when both the current state and the new one are not identical to W.
Table 2 is an extract of Table 7 devoted to the running of a locomotive over tracks and over forks. We can see that the entries of Table 7 split into two cases: when 'c' = 'n' and when 'c' \(\neq\) 'n'. The first case can be decomposed into two sub-cases: when the neighbourhood also does not change and when it does. In the first sub-case we speak of a conservative entry, while in the second one we speak of a witnessing entry. In the case when 'c' \(\neq\) 'n' we speak of a motion entry. In most cases, motion entries deal with cells belonging to the tracks or which are changed by the close occurrence of the locomotive, witnessing entries deal with cells which can see the locomotive during its motion, while conservative entries deal with cells which, together with their neighbourhood, are never changed, or with cells which belong to an idle configuration.
Most often, motion entries obey the following pattern:
\[\begin{array}{ll}\mbox{W,}v{+}4\rightarrow\mbox{B},&\mbox{W,}v{+}29 \rightarrow\mbox{M},\\ \mbox{B,}v{+}12\rightarrow\mbox{R},&\mbox{M,}v{+}12\rightarrow\mbox{R},\\ \mbox{R,}v{+}4\rightarrow\mbox{W},&\mbox{R,}v{+}29\rightarrow\mbox{W},\\ \mbox{W,}v{+}12\rightarrow\mbox{W}&\mbox{W,}v{+}12\rightarrow\mbox{W}\end{array} \tag{4}\]
For the witness cells, the entries obey a pattern similar to (4), except that the state is not changed. There is an additional pattern:
\[\eta,v{+}16\rightarrow\eta,\qquad\eta,v{+}41\rightarrow\eta \tag{5}\]
for a blue and a mauve locomotive, respectively. Those additional formulas are used by cells that can be neighbours of both cells of a locomotive.
Tables 2 up to 6 display the entries mentioned above, together with tables indicating which entries apply to a few selected cells. Later, a condensed table giving the new state for a couple consisting of a state and a neighbourhood weight is given: Table 7. Those tables prove the number of entries for the table mentioned in Theorem 1. The number given in the present paper much improves on that given in [7]. In the present paper, the table does not contain repetitions. If a neighbourhood occurs several times, each occurrence happens with a different current state.
We illustrate those tables by applying the corresponding entries to specific cells. Each cell is taken from a figure of the paper. We indicate times starting from 0, a time at which the cell is idle. We also indicate the evolution in time of the state of the cell, together with the number of the entry of Table 7 which applies to the cell at that time, and the neighbouring weight for the corresponding entries.
We start with cell (3,2) of Figure 5. The application of Tables 2 or 7 is to be found in lines (tr b) and (tr m) for a blue and a mauve locomotive, respectively.
\[\begin{array}{ccccccccc}\mbox{time}&0&1&2&3&4&5&6&7\\ \mbox{state}&\mbox{Y}&\mbox{Y}&\mbox{Y}&\mbox{B}&\mbox{R}&\mbox{Y}&\mbox{Y}& \mbox{Y}\\ \mbox{sum}&2&2&5&13&5&13&2&2\\ \mbox{line}&1&1&3&4&5&6&1&1\end{array}\] (tr b)
\[\begin{array}{ccccccccc}\mbox{time}&0&1&2&3&4&5&6&7\\ \mbox{state}&\mbox{Y}&\mbox{Y}&\mbox{Y}&\mbox{M}&\mbox{R}&\mbox{Y}&\mbox{Y}& \mbox{Y}\\ \mbox{sum}&2&2&30&13&30&13&2&2\\ \mbox{line}&1&1&8&9&10&6&1&1\end{array}\] (tr m)
In those lines, we can see that the front of the locomotive is seen in the cell \(c{-}1\) at time 2, that the front of the locomotive is in the cell \(c\) at time 3, that its rear is at that cell at time 4 while the front is at the cell \(c{+}1\), and that, at time 5, the state of the cell \(c\) is again Y while the rear of the locomotive is seen in the cell \(c{+}1\).
Comparing lines (tr b) and (tr m), we can see the difference in the neighbouring sum entailed by the difference of weight between B and M, 4 and 29 respectively. At time 5, the rear is seen from \(c\), where the same neighbouring sum and the same entry apply. Note that entries 13 and 17 have the same neighbourhood 20100 and the same neighbourhood weight 14, but the current state is different.
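How such a table drives the automaton can be made concrete with a small sketch. Taking the weights displayed later in (6), seven neighbours per cell, and entries 1, 3, 4, 5 and 6 of Table 2, the following Python fragment replays times 2 up to 5 of line (tr b); the dictionary encoding is, of course, only illustrative:

```python
WEIGHT = {"W": 0, "Y": 1, "B": 4, "R": 12, "M": 29, "V": 34}  # weights of (6)

def neighbourhood_weight(neighbours):
    # The neighbouring weight of a cell is the sum of the weights of its 7 neighbours.
    assert len(neighbours) == 7
    return sum(WEIGHT[s] for s in neighbours)

# Entries 1, 3, 4, 5 and 6 of Table 2 as (current state, Sigma) -> new state:
TABLE = {("Y", 2): "Y",   # entry 1: idle track cell
         ("Y", 5): "B",   # entry 3: the blue front arrives
         ("B", 13): "R",  # entry 4: the front is replaced by the rear
         ("R", 5): "Y",   # entry 5: the rear leaves the cell
         ("Y", 13): "Y"}  # entry 6: the rear is seen in a neighbour

def step(state, neighbours):
    return TABLE[(state, neighbourhood_weight(neighbours))]

# Times 2..5 of line (tr b), for a track cell whose two track neighbours
# successively carry the front and the rear of the locomotive:
assert step("Y", ["Y", "B", "W", "W", "W", "W", "W"]) == "B"  # sum 5
assert step("B", ["Y", "R", "W", "W", "W", "W", "W"]) == "R"  # sum 13
assert step("R", ["Y", "B", "W", "W", "W", "W", "W"]) == "Y"  # sum 5
assert step("Y", ["Y", "R", "W", "W", "W", "W", "W"]) == "Y"  # sum 13
```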
\begin{tabular}{r r r r r r r r r r} time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \\ state & Y & Y & Y & B & R & Y & Y & Y & (fk b) \\ sum & 3 & 3 & 6 & 14 & 9 & 25 & 3 & 3 & \\ line & 11 & 11 & 12 & 13 & 14 & 15 & 11 & 11 & \\ time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \\ state & Y & Y & Y & M & R & Y & Y & Y & (fk m) \\ sum & 3 & 3 & 31 & 14 & 59 & 25 & 3 & 3 & \\ line & 11 & 11 & 16 & 17 & 18 & 15 & 11 & 11 & \\ \end{tabular}
**Table 2**: _Table explaining the control table for the idle configurations of the tracks, also across the fork. The lines also deal with the motion of the locomotive both for a blue and a mauve one._
\begin{tabular}{r l r l l r l r l l}
n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. & n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. \\
 & tracks & & & & 10 & 3-10010:1 & 30 & (7,6) & 6 \\
0 & 0-00000:0 & 0 & 0 & 6 & & fork & & & \\
1 & 1-20000:1 & 2 & (1,5) & id. & 11 & 1-30000:1 & 3 & (4,1) & 8 \\
2 & 1-10000:1 & 1 & (1,14) & id. & & blue locomotive & & & \\
 & blue locomotive & & & & 12 & 1-21000:2 & 6 & id. & id. \\
3 & 1-11000:2 & 5 & (7,8) & id. & 13 & 2-20100:3 & 14 & id. & id. \\
4 & 2-10100:3 & 13 & (7,7) & id. & 14 & 3-12000:1 & 9 & id. & id. \\
5 & 3-11000:1 & 5 & (7,6) & id. & 15 & 1-10200:1 & 25 & id. & id. \\
6 & 1-10100:1 & 13 & (7,7) & id. & & mauve locomotive & & & \\
7 & 1-00100:1 & 12 & (7,5) & id. & 16 & 1-20010:4 & 31 & id. & id. \\
 & mauve locomotive & & & & 17 & 4-20100:3 & 14 & id. & id. \\
8 & 1-10010:4 & 30 & (7,8) & id. & 18 & 3-10020:1 & 59 & id. & id. \\
9 & 4-10100:3 & 13 & (7,7) & id. & & & & & \\
\end{tabular}
Entries 14 and 18, which appear at time 4 in lines (fk b) and (fk m), mention the occurrences of the fronts of two locomotives in the neighbourhood of the cell (4,1) of Figures 7 and 8.
Let us now look at the passive fixed switch. Table 3 displays the lines of Table 7 which deal with the passive fixed switch. Here, in addition to the information given in Table 7, we indicate for each entry the cell where its application first appears. Lines (fx b) and (fx m) show us which entries apply to cell 0 when a blue and a mauve locomotive, respectively, runs across the switch. As can be seen, entries applying to a blue locomotive may also apply to a mauve one. For instance, as the central cell almost always remains V, the same entry applies to that cell when it can see the rear of the locomotive in the cell (4,1), since the rear is always R.
\begin{tabular}{l c c c c c c c c c} time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \\ state & V & V & V & R & V & V & V & V & (fx b) \\ sum & 37 & 37 & 40 & 17 & 48 & 37 & 37 & 37 & \\ line & 21 & 21 & 26 & 27 & 34 & 21 & 21 & 21 & \\ time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \\ state & V & V & R & V & V & V & V & V & (fx m) \\ sum & 37 & 37 & 65 & 76 & 48 & 37 & 37 & 37 & \\ line & 21 & 21 & 39 & 43 & 34 & 21 & 21 & 21 & \\ \end{tabular}
**Table 3**: _Table of the entries of Table 7 which deal with the passive fixed switch._
\begin{tabular}{r l r l l r l r l l}
n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. & n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. \\
 & fixed switch & & & & 32 & 3-21001:1 & 40 & (4,1) & 12 \\
19 & 1-30001:1 & 37 & (4,1) & 11 & 33 & 1-10101:1 & 47 & (5,1) & id. \\
20 & 1-20001:1 & 36 & (5,1) & id. & 34 & 5-20101:5 & 48 & 0 & id. \\
21 & 5-30001:5 & 37 & 0 & id. & 35 & 1-20101:1 & 48 & (4,1) & id. \\
22 & 5-00001:5 & 34 & (1,1) & id. & & mauve locomotive & & & \\
 & blue locomotive & & & & 36 & 1-10011:4 & 64 & (3,1) & id. \\
23 & 1-11001:2 & 39 & (3,1) & 12 & 37 & 4-10101:3 & 47 & (4,1) & id. \\
24 & 2-10101:3 & 47 & id. & id. & 38 & 1-20011:4 & 65 & id. & id. \\
25 & 1-21001:2 & 40 & (4,1) & id. & 39 & 5-20011:3 & 65 & 0 & id. \\
26 & 5-21001:3 & 40 & 0 & id. & 40 & 3-10110:1 & 42 & (3,1) & id. \\
27 & 3-11100:1 & 17 & (3,1) & id. & 41 & 1-10110:1 & 42 & (5,1) & id. \\
28 & 1-11100:1 & 17 & (5,1) & id. & 42 & 4-20200:3 & 26 & (4,1) & id. \\
29 & 2-20200:3 & 26 & (4,1) & id. & 43 & 3-10111:5 & 76 & 0 & id. \\
30 & 3-11101:5 & 51 & 0 & id. & 44 & 3-20011:1 & 65 & (4,1) & id. \\
31 & 5-00100:5 & 12 & (1,1) & id. & & & & & \\
\end{tabular}
As can be seen in the table, the same neighbouring weight with the same decomposition but a different current state may apply to different cells: as an example, 48 is the weight for entries 34 and 35, which share the neighbourhood 20101. As mentioned in the table, entry 34 applies to the central cell while entry 35 applies to the cell (4,1). In Figure 12, entry 34 applies when the locomotive has its rear in the cell (4,1) while entry 35 applies to that cell when the rear is in the cell (4,4).
Let us now look at the converters. Table 4 displays the entries of Table 7 together with the cells to which those entries apply. The cells referred to can be seen on Figure 9. We can see in lines (ch b) entry 54, whose neighbouring weight is the maximal one in Table 7. The neighbours are given by 10032, which indicates three M-neighbours and two V-ones contributing 155 to the total neighbouring weight of 156. Lines (ch b) and (ch m) refer to the central cell of Figure 9. In Table 4, we can see, at entry 48, that the neighbourhood of cell 0 is 11022 when the front of a blue locomotive is seen by that cell. The B-front is the second \(\mathbf{1}\). The corresponding situation in the opposite conversion is given by entry 63, giving 12012 as the neighbourhood of cell 0. The M-front is the \(\mathbf{1}\) at the penultimate digit.
\begin{tabular}{r r r r r r r r r r} time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \\ state & Y & Y & Y & M & R & Y & Y & Y & (ch b) \\ sum & 128 & 128 & 131 & 139 & 156 & 128 & 128 & 128 & \\ line & 45 & 45 & 48 & 50 & 54 & 58 & 45 & 45 & \\ time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \\ state & Y & Y & Y & Y & Y & Y & Y & Y & (ch m) \\ sum & 78 & 78 & 106 & 89 & 81 & 89 & 78 & 78 & \\ line & 60 & 60 & 63 & 64 & 69 & 72 & 60 & 60 & \\ \end{tabular}
**Table 4**: _Entries of Table 7 devoted to the converters._
\begin{tabular}{r l r l l r l r l l}
n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. & n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. \\
 & converters & & & & & mauve to blue & & & \\
 & blue to mauve & & & & 60 & 1-22002:1 & 78 & 0 & 9 \\
45 & 1-20022:1 & 128 & 0 & 9 & 61 & 5-21000:5 & 6 & (3,1) & id. \\
46 & 4-10011:4 & 64 & (1,1) & id. & 62 & 2-11001:2 & 39 & (1,1) & id. \\
47 & 5-20010:5 & 31 & (3,1) & id. & 63 & 1-12012:2 & 106 & 0 & id. \\
48 & 1-11022:4 & 131 & 0 & id. & 64 & 2-12102:3 & 89 & 0 & id. \\
49 & 5-11010:5 & 34 & (3,3) & id. & 65 & 5-02100:5 & 20 & (7,1) & id. \\
50 & 4-10122:3 & 139 & 0 & id. & 66 & 5-12000:5 & 9 & (3,1) & id. \\
51 & 4-00021:4 & 92 & (1,1) & id. & 67 & 2-02001:2 & 42 & (1,1) & id. \\
52 & 3-10011:1 & 64 & (6,1) & id. & 68 & 3-11001:1 & 39 & (6,1) & id. \\
53 & 5-10020:5 & 59 & (3,1) & id. & 69 & 3-13002:1 & 81 & 0 & id. \\
54 & 3-10032:1 & 156 & 0 & id. & 70 & 5-11100:5 & 17 & (7,1) & id. \\
55 & 5-10012:5 & 98 & (1,2) & id. & 71 & 2-01101:2 & 50 & (1,1) & id. \\
56 & 4-00111:4 & 75 & (1,1) & id. & 72 & 1-12102:1 & 89 & 0 & id. \\
57 & 5-00120:5 & 70 & (7,1) & id. & & & & & \\
58 & 1-10122:1 & 139 & 0 & id. & & & & & \\
59 & 5-10110:5 & 42 & (7,1) & id. & & & & & \\
\end{tabular}
Table 5 displays the entries of Table 7 devoted to the filters. Beforehand, we display the entries which apply to the central cell, together with the corresponding entry numbers. Lines (ftb b) and (ftb m) show us the entries and the corresponding neighbouring weights applied to cell 0 when a blue and a mauve locomotive respectively comes to a blue filter. Lines (ftm m) and (ftm b) do the same for a mauve and a blue locomotive respectively with respect to a mauve filter. In lines (ftb b) and (ftm m) we can see that the filter lets a locomotive of the same colour cross the filter. Lines (ftb m) and (ftm b) show us that the filter prevents the crossing for a locomotive of the opposite colour.
\begin{tabular}{c c c c c c c c c} time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & \\ state & Y & Y & B & R & Y & Y & Y & (ftb b) \\ sum & 52 & 55 & 63 & 55 & 63 & 52 & 52 & \\ line & 73 & 84 & 91 & 97 & 102 & 73 & 73 & \\ time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & \\ state & Y & Y & Y & Y & Y & Y & Y & (ftb m) \\ sum & 52 & 80 & 63 & 52 & 52 & 52 & 52 & \\ line & 73 & 105 & 102 & 73 & 73 & 73 & 73 & \\ time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & \\ state & Y & Y & M & R & Y & Y & Y & (ftm m) \\ sum & 77 & 105 & 88 & 105 & 88 & 77 & 77 & \\ line & 111 & 118 & 119 & 124 & 127 & 111 & 111 & \\ time & 0 & 1 & 2 & 3 & 4 & 5 & 6 & \\ state & Y & Y & Y & Y & Y & Y & Y & (ftm b) \\ sum & 77 & 80 & 88 & 77 & 77 & 77 & 77 & \\ line & 111 & 105 & 127 & 111 & 111 & 111 & 111 & \\ \end{tabular}
In lines (ftb b) and (ftm m) we can see that the sums 55, 63 and 105, 88 respectively appear twice with different entry numbers: 84 and 97 for sum 55, and 91 and 102 for sum 63 in (ftb b); also 118 and 124 for sum 105, and 119 and 127 for sum 88 in (ftm m). Note that entries 102 and 127 are each applied twice, for the blue and the mauve filter respectively, the second occurrence being for stopping the locomotive of the opposite colour.
Note that in Table 5, several entries are conservative or witnessing. As an example, entries 73 up to 79 are conservative, while entry 76, for instance, witnesses the front of a blue locomotive, which is also the case, later, for cell (1,1) with entry 92. In both cases, in the neighbourhood part of the entries, the witnessing entry has one fewer Y-mark and one more B-mark: we have 31002 and 22002 for entries 74 and 85 respectively, and we have 20020 and 11020 for entries 77 and 90 respectively.
Table 6 displays the last entries of Table 7, which are devoted to the programming of the filters. Lines (ch m.f.) and (ch b.f.) show us which entries apply to the cell (1,1) of a filter, the cell which defines the colour of the filter and which is changed by a mauve locomotive arriving nearby. We can see that two neighbouring sums occur, 103 and 104, both with two distinct entries: 129, 134 and 115, 75 respectively. Entries 129 and 134 change the colour of the filter: entry 129 changes a mauve filter to a blue one while entry 134 changes a blue filter to a mauve one, as mentioned by the explanation of the entry, 4-10003:2 and 2-10003:4 for entries 129 and 134 respectively. Note that the neighbourhoods are the same in both cases. Entries 75 and 115 are conservative: they keep the appropriate colour in the cell (1,1) which holds the colour of the filter. Entry 75 keeps a blue filter while entry 115 keeps a mauve one. The neighbourhoods are the same in both cases in an idle configuration: 20003.
**Table 5**: _Entries of Table 7 devoted to the filters._

\begin{tabular}{r l r l l r l r l l}
n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. & n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. \\
 & blue filter & & & & 101 & 3-10120:3 & 71 & (5,1) & id. \\
 & permitting & & & & 102 & 1-11201:1 & 63 & 0 & 9 \\
73 & 1-21101:1 & 52 & 0 & 10 & 103 & 1-20102:1 & 82 & (6,1) & id. \\
74 & 5-31002:5 & 75 & (7,1) & id. & & stopping & & & \\
\end{tabular}

[MISSING_PAGE_POST]
**Table 6**: _The table displays the entries of Table 7 devoted to the control of the filters._
\begin{tabular}{r l r l l r l r l l}
n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. & n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & c. & F. \\
 & change filters & & & & 132 & 3-10000:1 & 1 & (5,1) & 14 \\
 & mauve to blue & & & & 133 & 1-11002:1 & 73 & (1,2) & id. \\
128 & 1-10012:1 & 98 & (1,2) & 14 & & blue to mauve & & & \\
129 & 4-10003:2 & 103 & (1,1) & id. & 134 & 2-10003:4 & 103 & (1,1) & id. \\
130 & 5-20012:5 & 99 & (7,1) & id. & 135 & 0-01102:1 & 84 & (1,2) & id. \\
131 & 0-00112:1 & 109 & (1,2) & id. & 136 & 5-21002:5 & 74 & (5,1) & id. \\
\end{tabular}
Having scrutinised Tables 2 up to 6, we can see that we have the following maximal coefficients of the weights of the states in the computation of the neighbourhood weight:
\begin{tabular}{r r r r r r} state & W & Y & B & R & M & V \\ rank & 0 & 1 & 2 & 3 & 4 & 5 \\ weight & 0 & 1 & 4 & 12 & 29 & 34 \\ coeff. & \(x\) & 3 & 3 & 2 & 3 & 3 \\ \end{tabular} (6)
where in (6), \(x\) means that the coefficient for W in any entry is the complement to 7 of the sum of the coefficients of the other states.
Let \(w_{i}\) with \(i\in\{0..5\}\) be the weight given to the state of rank \(i\). Let \(\kappa_{i}\) be the maximal coefficient for \(w_{i}\) in Tables 2 up to 6. If we put
\[w_{i}=1+\sum_{0\leq j<i}\kappa_{j}.w_{j} \tag{7}\]
we get a sufficient condition for the uniqueness of the decomposition of any \(n\leq\kappa_{4}.w_{4}\) as
\[n=\sum_{0\leq j<4}\alpha_{j}.w_{j} \tag{8}\]
with \(\alpha_{j}\leq\kappa_{j}\) for \(j\in\{0..4\}\) by taking \(w_{0}=0\) and \(w_{1}=1\).
Not every \(n\) is reached by a sum as in (8) through the decompositions shown in Tables 2 up to 6. Among the 137 entries, there are 69 pairwise distinct values. Accordingly, many neighbouring weights are duplicated, always associated with a different state. Nonetheless, it is worthwhile to set \(w_{5}>w_{4}\), where \(w_{5}\) is the weight of V. Note that 34 is reached by entries 22 and 49, with the same state but with a different decomposition: 00001 and 11010 for entries 22 and 49 respectively. In Table 7, the new state is uniquely defined by the old state and the neighbourhood weight. It is another reason to set \(w_{5}>w_{4}\). The values satisfying (7) are, from 1 to 4: 1, 4, 14 and 30. Note that in Table 7, there are at most 2 occurrences of R when there is at least 1 occurrence of M, so that it is enough to take 12 for R and then 29 for M. Starting from 30, the first value of \(w_{5}>w_{4}\) which makes correct pictures is 34. For smaller values of \(w_{2}\) or \(w_{4}\), there are incorrect pictures. Accordingly, as those pictures are conformal to what is explained in Subsection 2.2, Theorem 1 is thus completely proved. \(\square\)
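As a sanity check on the figures just quoted, the following sketch recomputes, for the first entries of Table 7, the neighbouring weight from the displayed neighbourhood and verifies that the pair (current state, weight) determines the new state without conflict; states are coded by their ranks from (6), and the restriction to entries 0 up to 18 is only for brevity:

```python
# Entries 0-18 of Table 7 as (current state, 'YBRMV' digits, new state, Sigma).
ENTRIES = [
    (0, "00000", 0, 0),  (1, "20000", 1, 2),  (1, "10000", 1, 1),
    (1, "11000", 2, 5),  (2, "10100", 3, 13), (3, "11000", 1, 5),
    (1, "10100", 1, 13), (1, "00100", 1, 12), (1, "10010", 4, 30),
    (4, "10100", 3, 13), (3, "10010", 1, 30), (1, "30000", 1, 3),
    (1, "21000", 2, 6),  (2, "20100", 3, 14), (3, "12000", 1, 9),
    (1, "10200", 1, 25), (1, "20010", 4, 31), (4, "20100", 3, 14),
    (3, "10020", 1, 59),
]
W = [1, 4, 12, 29, 34]   # weights of Y, B, R, M, V from (6); W itself weighs 0

# 1. Each displayed Sigma is the weighted sum of the neighbourhood digits.
for cur, digits, new, sigma in ENTRIES:
    assert sum(int(d) * w for d, w in zip(digits, W)) == sigma

# 2. The pair (current state, Sigma) defines the new state uniquely.
seen = {}
for cur, digits, new, sigma in ENTRIES:
    assert seen.setdefault((cur, sigma), new) == new
```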
\begin{tabular}{r l r r l r r l r}
n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) & n\({}^{\circ}\) & c-YBRMV:n & \(\Sigma\) \\
 & tracks & & 32 & 3-21001:1 & 40 & & mauve to blue & \\
0 & 0-00000:0 & 0 & 33 & 1-10101:1 & 47 & 60 & 1-22002:1 & 78 \\
1 & 1-20000:1 & 2 & 34 & 5-20101:5 & 48 & 61 & 5-21000:5 & 6 \\
2 & 1-10000:1 & 1 & 35 & 1-20101:1 & 48 & 62 & 2-11001:2 & 39 \\
 & blue locomotive & & & mauve locomotive & & 63 & 1-12012:2 & 106 \\
3 & 1-11000:2 & 5 & 36 & 1-10011:4 & 64 & 64 & 2-12102:3 & 89 \\
4 & 2-10100:3 & 13 & 37 & 4-10101:3 & 47 & 65 & 5-02100:5 & 20 \\
5 & 3-11000:1 & 5 & 38 & 1-20011:4 & 65 & 66 & 5-12000:5 & 9 \\
6 & 1-10100:1 & 13 & 39 & 5-20011:3 & 65 & 67 & 2-02001:2 & 42 \\
7 & 1-00100:1 & 12 & 40 & 3-10110:1 & 42 & 68 & 3-11001:1 & 39 \\
 & mauve locomotive & & 41 & 1-10110:1 & 42 & 69 & 3-13002:1 & 81 \\
8 & 1-10010:4 & 30 & 42 & 4-20200:3 & 26 & 70 & 5-11100:5 & 17 \\
9 & 4-10100:3 & 13 & 43 & 3-10111:5 & 76 & 71 & 2-01101:2 & 50 \\
10 & 3-10010:1 & 30 & 44 & 3-20011:1 & 65 & 72 & 1-12102:1 & 89 \\
 & fork & & & converters & & & blue filter & \\
11 & 1-30000:1 & 3 & & blue to mauve & & & permitting & \\
 & blue locomotive & & 45 & 1-20022:1 & 128 & 73 & 1-21101:1 & 52 \\
12 & 1-21000:2 & 6 & 46 & 4-10011:4 & 64 & 74 & 5-31002:5 & 75 \\
13 & 2-20100:3 & 14 & 47 & 5-20010:5 & 31 & 75 & 2-20003:2 & 104 \\
14 & 3-12000:1 & 9 & 48 & 1-11022:4 & 131 & 76 & 5-01001:5 & 38 \\
15 & 1-10200:1 & 25 & 49 & 5-11010:5 & 34 & 77 & 3-20020:3 & 60 \\
 & mauve locomotive & & 50 & 4-10122:3 & 139 & 78 & 5-10002:5 & 69 \\
16 & 1-20010:4 & 31 & 51 & 4-00021:4 & 92 & 79 & 5-00002:5 & 68 \\
17 & 4-20100:3 & 14 & 52 & 3-10011:1 & 64 & 80 & 1-11102:2 & 85 \\
18 & 3-10020:1 & 59 & 53 & 5-10020:5 & 59 & 81 & 1-01002:1 & 72 \\
 & fixed switch & & 54 & 3-10032:1 & 156 & 82 & 5-11001:5 & 39 \\
19 & 1-30001:1 & 37 & 55 & 5-10012:5 & 98 & 83 & 4-00100:4 & 12 \\
20 & 1-20001:1 & 36 & 56 & 4-00111:4 & 75 & 84 & 1-12101:2 & 55 \\
21 & 5-30001:5 & 37 & 57 & 5-00120:5 & 70 & 85 & 5-22002:5 & 78 \\
22 & 5-00001:5 & 34 & 58 & 1-10122:1 & 139 & 86 & 5-01002:5 & 72 \\
 & blue locomotive & & 59 & 5-10110:5 & 42 & 87 & 2-20002:2 & 70 \\
23 & 1-11001:2 & 39 & & & & 88 & 5-11000:5 & 5 \\
24 & 2-10101:3 & 47 & & & & 89 & 2-10202:3 & 93 \\
25 & 1-21001:2 & 40 & & & & 90 & 3-11020:3 & 63 \\
26 & 5-21001:3 & 40 & & & & 91 & 2-11201:3 & 63 \\
27 & 3-11100:1 & 17 & & & & 92 & 2-11003:2 & 107 \\
28 & 1-11100:1 & 17 & & & & 93 & 5-12102:5 & 89 \\
29 & 2-20200:3 & 26 & & & & 94 & 5-00102:5 & 80 \\
30 & 3-11101:5 & 51 & & & & 95 & 3-11102:1 & 85 \\
31 & 5-00100:5 & 12 & & & & 96 & 3-01120:3 & 74 \\
 & & & & & & 97 & 3-12101:1 & 55 \\
 & & & & & & 98 & 2-10103:2 & 115 \\
 & & & & & & 99 & 5-21102:5 & 86 \\
 & & & & & & 100 & 1-10202:1 & 93 \\
 & & & & & & 101 & 3-10120:3 & 71 \\
 & & & & & & 102 & 1-11201:1 & 63 \\
 & & & & & & 103 & 1-20102:1 & 82 \\
\end{tabular}
## 4 Conclusion
Having already discussed the point dealing with states and weights, it is now worth comparing the present paper with previous ones.
In the present paper, I use the same model of the simulation of a register machine by a locomotive running over an appropriate circuit. However, the implementation of the circuit is different in many regards. The way the two types of locomotives are implemented allowed me to go back to an almost direct implementation of crossings. Another important point is the use of the filters. The fact that the filters are directly constructed as programmable devices allowed me to greatly simplify the conception of the various structures implementing the model. Accordingly, we get a more economical solution in terms of the number of cells involved in the structures, and also a more efficient one. The price to pay is that the diagrams are a bit less readable. Maybe another model could be explored.
And so, there are many open questions.
|
2302.09690 | Gravitational Vacuum Condensate Stars | Gravitational vacuum condensate stars, proposed as the endpoint of
gravitational collapse consistent with quantum theory, are reviewed. Gravastars
are cold, low entropy, maximally compact objects characterized by a surface
boundary layer and physical surface tension instead of an event horizon. Within
this thin boundary layer the effective vacuum energy changes rapidly, such that
the interior of a non-rotating gravastar is a non-singular static patch of de
Sitter space with eq. of state p=-rho. Remarkably, essentially this same result
is obtained by extrapolating Schwarzschild's 1916 constant density interior
solution to its compact limit, showing how the black hole singularity theorems
and the Buchdahl compactness bound are evaded. The surface stress tensor on the
horizon is determined by a modification of the Lanczos-Israel junction
conditions for null hypersurfaces, which is applied to rotating gravastar
solutions as well. The fundamental basis for the quantum phase transition at
the horizon is the stress tensor of the conformal anomaly, depending upon a new
light scalar field in the low energy effective action for gravity. This scalar
conformalon field allows the effective value of the vacuum energy described as
a condensate of an exact 4-form abelian gauge field to change at the horizon.
The resulting effective theory thus replaces the fixed constant Lambda of
classical general relativity, and its apparently unnaturally large sensitivity
to UV physics, with a dynamical condensate whose ground state value in empty
flat space is zero identically. This provides both a solution to the
cosmological constant problem and an effective Lagrangian dynamical framework
for the boundary layer and interior of gravitational condensate stars. The
status of present observational constraints and prospects for detection of
gravastars through their gravitational wave and echo signatures are discussed. | Emil Mottola | 2023-02-19T23:09:03Z | http://arxiv.org/abs/2302.09690v2 | # Gravitational Vacuum Condensate Stars
###### Abstract
Gravitational vacuum condensate stars, proposed as the endpoint of gravitational collapse consistent with quantum theory, are reviewed. Gravastars are cold, low entropy, maximally compact objects characterized by a surface boundary layer and physical surface tension, instead of an event horizon. Within this thin boundary layer the effective vacuum energy \(\Lambda_{\rm eff}\) changes rapidly, such that the interior of a non-rotating gravastar is a non-singular static patch of de Sitter space with eq. of state \(p=-\rho\). Remarkably, essentially this same result is obtained by extrapolating Schwarzschild's 1916 constant density interior solution to its compact limit, showing how the black hole singularity theorems and the Buchdahl compactness bound are evaded. The surface stress tensor on the horizon is determined by a modification of the Lanczos-Israel junction conditions for null hypersurfaces, which is applied to rotating gravastar solutions as well. The fundamental basis for the quantum phase transition at the horizon is the stress tensor of the conformal anomaly, depending upon a new light scalar field in the low energy effective action for gravity. This scalar conformalon field allows the effective value of the vacuum energy, described as a condensate of an exact 4-form abelian gauge field strength \(F=dA\), to change at the horizon. The resulting effective theory thus replaces the fixed constant \(\Lambda\) of classical general relativity, and its apparently unnaturally large sensitivity to UV physics, with a dynamical condensate whose ground state value in empty flat space is \(\Lambda_{\rm eff}=0\) identically. This provides both a natural resolution of the cosmological constant problem and an effective Lagrangian dynamical framework for the boundary layer and interior of gravitational vacuum condensate stars. The status of present observational constraints and prospects for detection of gravastars through their gravitational wave and echo signatures are discussed. |
2306.15996 | Flat bands and magnetism in $\mathrm{\mathbf{Fe_4 Ge Te_2}}$ and
$\mathrm{\mathbf{Fe_5GeTe_2}}$ due to bipartite crystal lattices | $\mathrm{Fe_{n=4,5}GeTe_2}$ exhibits quasi-two-dimensional properties as a
promising candidate for a near-room-temperature ferromagnet, which has
attracted great interest. In this work, we notice that the crystal lattice of
$\mathrm{Fe_{n=4,5}GeTe_2}$ can be approximately regarded as being stacked by
three bipartite crystal lattices. By combining the model Hamiltonians of
bipartite crystal lattices and first-principles calculations, we investigate
the electronic structure and the magnetism of $\mathrm{Fe_{n=4,5}GeTe_2}$. We
conclude that flat bands near the Fermi level originate from the bipartite
crystal lattices and that these flat bands are expected to lead to the
itinerant ferromagnetism in $\mathrm{Fe_{n=4,5}GeTe_2}$. Interestingly, we also
find that the magnetic moment of the Fe5 atom in $\mathrm{Fe_5 Ge Te_2}$ is
distinct from the other Fe atoms and is sensitive to the Coulomb interaction
$U$ and external pressure. These findings may be helpful to understand the
exotic magnetic behavior of $\mathrm{Fe_{n=4,5} Ge Te_2}$. | Fuyi Wang, Haijun Zhang | 2023-06-28T08:16:05Z | http://arxiv.org/abs/2306.15996v3 | Flat bands and magnetism in Fe\({}_{4}\)GeTe\({}_{2}\) and Fe\({}_{5}\)GeTe\({}_{2}\) due to bipartite crystal lattices
###### Abstract
Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) exhibits quasi-two-dimensional properties as a promising candidate for a near-room-temperature ferromagnet, which has attracted great interest. In this work, we notice that the crystal lattice of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) can be approximately regarded as being stacked by three bipartite crystal lattices. By combining the model Hamiltonians of bipartite crystal lattices and first-principles calculations, we investigate the electronic structure and the magnetism of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\). We conclude that flat bands near the Fermi level originate from the bipartite crystal lattices and that these flat bands are expected to lead to the itinerant ferromagnetism in Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\). Interestingly, we also find that the magnetic moment of the Fe5 atom in Fe\({}_{5}\)GeTe\({}_{2}\) is distinct from the other Fe atoms and is sensitive to the Coulomb interaction \(U\) and external pressure. These findings may be helpful to understand the exotic magnetic behavior of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\).
## I Introduction
In recent years, Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) have been discovered as van der Waals (vdW) itinerant ferromagnets with a high Curie temperature \(T_{c}\)[1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. They are promising for spintronic applications due to the near-room-temperature ferromagnetism, magnetic anisotropy, and high electric conductivity [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) also exhibit many interesting properties, such as the Kondo effect [23], anomalous Hall effect (AHE) [24; 25], butterfly-shaped magnetoresistance [26], controllable topological magnetic transformations [27], and skyrmionic spin structures up to the room temperature [28; 29]. However, the underlying physics of the magnetic behaviors of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) has not been well understood.
To investigate the electronic structure and the magnetism of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\), we approximately decompose their crystal lattices into three basic layers due to the layered structure. Each basic layer contains at least one Fe layer and one Ge or Te layer, as shown in Fig. 1. Interestingly, we notice that these basic layers of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) can be approximately regarded as bipartite crystal lattices (BCLs) which can be divided into two sublattices with negligible intra-sublattice hopping [30; 31], since the hopping primarily occurs between the Fe and Ge/Te sublattices. We determine that the stacked BCLs in Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) give rise to flat bands [32; 33; 34; 35] which may account for the ferromagnetism observed in these materials. It is worth mentioning that the decomposition of BCLs for Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) is a rough approximation, and the hoppings between adjacent BCLs still require careful consideration.
In this work, we construct model Hamiltonians for the stacked BCLs of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\). We determine that the flat bands can be attributed to the BCLs, based on these model Hamiltonians. We also demonstrate that the itinerant ferromagnetism in these materials arises from the nearly flat bands near the Fermi energy driven by the Coulomb interaction \(U\), which is known as flat-band ferromagnetism [36; 37; 38; 39; 40; 41; 42]. The BCL-induced ferromagnetism primarily depends on the lattice structure, orbitals, and electron filling number. We expect that this conclusion could be extended to other vdW ferromagnets. Furthermore, by combining the model Hamiltonians and first-principles calculations, we find that the magnetic moment of Fe5 in Fe\({}_{5}\)GeTe\({}_{2}\) is sensitive to both \(U\) and external pressures. The pressure-tunable magnetic moment transitions in Fe\({}_{5}\)GeTe\({}_{2}\) might be experimentally observed.
## II Methods
First-principles calculations are carried out using the Perdew-Burke-Ernzerhof-type (PBE) generalized gradient approximation (GGA) [43] of density functional theory (DFT), using the Vienna ab initio simulation package (VASP) [44; 45; 46]. The GGA \(+\) U method with \(U=3.0~{}eV\) is employed to treat Fe's \(d\) orbitals. Self-consistent calculations including the spin-orbit coupling (SOC) are performed. A kinetic energy cutoff of \(500~{}eV\) is used, and a \(10\times 10\times 10\) k-point mesh is taken for the bulk calculations. The inner atomic positions are obtained via full relaxation with a total energy tolerance of \(10^{-6}\) eV. The experimental lattice constants of Fe\({}_{4}\)GeTe\({}_{2}\) (\(a=4.03\) Å and \(c=29.08\) Å) [4] and Fe\({}_{5}\)GeTe\({}_{2}\) (\(a=4.04\) Å and \(c=29.19\) Å) [5] are adopted. The Wannier-based model Hamiltonians are obtained from the projection of the \(p\) orbitals of Ge and Te and the \(d\) orbitals of Fe by employing the WANNIER90 package [47; 48; 49].
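For orientation only, such a setup might be collected along the following lines with ASE's VASP interface. This is a minimal sketch under explicit assumptions — ASE parameter names, a collinear spin-polarised run, and available VASP pseudopotentials — and not the authors' actual input files:

```python
# A sketch of the GGA+U settings described above, expressed through ASE's
# VASP calculator (parameter names follow ASE conventions; all assumptions).
from ase.calculators.vasp import Vasp

calc = Vasp(
    xc="pbe",            # PBE-GGA exchange-correlation functional
    encut=500,           # kinetic energy cutoff, eV
    kpts=(10, 10, 10),   # k-point mesh for the bulk
    ispin=2,             # spin-polarised calculation
    ediff=1e-6,          # total-energy convergence, eV
    ldau=True,           # GGA+U on the Fe d orbitals
    ldautype=2,
    ldau_luj={"Fe": {"L": 2, "U": 3.0, "J": 0.0},
              "Ge": {"L": -1, "U": 0.0, "J": 0.0},
              "Te": {"L": -1, "U": 0.0, "J": 0.0}},
)
# A SOC run would additionally set lsorbit=True (noncollinear), and the
# structural relaxation would be steered by ibrion/isif tags; both omitted here.
```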
## III Crystal structure and orbitals
### The bipartite crystal lattice
Fe\({}_{4}\)GeTe\({}_{2}\) has a space group of \(R\bar{3}m(166)\) that includes an inversion symmetry. The Te (Fe1\({}^{\prime}\), Fe2) atoms are related to the Te\({}^{\prime}\) (Fe1, Fe2\({}^{\prime}\)) atoms via an inversion operation, with the Ge atom serving as the inversion center. The unit cell of Fe\({}_{4}\)GeTe\({}_{2}\) has a septuple-layer structure, as shown in Fig. 1. We can decompose the unit cell of Fe\({}_{4}\)GeTe\({}_{2}\) into three BCLs stacked along the \(z\) direction, denoted as L\({}_{1}\), L\({}_{2}\), and L\({}_{3}\). The L\({}_{1}\) BCL comprises the Fe1 sublattice and the Te sublattice, while the L\({}_{3}\) BCL comprises the Fe1\({}^{\prime}\) sublattice and the Te\({}^{\prime}\) sublattice. The L\({}_{1}\) and L\({}_{3}\) BCLs are related through the inversion symmetry. The L\({}_{2}\) BCL consists of the Fe2 (Fe2\({}^{\prime}\)) sublattice and the Ge sublattice, which can be viewed as a dice lattice [50; 51; 52; 30] with an inversion symmetry, where the Ge atom acts as the inversion center.
Fe\({}_{5}\)GeTe\({}_{2}\) belongs to the \(R3m(160)\) space group. The lattice of Fe\({}_{5}\)GeTe\({}_{2}\) can be obtained by inserting an Fe5 layer between the Fe1\({}^{\prime}\) and Te1 layers of Fe\({}_{4}\)GeTe\({}_{2}\)[4], as depicted in Fig. 1(c, f). The inserted Fe5 layer breaks the inversion symmetry, resulting in the inequivalence of Fe1 and Fe1\({}^{\prime}\), as well as of Fe2 and Fe2\({}^{\prime}\). Consequently, Fe1\({}^{\prime}\) and Fe2\({}^{\prime}\) in Fe\({}_{5}\)GeTe\({}_{2}\) are renamed as Fe4 and Fe3, respectively. The unit cell of Fe\({}_{5}\)GeTe\({}_{2}\) has an octuple-layer structure, which can also be approximately decomposed into three BCLs stacked along the \(z\) direction. The L\({}_{1}\) BCL in Fe\({}_{5}\)GeTe\({}_{2}\) is a dice lattice consisting of the Fe4 and Fe5 sublattice and the Te1 sublattice, though it is just a quasi-BCL due to the non-negligible nearest-neighbour hopping between Fe4 and Fe5. The L\({}_{2}\) BCL in Fe\({}_{5}\)GeTe\({}_{2}\) is a dice lattice but lacks the inversion symmetry. Lastly, the L\({}_{3}\) BCL in Fe\({}_{5}\)GeTe\({}_{2}\) is a honeycomb lattice which is almost identical to the L\({}_{3}\) BCL in Fe\({}_{4}\)GeTe\({}_{2}\).
### Orbitals and the site symmetry
The ferromagnetism in Fe\({}_{n=4,5}\)GeTe\({}_{2}\) is mainly due to the partially filled \(d\) orbitals of Fe, whereas Te and Ge do not exhibit major magnetic behavior. Therefore, it is important to analyze the splitting of Fe's \(d\) orbitals to reveal the underlying mechanism of the ferromagnetism.
Both Fe\({}_{4}\)GeTe\({}_{2}\) and Fe\({}_{5}\)GeTe\({}_{2}\) have the \(C_{3v}\) site symmetry, which has three irreducible representations (irreps): two 1D irreps \(A_{1,2}\) and a 2D irrep \(E\). The \(p\) and \(d\) orbitals can be classified based on the irreps of the site symmetry group. Here, \(p_{z}\) corresponds to the \(A_{1}\) irrep for the \(p\) orbitals of Te and Ge, while (\(p_{x}\), \(p_{y}\)) corresponds to the \(E\) irrep and forms a doublet. Moreover, the \(d\) orbitals of Fe are divided into a singlet \(d_{z^{2}}\) with the \(A_{1}\) irrep and two doublets (\(d_{xz}\), \(d_{yz}\)) and (\(d_{xy}\), \(d_{x^{2}-y^{2}}\)) with the \(E\) irrep. The orbitals in the doublets can be recombined as \(p_{x}\pm ip_{y}\), \(d_{xz}\pm id_{yz}\), and \(d_{x^{2}-y^{2}}\pm id_{xy}\), and renamed according to their quantum numbers of the angular momentum projection operator \(\hat{l_{z}}\) as \(p_{m=\pm 1}\), \(d_{m=\pm 1}\), and \(d_{m=\pm 2}\). Meanwhile, the singlet orbitals are renamed as \(p_{m=0}\) and \(d_{m=0}\), respectively.
Figure 1: Crystal structure and bipartite crystal lattices. (a) Crystal structures of Fe\({}_{4}\)GeTe\({}_{2}\) with primitive lattice cell in black solid line. (b) Top views of three BCLs of Fe\({}_{4}\)GeTe\({}_{2}\), labeled L\({}_{1}\), L\({}_{2}\) and L\({}_{3}\). The first BCL (L\({}_{1}\)) is a honeycomb lattice. The center BCL (L\({}_{2}\)) is a dice lattice. The third BCL (L\({}_{3}\)) is equivalent to the L\({}_{1}\) due to the inversion symmetry. (c) The side view of a septuple layer of Fe\({}_{4}\)GeTe\({}_{2}\). The triangle lattice has three different stacked positions denoted as \(A\), \(B\) and \(C\). (d) Crystal structures of Fe\({}_{5}\)GeTe\({}_{2}\) with primitive lattice cell in black solid line. (e) Top view of three BCLs of Fe\({}_{5}\)GeTe\({}_{2}\). The first BCL (L\({}_{1}\)) and the center BCL (L\({}_{2}\)) are dice lattices. The third BCL (L\({}_{3}\)) is a honeycomb lattice. (f) The side view of the octuple layer of Fe\({}_{5}\)GeTe\({}_{2}\).
## IV Model Hamiltonians
### Construction of Model Hamiltonians
To understand the origin of flat bands, it is essential to formulate a model Hamiltonian. Since the lattice structure of \(\mathrm{Fe_{n=4,5}GeTe_{2}}\) can be viewed as three stacked BCLs, we construct the tight-binding model Hamiltonian \(H_{tot}\) by placing the BCL Hamiltonians \(H_{L_{i}}\) on the diagonal. The general form of the tight-binding model Hamiltonian for \(\mathrm{Fe_{n=4,5}GeTe_{2}}\) is written as,
\[H_{tot}(k)=\left(\begin{array}{ccc}H_{L_{1}}(k)&S_{12}(k)&S_{13}(k)\\ S_{12}^{\dagger}(k)&H_{L_{2}}(k)&S_{23}(k)\\ S_{13}^{\dagger}(k)&S_{23}^{\dagger}(k)&H_{L_{3}}(k)\end{array}\right) \tag{1}\]
where the \(S_{12}\) and \(S_{23}\) represent the hopping between adjacent BCLs which usually have the same order of magnitude as the intra-BCL hoppings, while the \(S_{13}\) between the \(\mathrm{L_{1}}\) and \(\mathrm{L_{3}}\) BCLs is almost zero. Therefore, the Hamiltonians of \(\mathrm{L_{1}}\), \(\mathrm{L_{2}}\), and \(\mathrm{L_{3}}\) BCLs cannot be treated independently. However, the hopping between adjacent BCLs primarily occurs between orbitals along the \(z\) direction, such as the \(p_{z}\) and \(d_{z^{2}}\) with \(m=0\). Therefore, to simplify the model, the orbitals can be categorized into two sets. The first set comprises all the orbitals with \(m=0\), while the second set consists of the remaining orbitals with \(m\neq 0\). By applying a unitary transformation, the original model Hamiltonian is transformed to,
Figure 3: Band structures by model Hamiltonians of \(\mathrm{Fe_{5}GeTe_{2}}\). (a, b) The band structures of \(H_{m=0}\) and \(H_{m\neq 0}\) in red and blue. The dashed gray lines are the band structures calculated by first-principles calculations. (c, d) Band structures by the \(H_{B_{1}}\) for \(A^{\prime}(k)=0\) (c) and \(A^{\prime}(k)\neq 0\) (d). The projections of Fe4 and Fe5 are in blue and red, respectively. (e, f) Band structures by the \(H_{B_{2}}\) for \(A^{\prime}(k)=0\) (e) and \(A^{\prime}(k)\neq 0\) (f). The projections of Fe4 and Fe5 are in orange and green, respectively. (g, h) Band structures by the \(H_{B_{3}}\) for \(A^{\prime}(k)=0\) (g) and \(A^{\prime}(k)\neq 0\) (h). The projections of Fe1 are in purple.
Figure 2: Band structures by model Hamiltonians of \(\mathrm{Fe_{4}GeTe_{2}}\). (a, b) The band structures of \(H_{m=0}\) and \(H_{m\neq 0}\) in red and blue. The dashed gray lines are the band structures calculated by first-principles calculations (density functional theory, DFT). (c, d) Band structures by the \(H_{B_{1}}/H_{B_{3}}\) for \(A^{\prime}(k)=0\) (c) and \(A^{\prime}(k)\neq 0\) (d). \(H_{B_{1}}\) is equivalent to \(H_{B_{3}}\) due to the inversion symmetry. The projections of Fe1 are in green. (e, f) Band structures by the \(H_{B_{2}}\) for \(A^{\prime}(k)=0\) (e) and \(A^{\prime}(k)\neq 0\) (f). The projections of Fe1 are in orange.
\[H_{tot}(k)=\left(\begin{array}{cc}H_{m=0}(k)&S_{m}(k)\\ S_{m}^{\dagger}(k)&H_{m\neq 0}(k)\end{array}\right) \tag{2}\]
where \(H_{m=0}(k)\) and \(H_{m\neq 0}(k)\) are the Hamiltonians for the \(m=0\) orbitals and the \(m\neq 0\) orbitals, respectively. \(S_{m}(k)\) is the hopping matrix between the \(m=0\) and \(m\neq 0\) orbitals. Fig. 2(b) and Fig. 3(b) show that the band structures calculated from \(H_{m\neq 0}(k)\) of Fe\({}_{\text{n=4,5}}\)GeTe\({}_{2}\) capture the main features of the band structures from the first-principles calculations, which validates the partitioning of the orbitals into the \(m=0\) and \(m\neq 0\) sets.
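As a concrete illustration of Eqs. (1) and (2), the sketch below assembles a toy \(H_{tot}\) at one fixed \(k\) from random Hermitian blocks and checks that regrouping the orbitals into the \(m=0\) and \(m\neq 0\) sectors is a spectrum-preserving permutation, i.e. a unitary transformation. The block sizes and the indices of the \(m=0\) orbitals are arbitrary placeholders, not the actual orbital content:

```python
import numpy as np

rng = np.random.default_rng(0)

def hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

# Toy orbital counts for the L1, L2, L3 blocks at one fixed k.
n1, n2, n3 = 6, 10, 6
H1, H2, H3 = hermitian(n1), hermitian(n2), hermitian(n3)
S12 = rng.normal(size=(n1, n2)) + 1j * rng.normal(size=(n1, n2))
S23 = rng.normal(size=(n2, n3)) + 1j * rng.normal(size=(n2, n3))
S13 = np.zeros((n1, n3))                    # L1-L3 hopping is negligible

H_tot = np.block([[H1,           S12,          S13],
                  [S12.conj().T, H2,           S23],
                  [S13.conj().T, S23.conj().T, H3]])

# Hypothetical indices of the m = 0 orbitals; the rest form the m != 0 sector.
m0 = np.array([0, 1, 6, 7, 16, 17])
perm = np.concatenate([m0, np.setdiff1d(np.arange(n1 + n2 + n3), m0)])
H_sorted = H_tot[np.ix_(perm, perm)]        # the unitary (permutation) transform

assert np.allclose(np.linalg.eigvalsh(H_tot), np.linalg.eigvalsh(H_sorted))
```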
Since the hopping between adjacent BCLs is relatively weak for the \(m\neq 0\) orbitals, the \(H_{m\neq 0}(k)\) is given by,
\[H_{m\neq 0}(k)=\left(\begin{array}{ccc}H_{B_{1}}(k)&S_{B_{12}}(k)&0\\ S_{B_{12}}^{\dagger}(k)&H_{B_{2}}(k)&S_{B_{23}}(k)\\ 0&S_{B_{23}}^{\dagger}(k)&H_{B_{3}}(k)\end{array}\right) \tag{3}\]
where \(H_{B_{i}}\) is the Hamiltonian based on the \(m\neq 0\) orbitals of the L\({}_{i}\) BCL, and \(S_{B_{ij}}\) is the hopping between the L\({}_{i}\) BCL and the L\({}_{j}\) BCL, which is negligible (\(S_{B_{ij}}\approx 0\)). Then, \(H_{m\neq 0}(k)\) can be further considered to be made up of the three individual \(H_{B_{i}}\), each of which is written as [31],
\[H_{B_{i}}(k)=\left(\begin{array}{cc}A(k)&S(k)\\ S^{\dagger}(k)&B(k)\end{array}\right) \tag{4}\]
where \(A(k)/B(k)\) is a Hermitian matrix denoting the on-site energies and the intra-sublattice hoppings, and \(S(k)\) denotes the inter-sublattice hopping for each BCL. Since the on-site energies lie on the diagonal, the matrix \(A^{\prime}(k)/B^{\prime}(k)\) obtained after removing the diagonal terms of \(A(k)/B(k)\) represents the intra-sublattice hoppings. As mentioned above, the decomposition of Fe\({}_{\text{n=4,5}}\)GeTe\({}_{2}\) into BCLs is a rough approximation due to the existence of nonzero intra-sublattice hoppings, which lead to the dispersion of the flat bands.
### Flat bands due to BCLs
In general, a BCL Hamiltonian can induce \(N=N_{A}-N_{B}\) flat bands when \(N_{A}>N_{B}\)[31], as shown in Appendix A. Here, \(N_{A/B}\) denotes the number of orbitals present on the \(A/B\) sublattice. The emergence of flat bands can be attributed to the destructive interference of wavefunctions associated with the properties of the BCL [53; 54]. When \(N_{A}>N_{B}\), the hoppings along different directions overlap destructively at the \(B\) sublattice, resulting in \((N_{A}-N_{B})\) states residing solely on the \(A\) sublattice at every momentum \(k\). Since there are no hoppings between states on the \(A\) sublattice of a BCL, these states form \((N_{A}-N_{B})\) flat bands. However, according to the proof in Appendix A, the crystal field splitting of the orbitals and the intra-sublattice hoppings on the \(A\) sublattice may lead to slight bending or loss of degeneracy of the flat bands. We take the impact of the crystal field splitting and of \(A^{\prime}(k)\) on the flat bands into account when performing calculations with these model Hamiltonians.
We first analyze the flat bands according to the BCL Hamiltonians \(H_{B_{i}}\) of Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\), neglecting for the moment the crystal field splitting of the \(d\) orbitals and the intra-sublattice hoppings \(A^{\prime}(k)\) of each BCL. The L\({}_{1}\)/L\({}_{3}\) BCLs of Fe\({}_{4}\)GeTe\({}_{2}\) and the L\({}_{3}\) BCL of Fe\({}_{5}\)GeTe\({}_{2}\) are honeycomb lattices that consist of an Fe sublattice (denoted as the \(A\) sublattice) and a Te sublattice (denoted as the \(B\) sublattice). The \(A\) sublattice comprises four degenerate \(d\) orbitals, while the \(B\) sublattice comprises two degenerate \(p\) orbitals. As a result, the BCL Hamiltonian has \(N_{A}-N_{B}=2\) degenerate flat bands. On the other hand, the L\({}_{2}\) BCL of Fe\({}_{4}\)GeTe\({}_{2}\) and the L\({}_{1}\)/L\({}_{2}\) BCLs of Fe\({}_{5}\)GeTe\({}_{2}\) are dice lattices that consist of a sublattice with two Fe atoms (denoted as the \(A\) sublattice) and a Te/Ge sublattice (denoted as the \(B\) sublattice). The \(A\) sublattice contains eight degenerate \(d\) orbitals, while the \(B\) sublattice contains two degenerate \(p\) orbitals. The BCL Hamiltonian of this dice lattice therefore has \(N_{A}-N_{B}=6\) degenerate flat bands. Due to the relatively localized nature of the \(d\) orbitals, we anticipate that the intra-sublattice hoppings \(A^{\prime}(k)\) have small magnitudes, thereby having a limited impact on the formation of the flat bands.
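The counting argument can be verified numerically in a few lines: for any inter-sublattice hopping matrix \(S(k)\), the kernel of \(S^{\dagger}(k)\) has dimension at least \(N_{A}-N_{B}\), which pins that many eigenvalues to the degenerate on-site energy of the \(A\) sublattice at every \(k\). The sketch below uses an arbitrary, randomly drawn \(S(k)\) — purely illustrative, not the material's hoppings — with the dice-lattice counting \(N_{A}=8\), \(N_{B}=2\):

```python
import numpy as np

rng = np.random.default_rng(1)
NA, NB = 8, 2                      # e.g. eight Fe d orbitals vs two p orbitals
S0 = rng.normal(size=(NA, NB)) + 1j * rng.normal(size=(NA, NB))
S1 = rng.normal(size=(NA, NB)) + 1j * rng.normal(size=(NA, NB))

def H_bcl(k, eps_A=0.0, eps_B=1.0):
    """BCL Hamiltonian [[A, S(k)], [S(k)^+, B]] with A'(k) = 0."""
    S = S0 + S1 * np.exp(1j * k)   # some inter-sublattice hopping; details irrelevant
    A = eps_A * np.eye(NA)         # degenerate on-site energies on sublattice A
    B = eps_B * np.eye(NB)
    return np.block([[A, S], [S.conj().T, B]])

for k in np.linspace(0.0, np.pi, 5):
    evals = np.linalg.eigvalsh(H_bcl(k))
    n_flat = int(np.sum(np.isclose(evals, 0.0, atol=1e-9)))
    assert n_flat >= NA - NB       # six states pinned at eps_A for every k

# Turning on intra-sublattice hopping A'(k) or crystal field splitting on the
# A sublattice lifts this exact degeneracy and bends the nearly flat bands,
# as seen in Figs. 2 and 3.
```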
Based on the model Hamiltonians, the flat bands are calculated as shown in Fig. 2(c, e) and Fig. 3(c, e, g) without the intra-sublattice hoppings in the \(A\) sublattice (\(A^{\prime}(k)=0\)), and in Fig. 2(d, f) and Fig. 3(d, f, h) with the intra-sublattice hoppings included (\(A^{\prime}(k)\neq 0\)). Clear flat bands are visible in Fig. 2 and Fig. 3,
Figure 4: Electronic structures of Fe\({}_{4}\)GeTe\({}_{2}\) calculated by first-principles calculations with \(U=3.0\;eV\). (a) The non-SOC band structure. (b) The SOC band structure. (c) The spin-polarized DOS without SOC. (d) The Fermi surfaces with up spin are in red, while the Fermi surfaces with down spin are in blue.
though \(A^{\prime}(k)\neq 0\) leads to a slight bending of the nearly flat bands. We can see that the dispersion of the bands remains almost unchanged between \(A^{\prime}(k)=0\) and \(A^{\prime}(k)\neq 0\) for the L\({}_{1}\)/L\({}_{3}\) BCLs of Fe\({}_{4}\)GeTe\({}_{2}\) and the L\({}_{3}\) BCL of Fe\({}_{5}\)GeTe\({}_{2}\), whereas this is not the case for the L\({}_{2}\) BCL of Fe\({}_{4}\)GeTe\({}_{2}\) and the L\({}_{1}\)/L\({}_{2}\) BCLs of Fe\({}_{5}\)GeTe\({}_{2}\) [Fig. 2(e, f), Fig. 3(c, d, e, f)], due to the hopping between Fe orbitals in the dice lattices. We find that the bands obtained from the BCL model Hamiltonians with \(A^{\prime}(k)\neq 0\) reproduce the bands obtained from first-principles calculations well, which supports that the flat bands originate from the BCLs of Fe\({}_{n=4,5}\)GeTe\({}_{2}\).
It is worth discussing whether the flat bands of Fe\({}_{n=4,5}\)GeTe\({}_{2}\) are itinerant or local. Flat bands can be classified into two types: trivial flat atomic bands and non-trivial flat bands [53]. Flat atomic bands originate from the localization of orbitals or isolated atoms, resulting in negligible overlaps between atomic wavefunctions [53]. Conversely, non-trivial flat bands emerge from extended wavefunctions with substantial overlaps and hoppings [53], indicating an itinerant character. In the case of Fe\({}_{n=4,5}\)GeTe\({}_{2}\), the significant overlaps and hoppings between the orbitals suggest that their flat bands are itinerant.
### Flat-Band Ferromagnetism
In the absence of spin polarization, all flat bands formed by the \(d\) orbitals of Fe are close to the Fermi energy due to the partial occupation of the \(d\) orbitals. These flat bands result in sharp peaks of the non-spin-polarized density of states (DOS) near the Fermi energy. According to the Stoner theory, these peaks can lead to spontaneous magnetization [55; 56; 57]. The critical condition for the instability is \({U>1/N_{E_{F}}}\)[55], where \({N_{E_{F}}}\) denotes the DOS at the Fermi energy.
As the value of \(U\) increases, the energies of the up-spin states decrease while those of the down-spin states increase, leading to a spin-polarized DOS. Consequently, flat bands near the Fermi energy in non-spin-polarized band structures can give rise to ferromagnetism. The spin-polarized DOS determines the magnetic moment, which can be quantified as \({m=n_{\uparrow}-n_{\downarrow}}\), where \({n_{\uparrow}/n_{\downarrow}}\) represents the number of occupied states with up/down spin. The magnetic moment increases with increasing \(U\), which is confirmed by the results of first-principles calculations [Fig. 5(a) and Fig. 7(a)].
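A minimal self-consistent sketch of this mechanism, with a toy model DOS (all numerical values below are illustrative and not taken from the calculations above), shows how a sharp flat-band peak at the Fermi energy satisfies the Stoner criterion already for small \(U\), and how the converged moment grows with \(U\):

```python
import numpy as np

# Toy DOS: a broad band plus a narrow flat-band peak pinned at E_F = 0.
E = np.linspace(-2.0, 2.0, 4001)
dos = 0.2 + 2.0 * np.exp(-(E / 0.05) ** 2)
dos /= np.trapz(dos, E)                       # normalize to one state per spin

# cumulative integrated DOS, used to count occupied states per spin channel
cum = np.concatenate(([0.0], np.cumsum((dos[1:] + dos[:-1]) / 2 * np.diff(E))))

def occupied(upper):
    """Integrated DOS from the band bottom up to `upper` (linear interpolation)."""
    return np.interp(upper, E, cum)

N_EF = dos[np.argmin(np.abs(E))]
print(f"Stoner threshold 1/N(E_F) = {1.0 / N_EF:.2f}")

for U in (0.3, 0.5, 1.0, 2.0):
    m = 1e-4                                  # small seed magnetization
    for _ in range(1000):                     # fixed-point iteration for m
        # rigid exchange splitting -/+ U*m/2; an even DOS keeps the filling fixed
        m = occupied(U * m / 2.0) - occupied(-U * m / 2.0)
    print(f"U = {U:3.1f}: moment m = n_up - n_dn = {m:.4f}")
```

Below the threshold the seed magnetization decays to zero, while above it the fixed-point iteration converges to a finite moment, mirroring the qualitative \(U\) dependence of Fig. 5(a) and Fig. 7(a).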
## V Electronic structure and magnetic properties
### Fe\({}_{4}\)GeTe\({}_{2}\)
We perform first-principles calculations to investigate the electronic structure and magnetic properties of Fe\({}_{4}\)GeTe\({}_{2}\), employing \(U=3.0\ eV\) to obtain the band structure, DOS, and Fermi surfaces. The results show that the band structures with and without SOC are similar, implying that SOC has a negligible effect on the electronic structure of Fe\({}_{4}\)GeTe\({}_{2}\) [Fig. 4(a,b)]. The spin-polarized DOS is consistent with ferromagnetism [Fig. 4(c)]. The band structure and Fermi surfaces [Fig. 4(d)] indicate that Fe\({}_{4}\)GeTe\({}_{2}\) is a quasi-2D ferromagnetic metal.
By gradually increasing the \(U\), we observe a gradual increase in the magnetic moments of the Fe atoms of Fe\({}_{4}\)GeTe\({}_{2}\). As illustrated in Fig. 5(a), the magnetic moments of Fe1 (Fe1\({}^{\prime}\)) and Fe2 (Fe2\({}^{\prime}\)) surpass \({1.5\ \mu_{B}}\) when \({U=0.0\ eV}\). Furthermore, we identify the presence of nearly flat bands in the non-spin-polarized band structures [Fig. 2] and the corresponding peaks in DOS [Fig. 5(b)]. These sharp peaks near the Fermi energy
Figure 5: Magnetic properties of Fe\({}_{4}\)GeTe\({}_{2}\) calculated by first-principles calculations. (a) The \(U\) dependence of the magnetic moments of the inequivalent Fe atoms. (b) The non-spin-polarized DOS.
Figure 6: Electronic structures of Fe\({}_{5}\)GeTe\({}_{2}\) calculated by first-principles calculations with \(U=3.0\ eV\). (a) The band structure without SOC. (b) The band structure with SOC. (c) The spin-polarized DOS without SOC. (d) The Fermi surfaces with up spin are in red, while the Fermi surfaces with down spin are in blue.
suggest that Fe\({}_{4}\)GeTe\({}_{2}\) exhibits characteristics of an itinerant flat-band ferromagnet [55].
### Fe\({}_{5}\)GeTe\({}_{2}\)
We further perform first-principles calculations to analyze the electronic and magnetic properties of Fe\({}_{5}\)GeTe\({}_{2}\). As shown in Fig. 6 and Fig. 7, the band structures, DOS, and Fermi surfaces are similar to those of Fe\({}_{4}\)GeTe\({}_{2}\). However, there is a significant difference in the magnetic properties of Fe5. For \(U\leq 0.7~{}eV\), the magnetic moments of Fe5 are negligible, while they increase suddenly between \(U=0.7~{}eV\) and \(0.8~{}eV\). We explain this phenomenon based on the band structure of the L\({}_{1}\) BCL, which is a quasi-dice lattice. The energy levels of the Fe5 orbitals are slightly lower than those of Fe4 due to their different coupling to Te1. Bonding and anti-bonding bands are then formed through the hopping between Fe4 and Fe5 orbitals. The anti-bonding band is primarily composed of Fe4 orbitals, whereas the bonding band is dominated by Fe5 orbitals. The Fe4-dominated bands are very close to the Fermi energy, resulting in the spontaneous magnetization of Fe4. As the value of \(U\) increases, the Fe5-dominated flat bands cross the Fermi level, leading to a pronounced enhancement of Fe5's magnetic moment.
We also investigate the effect of external pressure on the magnetic moment of Fe5 in Fe\({}_{5}\)GeTe\({}_{2}\). For simplicity, the cell volume is kept unchanged, so applying pressure along the \(z\)-direction causes stretching in the \(xy\) plane. The compression along the \(z\)-direction is primarily accommodated by the vdW gaps, resulting in negligible alteration of the vertical spacing among atoms within each octuple layer. Consequently, the pressure primarily influences the intra-layer hoppings through the in-plane stretching. The hopping between Fe4 and Fe5 therefore decreases slightly, causing the energy level of Fe5 to approach the Fermi energy, and the magnetic moment of Fe5 increases as the \(xy\) plane stretches. Furthermore, Fig. 8 illustrates that expansion along the \(z\)-direction can lead to a significant reduction of the magnetic moment of Fe5, to almost zero. This pressure-induced modulation of the magnetic moment might be experimentally observable in Fe\({}_{5}\)GeTe\({}_{2}\).
## VI Conclusion
In this study, we investigate the origin of the nearly flat bands and ferrimagnetism in Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\). Our analysis reveals that the lattice structure of these materials can be viewed as three stacked BCLs along the \(z\) direction. The presence of different orbital numbers on two sublattices results in nearly flat bands. We demonstrate that the observed ferromagnetism in Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) arises from these nearly flat bands according to the Stoner theory. By combining model calculations with first-principles calculations, we find that the magnetic moment of Fe5 in Fe\({}_{5}\)GeTe\({}_{2}\) is sensitive to Coulomb interactions \(U\) and external pressure. Notably, pressure-induced transitions in the magnetic moment of Fe5 may be experimentally observed in Fe\({}_{5}\)GeTe\({}_{2}\). The emergence of flat-band ferromagnetism in Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\) predominantly depends on the lattice structure, orbital characteristics, and electron filling number. These findings contribute to our understanding of the electronic and magnetic properties of vdW ferromagnets, specifically Fe\({}_{\rm n=4,5}\)GeTe\({}_{2}\).
###### Acknowledgements.
This work is supported by National Key Projects for Research and Development of China (Grant No.2021YFA1400400 and No.2017YFA0303203), the Fundamental Research Funds for the Central Universities (Grant No. 020414380185), Natural Science Foundation of Jiangsu Province (No. BK20200007), the Natural Science Foundation of China (No. 12074181, No. 12104217, and No. 11834006) and the Fok Ying-Tong Education Foundation of China (Grant No. 161006).
Figure 8: The dependence of the magnetism of Fe\({}_{5}\)GeTe\({}_{2}\) on external pressure calculated by first-principles calculations. (a) The \(U\) dependence of Fe5's magnetic moments under different external pressures. The percentages indicate the proportion of deformation in the \(z\)-direction. (b) The evolution of the critical points of the curves in (a) with external pressure.
Figure 7: Magnetic properties of Fe\({}_{5}\)GeTe\({}_{2}\) calculated by first-principles calculations. (a) The \(U\) dependence of the magnetic moments of the inequivalent Fe atoms. (b) The non-spin-polarized DOS.
## Appendix A A brief proof of flat bands in BCLs
First, we ignore the intra-sublattice hoppings and the crystal field splitting of the orbitals on the \(A\) sublattice, so \(A_{k}=\epsilon I\), where \(I\) is the identity matrix and \(\epsilon\) is the onsite energy of the orbitals on the \(A\) sublattice. We set \(\epsilon\) as the zero of energy:
\[H_{k}=\begin{pmatrix}O&S_{k}\\ S_{k}^{\dagger}&B_{k}\end{pmatrix} \tag{20}\]
Performing a singular value decomposition of the \(N_{A}\times N_{B}\) rectangular matrix \(S_{k}\), we have:
\[S_{k}=W_{k}\Sigma_{k}V_{k}^{\dagger} \tag{21}\]
Here, \(\Sigma_{k}\) is a rectangular diagonal matrix with \(N_{A}-N_{B}\) zero rows:
\[\Sigma_{k}=\begin{pmatrix}\epsilon_{1}&0&0&\dots&0\\ 0&\epsilon_{2}&0&\dots&0\\ 0&0&\epsilon_{3}&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&0\\ 0&0&\dots&0&\epsilon_{N_{B}}\\ 0&0&\dots&0&0\\ \vdots&\vdots&\dots&\vdots&\vdots\\ 0&0&\dots&0&0\end{pmatrix} \tag{22}\]
Then we perform a similarity transformation on \(H_{k}\) as:
\[H_{k}=\begin{pmatrix}W_{k}&O\\ O&V_{k}\end{pmatrix}\begin{pmatrix}O&\Sigma_{k}\\ \Sigma_{k}^{\dagger}&b_{k}\end{pmatrix}\begin{pmatrix}W_{k}^{\dagger}&O\\ O&V_{k}^{\dagger}\end{pmatrix} \tag{23}\]
where \(b_{k}=V_{k}^{-1}B_{k}(V_{k}^{\dagger})^{-1}\). Thus \(H_{k}\) is similar to a matrix that contains \(N_{A}-N_{B}\) zero rows, which implies that \(H_{k}\) possesses at least \(N_{A}-N_{B}\) zero-energy solutions for any \(k\). We conclude that a BCL has at least \(N_{A}-N_{B}\) degenerate flat bands at the onsite energy of the orbitals on the \(A\) sublattice. This proof places no requirements on the form of \(B_{k}\); therefore, intra-sublattice hoppings and crystal field splitting of orbitals on the \(B\) sublattice do not affect the flat bands.
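This zero-mode count can also be checked numerically. The following sketch (with random matrices standing in for \(S_{k}\) and \(B_{k}\) at a few momenta, purely for illustration) verifies that every such bipartite \(H_{k}\) carries at least \(N_{A}-N_{B}\) zero-energy states:

```python
import numpy as np

rng = np.random.default_rng(0)
N_A, N_B = 4, 2                         # more orbitals on A than on B

for _ in range(5):                      # a few random "momenta" k
    S = rng.normal(size=(N_A, N_B)) + 1j * rng.normal(size=(N_A, N_B))
    B = rng.normal(size=(N_B, N_B)) + 1j * rng.normal(size=(N_B, N_B))
    B = B + B.conj().T                  # the B-sublattice block must be Hermitian
    H = np.block([[np.zeros((N_A, N_A)), S],
                  [S.conj().T, B]])     # bipartite H_k with the A-block set to zero
    zero_modes = np.sum(np.abs(np.linalg.eigvalsh(H)) < 1e-8)
    assert zero_modes >= N_A - N_B      # at least N_A - N_B flat-band states
print(f"each random H_k has >= N_A - N_B = {N_A - N_B} zero-energy states")
```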
|
2306.04643 | Abnormal Trading Detection in the NFT Market | The Non-Fungible-Token (NFT) market has experienced explosive growth in
recent years. According to DappRadar, the total transaction volume on OpenSea,
the largest NFT marketplace, reached 34.7 billion dollars in February 2023.
However, the NFT market is mostly unregulated and there are significant
concerns about money laundering, fraud and wash trading. The lack of
industry-wide regulations, and the fact that amateur traders and retail
investors comprise a significant fraction of the NFT market, make this market
particularly vulnerable to fraudulent activities. Therefore it is essential to
investigate and highlight the relevant risks involved in NFT trading. In this
paper, we attempted to uncover common fraudulent behaviors such as wash trading
that could mislead other traders. Using market data, we designed quantitative
features from the network, monetary, and temporal perspectives that were fed
into K-means clustering unsupervised learning algorithm to sort traders into
groups. Lastly, we discussed the clustering results' significance and how
regulations can reduce undesired behaviors. Our work can potentially help
regulators narrow down their search space for bad actors in the market as well
as provide insights for amateur traders to protect themselves from unforeseen
frauds. | Mingxiao Song, Yunsong Liu, Agam Shah, Sudheer Chava | 2023-05-25T15:12:14Z | http://arxiv.org/abs/2306.04643v2 | # Abnormal Trading Detection in the NFT Market
###### Abstract.
The Non-Fungible-Token (NFT) market has experienced explosive growth in recent years. According to DappRadar, the total transaction volume on OpenSea, the largest NFT marketplace, reached 34.7 billion dollars in February 2023. However, the NFT market is mostly unregulated and there are significant concerns about money laundering, fraud and wash trading. The lack of industry-wide regulations, and the fact that amateur traders and retail investors comprise a significant fraction of the NFT market, make this market particularly vulnerable to fraudulent activities. Therefore it is essential to investigate and highlight the relevant risks involved in NFT trading. In this paper, we attempted to uncover common fraudulent behaviors such as wash trading that could mislead other traders. Using market data, we designed quantitative features from the network, monetary, and temporal perspectives that were fed into K-means clustering unsupervised learning algorithm to sort traders into groups. Lastly, we discussed the clustering results' significance and how regulations can reduce undesired behaviors. Our work can potentially help regulators narrow down their search space for bad actors in the market as well as provide insights for amateur traders to protect themselves from unforeseen frauds.
identify suspicious activities. With our research, more characteristics can be used as crucial keys to identify fraudulent behaviors, transactions, and accounts, thus helping marketplaces deliver transparent and liquid pre-trade and post-trade information to all levels of NFT investors. Secondly, regulatory agencies can reference the potential group of wash traders to conduct validation and carry out purposeful research within each of the user behavioral groups we identified. In summary, our research is meaningful in supporting the identification of fraudulent targets in the NFT market while creating a compelling case for improving security scrutiny.
## 2. Literature Review
Some literature provides evidence for the existence of wash trading. Cho et al. found that the LooksRare protocol was in charge of most sales at elevated prices (Cheng et al., 2017). Tariq et al. conducted statistical analyses, such as Benford's law tests and clustering effects, proving the pervasiveness of abnormal prices and automated trades (Tariq et al., 2019).
**Graph-based Approach** The graph-based approach treats user interactions as a network, identifies graph clusters, and detects collusive groups. Das et al. detected malicious trading behaviors by identifying strongly connected components in the user interaction network (Cheng et al., 2017). Von et al. examined illicit trades by looking into clusters of wallets with no obvious position changes after sequential transactions (Von et al., 2018). An innovative graph visualization called NFTDisk was presented by Wen et al. (Wen et al., 2019). The graph-based approach is also useful for detecting wash trades in traditional financial markets. Cao et al. analyzed wash trading patterns using digraphs and dynamic programming (Cao et al., 2019). In another study by Victor et al., the graph-based strongly-connected-component identification method was again used for tracing wash trades on decentralized crypto exchanges (Von et al., 2018).
**Statistical Approach** Statistical approaches analyze abnormal behaviors from a quantitative point of view. For example, Serneels introduced three ways to detect wash trading with criterions such as close-loop token trades, closed-loop value trades, and high transaction volumes (Von et al., 2018). A more mathematically-based procedure was proposed by Pelechrinis et al. who built a regression model for profit prediction and filtered out anomalous transactions that generate above the average profit (Pene et al., 2018). Cong et al. quantified wash trading on cryptocurrency exchanges by exploring first significant-digit distributions, size rounding, and tail distributions of transaction volume (Pennec et al., 2018). Pennec et al. predicted ETH/BTC trading volumes using a regression model on web variables and wallet variables (Pennec et al., 2018).
**Data Mining and Machine Learning Approach** Data mining and machine learning techniques have become increasingly popular for fraud detection in financial markets. Thai et al. employed unsupervised learning methods such as K-means, Mahalanobis distance, and unsupervised SVM to detect anomalous behaviors in the Bitcoin network (Mohanso et al., 2018). Monano et al. also tested K-means and trimmed K-means clustering algorithms on currency, network, and average-neighborhood features of Bitcoin transactions (Mohanso et al., 2018). Finally, Gubran et al.'s paper provided a comprehensive overview of the most state-of-the-art fraud detection techniques used in the general financial market (Gubran et al., 2018). In summary, these techniques can help identify patterns that may not be visible to the human eye, detect subtle changes in transactions, and uncover previously unknown patterns of fraud that may have gone unnoticed using traditional methods.
Overall, previous work mostly focuses on fraud detection in traditional financial markets or cryptocurrencies. Moreover, only a limited number of behavioral patterns of wash trading have been detected in the NFT market, and more intrinsic characteristics remain to be exploited. Finally, cutting-edge data mining and machine-learning techniques have not been fully explored in the field of non-fungible tokens. Their ability to detect complex patterns and support automated decision-making can be beneficial in the exploratory process of abnormality detection in the NFT market.
## 3. Data
In this section, we discuss data collection and preliminary data analysis which serve as the foundation for the quantitative definition of different behavioral patterns.
### Data Collection
Before introducing the dataset we used, we clarify some terms used in Non-Fungible Token trades. NFT Tokens are one-of-a-kind digital assets verified on the blockchain. NFT Collections refer to a collection of unique NFT tokens that are issued by the same artist. Some popular collections in the Art category include CryptoKitties, Azuki, and CryptoPunks (Azuki et al., 2018). The NFT Collections can be traded in NFT marketplaces, which are hubs for buying and selling NFTs. Some popular marketplaces include Opensea, Cryptoslam, and SuperRare (Shen et al., 2019). There are also NFT exchanges where orders can be placed by users and matched or executed by the exchange. As part of the transaction cost, a gas fee is required to compensate for the efforts and resources taken by miners to verify and add the transaction to the blockchain. The NFT buyers need a wallet to store and manage their NFT tokens and the wallet is identified by a unique wallet address. An NFT transaction operated by users can either be a sale, when a buyer purchases an NFT from a seller in exchange for cryptocurrency or other assets, or a transfer, when the owner of an NFT sends it to another wallet address without any exchange of payment.
Our research mainly focuses on transactions that occurred on an NFT marketplace, specifically OpenSea. Because the amount and characteristics of wash trading can differ among collections, we wanted to formulate a representative dataset. Therefore, we first used the Reservoir API to acquire collection-level information for 20,000 collections on the OpenSea marketplace. After eliminating inactive collections by ranking them on their tokenCount and volume, we ended up with 5,000 collections worth analyzing. From this pool of candidates, we randomly selected 100 collections as a representative microcosm of the entire market, containing large, medium, and small collections with different frequencies of suspicious activities. We then used the Moralis API to retrieve the full transaction history of the NFT tokens in those 100 collections since they were minted. For each transaction, the recorded fields include token_address (uniquely identifies an NFT collection), token_id (uniquely identifies an NFT token), from_address (seller's wallet address), to_address (buyer's wallet address), value (transaction value in Ethereum), block_number (block number on
blockchain for saving the transaction record), and block_timestamp (transaction timestamp). In total, we gathered over 1 million transactions with 252,924 distinct wallets. In support of our analysis, we also retrieved historical Ethereum to USD and Gas to Ethereum exchange rate data from Yahoo Finance.
Finally, we conducted data cleaning and removed unnecessary information and noisy data. Data validation was implemented since our dataset consists of data from multiple sources. We also performed data integration and transformation to manage all the data in an SQLite database that can be easily retrieved and analyzed.
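As an illustration, a minimal sketch of how such a transaction store could be organized in SQLite is given below; the column names follow the fields listed above, while the database file, table name, and indexes are our own assumptions:

```python
import sqlite3

con = sqlite3.connect("nft_transactions.db")   # hypothetical database file
con.executescript("""
CREATE TABLE IF NOT EXISTS transactions (
    token_address   TEXT NOT NULL,   -- uniquely identifies an NFT collection
    token_id        TEXT NOT NULL,   -- uniquely identifies an NFT token
    from_address    TEXT NOT NULL,   -- seller's wallet address
    to_address      TEXT NOT NULL,   -- buyer's wallet address
    value           REAL,            -- transaction value in Ethereum
    block_number    INTEGER,         -- block storing the transaction record
    block_timestamp TEXT             -- transaction timestamp
);
-- wallet-level feature extraction repeatedly filters by wallet address
CREATE INDEX IF NOT EXISTS idx_from ON transactions (from_address);
CREATE INDEX IF NOT EXISTS idx_to   ON transactions (to_address);
""")
con.commit()
con.close()
```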
### Exploratory Data Analysis
To get an overall look at the unusual behaviors and irregularities in the NFT market, we performed a preliminary data analysis using traditional methods of abnormality detection. In this step, we do not aim to identify individual abnormal transactions but to find evidence for the existence of abnormal activities. We first examined the first-digit distribution of transaction prices against Benford's law, a common mathematical tool for detecting fraud. We also examined a common pattern in trading, round-number clustering: behavioral studies have found that people tend to use multiples of 10 in decision-making. Lastly, we inspected the distribution of transaction prices, focusing on whether it exhibits fat tails characterized by a power law.
Before taking these three approaches, we first removed all transactions with a price of 0, to which the above methods cannot be applied. To explore the distribution of the first digit of trade prices, we grouped transactions by their price's first significant digit and calculated the percentage of each of the nine first-digit classes. Figure 1 shows that our dataset exhibits a roughly similar exponentially decreasing curve to that of Benford's law. However, to test whether our dataset follows Benford's law, we performed a Chi-squared test with the null hypothesis that the first digits in our dataset follow Benford's law. With nine classes (8 degrees of freedom), the Chi-squared test yields a statistic of 24688, which is significantly larger than the critical value at the 5% significance level. Therefore, the test rejects our null hypothesis.
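The following sketch reproduces this first-digit test; the `prices` array is a hypothetical placeholder standing in for the nonzero transaction prices of the cleaned dataset:

```python
import numpy as np
from scipy.stats import chi2

# `prices`: hypothetical placeholder for the nonzero transaction prices
prices = np.random.default_rng(0).lognormal(mean=0.0, sigma=2.0, size=100_000)

first_digit = np.array([int(f"{p:.10e}"[0]) for p in prices])
observed = np.bincount(first_digit, minlength=10)[1:]        # counts for digits 1..9
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))             # Benford proportions
expected = benford * len(prices)

chi_sq = np.sum((observed - expected) ** 2 / expected)
critical = chi2.ppf(0.95, df=8)                              # nine classes -> 8 dof
print(f"chi^2 = {chi_sq:.1f}, 5% critical value = {critical:.2f}")
print("reject Benford" if chi_sq > critical else "consistent with Benford")
```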
Next, we examined the round number clustering effect. We rounded transaction prices to 0 decimals and grouped transactions by their rounded price. By visual examination of the distribution of rounded prices from 0 to 80, we found that the distribution roughly exhibits a clustering at multiples of 5. However, there are abnormally high frequencies at certain rounded price points. For example, the frequency of 37 ETH is over 30,000, more than 5 times higher than the next highest frequency in the range from 0 to 80.
## 4. Model
To categorize NFT users into groups, we defined the problem as an unsupervised clustering problem with no ground truth. We built representative features and let the model learn to cluster users such that users within each cluster have some convergent behavioral patterns.
### Feature Engineering
Based on the results found in exploratory data analysis, we built patterns that have both statistical and practical significance and can reflect the motivation behind wash trading behaviors. By surveying existing literature that uses machine learning techniques for fraud detection (Bahdan et al., 2017; Chen et al., 2018; Chen et al., 2018), we categorized features into three categories: network features, monetary features, and temporal features.
Network features aim to reflect abnormalities in account interactions. Transaction records can be modeled as a directed multigraph with wallets as nodes and transactions as edges. The direction of an edge follows the flow of the transaction, distinguishing the accounts as buyer and seller. We observed that most accounts in the NFT marketplace make few transactions, spread over a large group of different accounts. In contrast, accounts that make a high number of transactions with a small set of other accounts are more likely to be wash traders, since wash traders profit from actively participating in the market and trading with the same accounts repeatedly. With this observation, the degree of an account and its number of unique connections serve as natural tools to capture the account's interactions in the market.
Monetary features are designed to capture abnormalities reflected in transaction sizes in USD. The absolute value, the average, and the standard deviation of the monetary volume together give a comprehensive view of the flow of capital. The profit actually gained by a trader is also an important factor: as mentioned in the introduction, although not all wash traders are profitable, excessive gain is still a crucial identifier of wash trading behavior. Finally, we describe users' actions relative to market trends and quantify whether there is an obvious difference in behavioral patterns during bear and bull phases of the collections.
Lastly, temporal features summarize a wallet's trading habits along the time dimension. As in other financial markets, special groups of traders may, compared with retail investors, trade with high frequency, regularity, and activeness, characteristics that are reflected by the temporal features we designed.
Table 1 details all the features within each category. The listed features are wallet-based, meaning each of the 252,924 wallets has all features calculated. We standardized the features to zero mean and unit standard deviation. After designing these features, we calculated the correlation within each of the three feature types. This helps us
Figure 1. The first-digit distribution of our dataset vs. the expected distribution according to Benford's law.
eliminate highly correlated features among those we initially designed, using an identification threshold of 0.9.
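A minimal sketch of this standardization and correlation-filtering step is shown below; the feature table and its column names are placeholders, not the actual 252,924 × 26 matrix:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# placeholder feature table (hypothetical column names, random values) standing
# in for the real wallet-feature matrix; "vol_dup" is deliberately redundant
features = pd.DataFrame(rng.normal(size=(1000, 5)),
                        columns=["in_deg", "out_deg", "uniq_in", "uniq_out", "vol"])
features["vol_dup"] = features["vol"] * 1.01 + rng.normal(scale=0.01, size=1000)

standardized = (features - features.mean()) / features.std()  # zero mean, unit SD

corr = standardized.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
print("dropped highly correlated features:", to_drop)         # -> ['vol_dup']
reduced = standardized.drop(columns=to_drop)
```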
### K-Means Clustering
Without ground truth, we want the model to identify potential grouping patterns and understand the underlying structure of the data, making our task an unsupervised clustering problem. The choice of clustering algorithm depends on the requirements of the problem, the size of the data, and the relative performance of each model in practice. Compared to common alternatives such as density-based clustering and hierarchical clustering, K-Means clustering produces the most suitable and explainable partitions for our dataset while remaining computationally fast on our high-dimensional data.
K-means clusters data into K distinct groups by iteratively partitioning the data and adjusting the centroid of each cluster until the centroids converge to stable positions (Beng et al., 2017). In our case, each wallet is represented by a multi-dimensional vector, each dimension being one of the features calculated above. This produces a set of \(m\) points \((x_{1},...,x_{m})\) with \(x_{i}\in\mathbb{R}^{n}\), where \(n=26\).
#### 4.2.1. Selection of K

A key problem with K-Means clustering is how to choose the number K. With the goal of analyzing the customer composition of the NFT market, the number of clusters should be explainable: not so large that it produces too many partitions, yet not so small that no valuable separation is provided. We started with two graphical techniques to determine the optimal number of clusters.
Firstly, the elbow method works by plotting the within-cluster sum of squares (WCSS) against the number of clusters and identifying the elbow point on the plot. The number at the elbow point is the one that produces the optimal outcome, as adding more clusters does not reduce the WCSS significantly. Figure 2 shows the visualization. Though the elbow is not obvious, we can still use a knee locator to find the elbow point at 7; this also reflects that, within a reasonable range, the number of clusters does not affect performance much. Next, we calculated the Davies-Bouldin index (DBI), which rewards high similarity within clusters and low similarity between clusters. The lower the value, the better the clustering result. As shown in Figure 2, the global minimum occurs around 13 and 14. However, considering explainability, dividing the dataset into 13 groups is too many when we need to define a trader type for each group. Thus, we considered
Table 1. Feature Description.

| Category | Feature | Description |
| --- | --- | --- |
| Network | In-Degree | Total number of sellers the wallet has interacted with as a buyer |
| Network | Out-Degree | Total number of buyers the wallet has interacted with as a seller |
| Network | Unique In-Degree Ratio | Total number of distinct sellers the wallet has interacted with divided by in-degree |
| Network | Unique Out-Degree Ratio | Total number of distinct buyers the wallet has interacted with divided by out-degree |
| Monetary | Total In-transaction Volume | Total volume of transactions with the wallet as a buyer in USD |
| Monetary | Total Out-transaction Volume | Total volume of transactions with the wallet as a seller in USD |
| Monetary | Average In-transaction Volume | Average amount bought in USD |
| Monetary | Average Out-transaction Volume | Average amount sold in USD |
| Monetary | SD of In-transaction Volume | Standard deviation of volume bought in USD |
| Monetary | SD of Out-transaction Volume | Standard deviation of volume sold in USD |
| Monetary | Profit from Transfers | Profit gained from transfers |
| Monetary | Profit Ratio | Sell price minus buy price divided by buy price for all non-zero valued sells |
| Monetary | Transfer Ratio | Number of transfers a wallet made divided by the total number of transactions |
| Monetary | Relative-Sell | Average ratio of selling price to collection EMA7\* selling price |
| Monetary | Relative-Buy | Average ratio of buying price to collection EMA7 buying price |
| Temporal | In-transaction Interval | Average time interval in days between in-transactions |
| Temporal | Out-transaction Interval | Average time interval in days between out-transactions |
| Temporal | Diff Time Interval | Difference between the in-transaction and out-transaction intervals of a wallet |
| Temporal | Max Trans | Maximum number of transactions per day |
| Temporal | Avg Trans | Average number of transactions per day |
| Temporal | SD Trans | Standard deviation of the number of transactions per day |
| Temporal | Avg Minted Days | Average number of days since the item was minted, over all transactions of a wallet |
| Temporal | Market-Trend Buy | Activeness of a wallet during bear and bull markets† with regard to buying events |
| Temporal | Market-Trend Sell | Activeness of a wallet during bear and bull markets with regard to selling events |
| Temporal | Buy-ATR | Activeness of a wallet during volatile versus stable markets‡ with regard to buying events |
| Temporal | Sell-ATR | Activeness of a wallet during volatile versus stable markets with regard to selling events |

\* The 7-day exponential moving average (EMA7) reflects the short-term heat in the market.
† The state of a market is determined using the 7-day exponential moving average of the transaction price.
‡ The volatility of a market is measured by the average true range (ATR).
the other local minimum which occurs around number 8. Finally, we used the silhouette coefficient to select the optimal number of clusters. It measures how similar an object is to data points within its own cluster compared to other clusters. Higher values indicate better clustering results since it indicates a strong separation between clusters. In order to further filter the candidates provided by the previous two methods, we used the silhouette coefficient to evaluate candidates 7, 8, and 9. As shown by our calculation, these three ways of partitioning have corresponding coefficients of 0.3, 0.29, and 0.28.
In summary, combining the results of these three methods with the fact that the three candidate divisions all generate groups containing similar sets of data points, we chose K to be 7. This provides sufficiently fine divisions while avoiding overly delicate splits of the general market.
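A compact sketch of the three diagnostics used for this selection is given below; `X` is placeholder data standing in for the standardized wallet-feature matrix:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, silhouette_score

X = np.random.default_rng(0).normal(size=(3000, 26))   # placeholder feature matrix

# WCSS (elbow method) and Davies-Bouldin index over a range of cluster counts
for k in range(2, 15):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(f"k={k:2d}  WCSS={km.inertia_:10.1f}  "
          f"DBI={davies_bouldin_score(X, km.labels_):.3f}")

# silhouette comparison restricted to the shortlisted candidates
for k in (7, 8, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(f"k={k}: silhouette = {silhouette_score(X, km.labels_):.3f}")
```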
## 5. Results
Lacking a ground-truth structural analysis of the NFT market, we used machine-learning evaluation techniques to assess the stability and validity of the algorithm, and then, through visual, statistical, and cluster analysis, we assigned interpretable labels to each user group.
### Result Validation
To assess the performance of the model, we first performed cross-validation. We shuffle-split the dataset into 10 samples, ran the trained K-Means model on each subset, and compared two clustering evaluation indices, the Sum of Squared Errors (SSE) and the DB-index. We observed that the model performs stably across the 10 subsamples and that the train and test portions of each split generate clusters of similar quality. No overfitting is present in our model.
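The stability check described above can be sketched as follows (again with placeholder data in place of the real feature matrix):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score
from sklearn.model_selection import ShuffleSplit

X = np.random.default_rng(1).normal(size=(3000, 26))   # placeholder features

splitter = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
for i, (train, test) in enumerate(splitter.split(X)):
    km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X[train])
    labels = km.predict(X[test])
    sse = ((X[test] - km.cluster_centers_[labels]) ** 2).sum()   # test SSE
    dbi = davies_bouldin_score(X[test], labels)                  # test DB-index
    print(f"split {i}: test SSE = {sse:.0f}, test DBI = {dbi:.3f}")
```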
### Visual Analysis
To visualize such a high-dimensional dataset, we employed Principal Component Analysis (PCA) to find the directions that explain the largest variance. Setting the number of principal components to two for simpler visualization, we found a cumulative explained variance of 25.66%. Figure 3 shows all data points plotted along the two principal components. Class 0, in green, is located in the upper left with high values of both PC1 and PC2. Class 3, in black, tends to have a high value of PC1, and Class 2, in light blue, tends to have a high value of PC2. Class 1, the majority class in red, has low values of both PC1 and PC2. Although the clusters are not clearly separated, this may be due to the mixed nature of NFT traders, which makes it hard to set distinguishable bounds. What matters more is to assign meaningful labels to these clusters by examining the statistics in the next section.
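For reference, the projection used for Figure 3 amounts to the following sketch (placeholder data; on the real features the two components explain about 25.66% of the variance):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(2).normal(size=(3000, 26))   # placeholder features

pca = PCA(n_components=2)
coords = pca.fit_transform(X)                          # 2-D projection for Figure 3
print("cumulative explained variance:",
      round(pca.explained_variance_ratio_.sum(), 4))
# coords[:, 0] and coords[:, 1] are then scattered, colored by cluster label
```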
### Statistical Analysis
In this section, we compare feature statistics across the seven clusters in order to find quantitative characteristics that define particular groups. Among the seven clusters, there are two small clusters, two medium ones, and three relatively large ones. We then performed statistical comparisons among the clusters. Given the large number of features and the multiple clusters, we used the Exploratory Platform for data analysis and visualization. We first constructed radar maps for each type of feature to reflect the relative characteristics of the clusters; the mean value of each feature per class is shown in the map. Taking the temporal-feature radar map as an example, Class 4 is highly skewed and greatly stretches the map, so we excluded Class 4 and compared the remaining classes on roughly the same scale. Class 3 then shows a large mean in features related to the number of daily transactions, while Class 2 stands out in in-transaction interval and diff time interval. Class 5 also yields a below-average diff time interval and above-average in/out-transaction intervals.
Besides the radar graphs, we also analyzed boxplots of individual features, with which we took variance and outliers into account instead of simply comparing means. These statistical observations provide clues for forming definitive labels for all clusters, explaining user behavioral patterns in NFT-market terms, and identifying potential wash-trader groups in the next section.
### Cluster Analysis
Building on the aforementioned analyses, we also translated the observations into explainable facts understandable by actual participants in the NFT market. Based on the visual and statistical analysis, we summarized the unique properties of each class and identified each class with a specific type of trader in the market.
Figure 3. PCA Visualization of the Clustering Result.
Figure 2. WCSS and DBI vs. Number of Clusters.
**General Market: Class 1 (137,623 accounts)**
Class 1 with 137,623 accounts represents the market majority and serves as the benchmark since it contains no skewed features.
**Hodler: Class 6 (96,420 accounts)**
The 96,420 wallet addresses in Class 6 are identified as hodlers, a jargon term for traders who buy and hold an asset for a long time. This group of traders essentially invests, never sells, and believes in the asset's long-term value. From our analysis, the most direct evidence is that all these accounts have zero out-degree, meaning that they never sell. They also tend to invest right after the asset is initially minted.
**Inactive Accounts: Class 2 (5,145 accounts)**
Class 2 is labeled as inactive users. Their trading pattern is characterized by large intervals between buy activities and between buy and sell activities, meaning that these users buy infrequently and, once they buy, rarely trade the products they own.
**Institutional Accounts: Class 4 (7 accounts)**
Institutional accounts manage a larger amount of money and assets than general retail investors. This cluster of accounts was clearly identified in the early phase of the analysis, as they show outstanding values in many features, and no matter what cluster number we input into the K-Means model, these seven accounts are always grouped together. These seven accounts generate all the outlier values for the in-degree, out-degree, unique-in, and unique-out degrees, as well as the count of transfer activities. Given the limited number of samples in this set, we investigated the individual account activities on Etherscan, a blockchain analytics platform for Ethereum. Among these seven accounts are institutional accounts of NFT collections such as AI Cabone, Doodles, and Space Doodles, and of NFT marketplaces such as Gem Swap and Nimbus.
**Collectors: Class 0 (122 accounts)**
This relatively small group of accounts is identified as collectors, who establish values and ideas through their purchases and influence the whole NFT market. Unlike speculators, who enter the market just to make money, collectors look for NFT collections with a unique narrative and appreciate the artistic value of the asset. Based on our analysis, they have high buy and sell volumes, relatively large intervals between buy and sell activities, and tend to buy when a collection is quite mature. They also make high profits by setting the selling price above the collection average. All of this matches the general understanding of collectors, who need time to evaluate an asset with aggregated insights and analysis.
**Wash Traders: Class 3 and Class 5 (13,607 accounts)**
Finally, two classes remain as candidates for wash traders: Class 3 and Class 5, comprising 13,607 addresses. No clear labels can be given to these traders, but they have characteristics that match our understanding of wash trading behavior. We found that entities in Class 3 tend to have high in- and out-degrees, meaning that they trade actively but likely with the same set of accounts. This can be explained by the fact that wash traders always trade with the same set of subaccounts, either to inflate the price or the trading volume. They also have a high transfer ratio and a large number of transactions per day, validating the previous finding that most of their actions are transfers among subaccounts. Finally, they tend to buy and sell when the market is relatively stable, since a volatile market is riskier and thus harder for wash traders to profit from. On the other hand, Class 5, with 11,359 wallets, tends to buy and sell in high volume and shows large intervals between consecutive in- and out-transactions. Its members tend to buy when a collection is relatively mature, meaning that the collection was minted a long time ago. These properties are identifiable features of wash traders.
## 6. Conclusion
Based on the seven clusters derived from K-Means, five are marked with specific labels, while the other two are potential wash-trader candidates, leading to an estimated wash-trader percentage of 5.38% in the NFT market. Our analysis is helpful for regulatory agencies since we provide a structural analysis of the whole NFT market; based on the quantitative features we designed, regulators can focus on only a subset of the general market when constructing supervision rules. Open-source platforms can also be built to provide our findings to the general public and help them make comprehensive judgments. In summary, the research method we have taken can serve as an inspiration for more researchers to conduct in-depth analyses.
There are still some limitations of our work that can be addressed in future studies. Firstly, the lack of ground truth introduces uncertainty and bias into the evaluation. Future research could resort to authoritative resources, similar studies, or open-source datasets to validate our results or provide more detailed divisions. Secondly, beyond identifying wash-trading accounts, another potential improvement is to detect wash-trading transactions, which better reflect the flow of abnormal activity. Finally, since we only took wash traders as our anomaly-detection target, other illicit activities such as money laundering, pump-and-dumps, and rug-pulls can be analyzed in similar ways.
With the conclusion of this paper, we hope to attract more people who are interested in protecting the NFT market order and increasing the credibility and stability of the crypto market, with the goal of formalizing a more sophisticated and mature marketplace.
Figure 4. Radar Map for Temporal Features.
### Acknowledgement
We gratefully thank Andrew Hornback for his helpful advice during our research process. We sincerely thank peers from the Georgia Tech FinTech Lab for their extensive support. We also thank participants of the Web3 ATL Conference and the Georgia Tech UROP Symposium for their invaluable feedback.
|
2310.08947 | On network dynamical systems with a nilpotent singularity | Network dynamics is nowadays of extreme relevance to model and analyze
complex systems. From a dynamical systems perspective, understanding the local
behavior near equilibria is of utmost importance. In particular, equilibria
with at least one zero eigenvalue play a crucial role in bifurcation analysis.
In this paper, we want to shed some light on nilpotent equilibria of network
dynamical systems. As a main result, we show that the blow-up technique, which
has proven to be extremely useful in understanding degenerate singularities in
low-dimensional ordinary differential equations, is also suitable in the
framework of network dynamical systems. Most importantly, we show that the
blow-up technique preserves the network structure. The further usefulness of
the blow-up technique, especially with regard to the desingularization of a
nilpotent point, is showcased through several examples including linear
diffusive systems, systems with nilpotent internal dynamics, and an adaptive
network of Kuramoto oscillators. | Hildeberto Jardón-Kojakhmetov, Christian Kuehn | 2023-10-13T08:31:53Z | http://arxiv.org/abs/2310.08947v1 | # On network dynamical systems with a nilpotent singularity
###### Abstract
Network dynamics is nowadays of extreme relevance to model and analyze complex systems. From a dynamical systems perspective, understanding the local behavior near equilibria is of utmost importance. In particular, equilibria with _at least_ one zero eigenvalue play a crucial role in bifurcation analysis. In this paper, we want to shed some light on nilpotent equilibria of network dynamical systems. As a main result, we show that the blow-up technique, which has proven to be extremely useful in understanding degenerate singularities in low-dimensional ordinary differential equations, is also suitable in the framework of network dynamical systems. Most importantly, we show that the blow-up technique preserves the network structure. The further usefulness of the blow-up technique, especially with regard to the desingularization of a nilpotent point, is showcased through several examples including linear diffusive systems, systems with nilpotent internal dynamics, and an adaptive network of Kuramoto oscillators.
Nilpotent singularities; Network dynamical systems; Blow-up method; Geometric desingularization.
## 1 Introduction
We consider a networked system of \(N\)_scalar_ nodes \(x_{i}=x_{i}(t)\in\mathbb{R}\):
\[\dot{x}_{i}=f_{i}(x_{i},\mu_{i})+\sum_{j=1}^{N}w_{ij}h_{ij}(x_{i},x_{j},\lambda _{ij}),\qquad i=1,\ldots,N, \tag{1.1}\]
where \(f_{i}\) and \(h_{ij}\) are smooth functions with some "internal" parameters \(\mu_{i}\in\mathbb{R}^{u_{i}}\), "interaction parameters" \(\lambda_{ij}\in\mathbb{R}^{l_{ij}}\), and "weights" \(w_{ij}\in\mathbb{R}\). We refer to \(f_{i}\) as _the internal dynamics_ and to \(h_{ij}\) as _the interaction_. The topology of the network is encoded in the \(w_{ij}\), which are the coefficients
of a weighted adjacency matrix. In compact form, we can write (1.1) as
\[\dot{x}=F(x,\mu)+H(x,w,\lambda), \tag{1.2}\]
where \(F\) includes all the internal dynamics and \(H\) all the interactions; here \(x\in\mathbb{R}^{N}\), \(\mu\in\mathbb{R}^{u_{1}+\cdots+u_{N}}=\mathbb{R}^{u}\), \(w\in\mathbb{R}^{N^{2}}\) and \(\lambda\in\mathbb{R}^{l}\) with \(l=\sum_{i,j=1}^{N}l_{ij}\). We shall say that a system _has a network structure_[26, 30, 34] if it can be (re-)written in the form (1.1), or equivalently (1.2). The main idea is that one can clearly distinguish the internal, or uncoupled, dynamics from the network interaction. In the rest of this paper, we shall concentrate on studying the dynamics near a nilpotent equilibrium point of (1.1). We recall that given a differential equation \(\dot{x}=f(x)\), \(x\in\mathbb{R}^{n}\), an equilibrium point \(x^{*}\) is said to be nilpotent if the eigenvalues of the Jacobian \(\mathrm{D}_{x}f(x^{*})\) are all zero.
The main conclusion drawn from the analysis presented in this paper is that _the blow-up transformation preserves the network structure_. We point out that, a priori, there is no guarantee for a coordinate change to preserve any sort of structure. This can be assessed by simply attempting a linear transformation on e.g. (1.2). The blow-up (see section 2.1) is a singular coordinate transformation that has been extensively used to desingularize the local dynamics of (mostly) low dimensional differential equations. Some further important observations are the following (for the details see section 2):
* A node-directional blow-up induces dynamics on the "blown-up" edges. However, these dynamics occur at higher orders. That is, up to leading order terms, the blown-up network is still static, see section 2.3.
* A parameter-directional blow-up is, essentially, a rescaling such that a distinguished parameter is set to 1 and the rest are kept small, see section 2.4.
* The blow-up seems to be especially useful for slowly adaptive networks, where, for example, an edge-directional blow-up leads to a static network with a distinguished edge fixed to 1. Moreover, the adaptation rule that defines the dynamics of such distinguished edge is visible, globally, in the whole network, see section 2.5.
Before going into the technical details, let us motivate our interest with some examples.
### Motivating examples
In this section, we motivate considering networked systems with nilpotent equilibria through some examples.
**Nilpotent internal dynamics:** Consider a weakly coupled network of the form (1.1) given by
\[\dot{x}_{i}=f_{i}(x_{i},\mu_{i})+\varepsilon\sum_{j=1}^{N}w_{ij}h_{ij}(x_{i},x_{ j},\lambda_{ij}),\qquad i=1,\ldots,N,\]
where \(0<\varepsilon\ll 1\), and such that there is a fixed set of parameters \(\mu_{i}^{*}\), and a point \(x^{*}=(x_{1}^{*},\ldots,x_{N}^{*})\), such that \(f_{i}(x_{i}^{*},\mu_{i}^{*})=\dfrac{\partial f_{i}}{\partial x_{i}}(x_{i}^{*},\mu_{i}^{*})=0\). In this case, the point \(x^{*}\) is nilpotent for \(\varepsilon=0\) and an approach to investigate the dynamics for \(\varepsilon\) small could be via the blow-up technique, as we shall present in this paper. See a concrete example in section 3.2.
**Remark 1**: _Even though we focus on networks of scalar nodes, dynamic networks with nilpotent internal dynamics frequently appear once the nodes are considered multidimensional. For example, one may consider coupled neuronal models [32] where each neuron, modeled e.g. via the Hodgkin-Huxley model or its simplifications, contains nilpotent points in its internal dynamics. In a more general setting, nilpotent Hopf bifurcations have been considered for coupled cell networks [8], see also [12, 13, 24]. In such a case, an adapted version of [14] together with the blow-up analysis considered here could be applicable as well._
**A class of nonlinear consensus protocols:** Consider the general consensus protocol
\[\dot{x}=-L\left(Ax+\sum_{i=2}^{p}F_{i}(x)\right),\]
where \(x\in\mathbb{R}^{N},\ p\geq 2,\ L\) is a Laplacian matrix, \(A\) is a non-singular matrix and \(F_{i}\) is a homogeneous polynomial vector field of degree \(i\). The classical consensus protocol is obtained when \(A=I\) and \(F_{i}=0\) for all \(i\geq 2\). On the other hand, if \(A=0\) we immediately have that the origin is nilpotent. Furthermore, even if one simply considers \(\dot{x}=-LF_{2}(x)\), any \(x^{*}\) such that \(F_{2}(x^{*})\in\ker L\) would correspond to a nilpotent point. Since the kernel of \(L\) is
non-trivial, nilpotent points become relevant in this situation. See [5] for a particular case study, and [16] for a similar, but adaptive, example.
**Canceling linear dynamics:** Consider (1.1) in the particular matrix form
\[\dot{x}=Ax+Bx+\cdots,\]
where \(A\) is diagonal, accounting for the (leading part of the) internal dynamics, and \(B\) is a matrix describing the (leading part of the) interaction; the dots stand for higher-order terms1. One can then realize that, upon variation of parameters, some components of these linear parts may "cancel" in the sense of having a nilpotent form. Clearly, this does not require \(A=-B\) in general. For example, consider the homogeneous case where \(A=aI_{N}\) with \(a\in\mathbb{R}\). So, it suffices to notice that if \(T\) is the (complex) matrix that brings \(B\) into its Jordan canonical form, then the transformation \(x\mapsto T^{-1}x\) implies \(T(aI_{N}+B)T^{-1}=aI_{N}+J_{B}\), where \(J_{B}\) is the Jordan canonical form of \(B\), and is upper-triangular. In turn, \(aI_{N}+J_{B}\) can be nilpotent, or at least have some zero eigenvalues, for \(B\neq-A\). For an example see section 3.1, and [27] for a related scenario.
Footnote 1: Throughout this document, "higher-order terms" means terms of order \(\mathcal{O}(|x|^{k})\) as \(|x|\to 0\) for some \(k>1\) to be specified when appropriate.
**(Slowly) adaptive networks:** As in the classical (low-dimensional) setting, nilpotent singularities are extremely relevant for slow-fast systems [21, 36]. Consider the (slowly) adaptive dynamic network [3, 31]
\[\dot{x}_{i} =f_{i}(x_{i},\mu_{i})+\sum_{j=1}^{N}w_{ij}h_{ij}(x_{i},x_{j},\lambda_{ij}) \tag{1.3}\] \[\dot{w}_{ij} =\epsilon g_{ij}(x,w_{ij}).\]
If one were to attempt the analysis of (1.3) with techniques derived from Geometric Singular Perturbation Theory (GSPT), the nilpotent points of (1.3) restricted to \(\epsilon=0\) define a class of singularities near which the solutions may exhibit a quite intricate behavior for \(0<\epsilon\ll 1\).
As a particular example consider a family of adaptive Kuramoto oscillators
\[\phi_{i}^{\prime} =\Omega-\frac{1}{N}\sum_{j=1}^{N}w_{ij}\sin(\phi_{i}-\phi_{j}+\alpha)\] \[w_{ij}^{\prime} =-\varepsilon\left(\sin(\phi_{i}-\phi_{j}+\beta)+w_{ij}\right),\]
with parameters \(\alpha,\beta\) close to zero. A phase-locked equilibrium is given by
\[\phi_{i}^{*} =\psi_{i}\] \[w_{ij}^{*} =-\sin(\psi_{i}-\psi_{j}+\beta)\] \[\Omega =-\frac{1}{N}\sum_{j=1}^{N}\sin(\psi_{i}-\psi_{j}+\beta)\sin(\psi _{i}-\psi_{j}+\alpha),\]
for some \(\psi_{i}\in[0,2\pi)\), \(i=1,\ldots,N\). As an example, let us take a 4-node network and \(\psi=(\psi_{1},\psi_{2},\psi_{3},\psi_{4})=\left(0,\frac{\pi}{2},\pi,\frac{3 \pi}{2}\right)\). For this choice of equilibrium phases, the phase-locked equilibrium is nilpotent for \(\alpha=\beta=0\) and \(\Omega=-\frac{1}{2}\). Indeed we have that the relevant part of the linearization, at the aforementioned phase-locked equilibrium, is given by
\[J=\frac{1}{4}\begin{bmatrix}w_{13}&0&-w_{13}&0\\ 0&w_{24}&0&-w_{24}\\ -w_{31}&0&w_{31}&0\\ 0&-w_{42}&0&w_{42}\end{bmatrix}.\]
So, further noting that \(w_{13}^{*}=w_{31}^{*}=w_{24}^{*}=w_{42}^{*}=0\) we have that \(J=0\), implying that the phase-locked equilibrium is nilpotent. This example is further detailed in section 3.3.
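This nilpotency is straightforward to verify numerically. The sketch below uses only the quantities defined above (the phase-locked phases, the weights \(w_{ij}^{*}\) and \(\Omega=-1/2\)) and approximates the Jacobian of the phase equations by central differences; it is a numerical check, not a proof.

```python
import numpy as np

N = 4
psi = np.array([0.0, np.pi/2, np.pi, 3*np.pi/2])   # equilibrium phases
alpha, beta = 0.0, 0.0
W = -np.sin(psi[:, None] - psi[None, :] + beta)    # w_ij* = -sin(psi_i - psi_j + beta)

def phi_dot(phi, Omega=-0.5):
    return Omega - (W * np.sin(phi[:, None] - phi[None, :] + alpha)).sum(axis=1) / N

eps = 1e-6
J = np.zeros((N, N))
for j in range(N):
    e = np.zeros(N); e[j] = eps
    J[:, j] = (phi_dot(psi + e) - phi_dot(psi - e)) / (2 * eps)

print(np.round(J, 8))            # numerically the zero matrix
print(np.linalg.eigvals(J))      # all eigenvalues ~ 0
# At this equilibrium the w-equations contribute only zero rows (for epsilon = 0),
# so the full linearization of the layer problem is nilpotent as well.
```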
**Remark 2**: _Adaptivity is, of course, not necessary. See for example [33] where nilpotent equilibria of the classical Kuramoto model are considered._
As motivated in the previous examples, we are interested in network dynamical systems (1.1), equivalently (1.2), with a nilpotent singularity. Nilpotent points are highly important because they hint at the possibility of complicated bifurcations occurring in their vicinity.
From now on, we shall assume that there is a set of parameters \(\mu^{*}\), \(\lambda^{*}\) and edge weights \(w^{*}\) such that \(x^{*}\) is an _isolated_ nilpotent equilibrium point of (1.2).
**Remark 3**: _Assuming \(x^{*}\) being isolated is not too strict, at least for our arguments regarding the blow-up transformation preserving the network structure, because one can also blow-up higher-dimensional sets of equilibria in a similar way. We present a brief description of the blow-up transformation and several relevant references in section 2.1._
Without loss of generality, we further assume that the nilpotent point is at \(x^{*}=0\) for \((\mu^{*},\lambda^{*},w^{*})=(0,0,0)\). However, we emphasize that, as presented in the examples, a nilpotent point does not necessarily have to be related to a disconnected network. Let \(a_{i}(\mu)=\dfrac{\partial f_{i}}{\partial x_{i}}(0,\mu)\), \(D(\mu)=\operatorname{diag}\left\{a_{i}(\mu)\right\}_{i=1}^{N}\), and \(A(w,\lambda)=\dfrac{\partial H}{\partial x}(0,w,\lambda)\). So, we can write (1.2) as
\[\dot{x}=(D(\mu)+A(w,\lambda))x+\tilde{F}(x,\mu)+\tilde{H}(x,w,\lambda), \tag{1.4}\]
where, from our assumptions, \(D(0)+A(0,0)\) is a nilpotent matrix (neither summand needs to be the zero matrix) and \(\tilde{F}\) and \(\tilde{H}\) stand, respectively, for the higher-order terms (by this we mean monomials of degree higher than one) of the internal dynamics and of the interaction.
It is an important question whether one may indeed expect the leading term \(D(\mu)+A(w,\lambda)\) in (1.4) to be nilpotent for parameters \((\mu,w,\lambda)=(\mu^{*},w^{*},\lambda^{*})\). Roughly speaking, since \(n\) eigenvalues are required to be zero, we expect that this can be achieved, generically, for \(n\)-parameter families of dynamic networks (1.4). More formally, since \(GL(n)\) has dimension \(n^{2}\) and since the set of nilpotent matrices has dimension at most \(n(n-1)\), one expects that such matrices appear generically whenever \(D(\mu)+A(w,\lambda)\) has at least \(n\) independent parameters. In other words, if \(D(\mu)+A(w,\lambda)\) has at least \(n\) independent parameters, then there is at least one particular choice \((\mu^{*},w^{*},\lambda^{*})\) such that \(D(\mu^{*})+A(w^{*},\lambda^{*})\) is nilpotent. Moreover, since the only nilpotent symmetric matrix is the zero matrix, we shall consider that the interaction is _directed_.
**Remark 4**: _We emphasize that we do not put \(D(0)+A(0,0)\) in normal form since, generally speaking, a similarity transformation destroys the network structure of the system._
## 2 Blowing up preserves the network structure
Our aim in this section is to show that the (singular) coordinate transformation known as blow-up, when applied to a nilpotent singularity of a network dynamical system, preserves the network structure. Due to the importance of the blow-up, and although the literature already contains plenty of information on it [1, 17, 21], we provide a brief recollection of the blow-up as it is used in this paper. More importantly, in this section, we provide certain terminology and fix some notations that are later used in the examples of section 3.
### The blow-up
Let \(X:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be a smooth vector field2 such that \(X(0)=0\) and \(0\in\mathbb{R}^{n}\) is a nilpotent point. Let \(\phi:\mathbb{S}^{n-1}\times I\to\mathbb{R}^{n}\) be a weighted polar transformation given by
\[(\bar{x}_{1},\ldots,\bar{x}_{n},r)\mapsto(r^{\alpha_{1}}\bar{x}_{1},\ldots,r^{\alpha_{n}}\bar{x}_{n})=(x_{1},\ldots,x_{n}),\]
where \(\alpha_{i}\in\mathbb{N}\) for all \(i=1,\ldots,n\), \(\sum_{i=1}^{n}\bar{x}_{i}^{2}=1\) (i.e. \((\bar{x}_{1},\ldots,\bar{x}_{n})\in\mathbb{S}^{n-1}\)) and \(r\in I\subset\mathbb{R}\) where \(I\) is an interval containing the origin. We notice that \(\phi^{-1}\) maps the origin \(0\in\mathbb{R}^{n}\) to the sphere \(\mathbb{S}^{n-1}\times\{0\}\) and we usually say that "the origin is blown-up to a sphere". On the other hand, \(\phi\) is a diffeomorphism whenever \(r>0\). Due to the weights in the transformation, one refers to the transformation as quasihomogeneous (and homogeneous when all weights are the same). These weights often facilitate the desingularization (see a description below), but their appropriate choice can be quite complicated. If the diagram of figure 1 commutes, then we call \(\bar{X}\) the blown-up vector field.
Footnote 2: What follows also holds for smooth vector fields on smooth manifolds by taking a chart in a neighborhood of the nilpotent point.
Since \(X(0)=0\), it follows that \(\bar{X}|_{r=0}=0\). However, if the weights are appropriately chosen, one can often _desingularize_ \(\bar{X}\) by dividing by some power of \(r\). In other words, one can define the _desingularized_ vector field \(\widehat{X}=\dfrac{1}{r^{k}}\bar{X}\) such that \(\widehat{X}|_{r=0}\not\equiv 0\) and \(\widehat{X}\) has semi-hyperbolic singularities only (and no more nilpotent ones). Recalling that \(\phi\) is a diffeomorphism for \(r>0\), the main advantage now is that for \(r>0\) small (i.e. near \(\mathbb{S}^{n-1}\times\{0\}\)) one can recover the dynamics of \(\bar{X}\) from those of \(\widehat{X}\), and in turn these describe the dynamics of \(X\) near the origin. In
practice, however, working with spherical coordinates can become cumbersome very quickly. Hence, one prefers to work with _directional_ blow-ups, which are defined by replacing \(\mathbb{S}^{n-1}\) by its charts. Moreover, blowing up using a sphere is not necessary, as one can blow-up, for example, to a hyperbolic manifold [22].
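To make the procedure concrete, here is a small symbolic sketch of a directional blow-up and its desingularization. The planar field \(\dot{x}=y^{2}\), \(\dot{y}=x^{2}\) is a toy example chosen for this sketch (it is not one of the systems studied in this paper); its origin is nilpotent with identically zero linearization.

```python
import sympy as sp

r, yb = sp.symbols('r ybar')
x, y = r, r * yb                      # directional blow-up in the chart {xbar = 1}
fx, fy = y**2, x**2                   # toy nilpotent field: xdot = y**2, ydot = x**2

r_dot_raw = fx                        # x = r, hence rdot = xdot
yb_dot_raw = sp.simplify((fy - r_dot_raw * yb) / r)   # from y = r*ybar

# Desingularize by dividing out the common factor r (here k = 1):
r_dot = sp.simplify(r_dot_raw / r)    # r*ybar**2
yb_dot = sp.simplify(yb_dot_raw / r)  # 1 - ybar**3
print(r_dot, yb_dot)

# On {r = 0} the (real) equilibrium ybar = 1 is hyperbolic after blow-up:
J = sp.Matrix([[r_dot.diff(r), r_dot.diff(yb)],
               [yb_dot.diff(r), yb_dot.diff(yb)]]).subs({r: 0, yb: 1})
print(J.eigenvals())                  # {1: 1, -3: 1}
```

After the division by \(r\), the equilibria on the exceptional set \(\{r=0\}\) are hyperbolic, which is exactly the gain in hyperbolicity described above.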
Although the origins of the blow-up transformation are in algebraic geometry, the blow-up transformation has proven extremely useful in dynamical systems to study nilpotent singularities. It has been, for example, used to study degenerate bifurcation problems and their unfoldings [6, 35], see also the survey [1]; or, of particular relevance for this paper, to study some non-hyperbolic points of slow-fast systems [7, 20], see also the survey [17]. Indeed, in the aforementioned context, one can nowadays consider the blow-up transformation a standard tool of Geometric Singular Perturbation Theory [21, 36]. Over the past decades, the blow-up transformation has mostly been used for low-dimensional problems. In fact, it is known that analytic vector fields up to dimension three can be desingularized after a finite number of blow-ups [28, 29]. In this paper, we will see the usefulness of the blow-up transformation on systems that are usually considered large-dimensional, namely dynamic networks. As dynamic networks are usually of large dimension, a relevant direction in this regard is the blow-up in the context of PDEs [9, 10, 15, 19], which so far has been considered without network structure and which, in some cases, can be regarded as mean-field limits of dynamic networks [11, 23].
### Structure preservation
This section is dedicated to showing that a general directional blow-up preserves the network structure of a given dynamical system.
Figure 1: Commutative diagram defining a blow-up transformation.
**Remark 5**: _As described in the previous section, usually one expects that after desingularization via blow-up one gains a certain degree of hyperbolicity. The number of blow-up transformations to completely desingularize a nilpotent singularity is, however, not known a priori. This means that, after a blow-up, some of the new singularities may be semi-hyperbolic or (even still) nilpotent. This is nevertheless already good progress. In the former case, further analysis can be restricted to a lower dimensional subset, the corresponding center manifold, where the blow-up can be applied again. In the latter, the process of blowing-up can also be repeated in a system that is, usually, less degenerate than the starting one. Nevertheless, one expects that after a finite number of blow-ups, one can fully describe the dynamics of a system near a nilpotent singularity._
_We emphasize that in this section, we only focus on structure preservation via blowing up. The question of (full) desingularization is a nontrivial one and is highly dependent on the particular problem at hand, and the choice of weights in the quasihomogeneous blow-up (a nontrivial task in itself). We discuss some cases where one can achieve desingularization later in the examples of section 3._
Using the notation introduced above, let us consider a vector field of the form
\[X:\left\{\begin{aligned} \dot{x}&=F(x,\sigma)+H(x, \sigma)=(D_{\sigma}+A_{\sigma})x+\tilde{F}(x,\sigma)+\tilde{H}(x,\sigma)\\ \dot{\sigma}&=0,\end{aligned}\right. \tag{2.1}\]
where \(F\) stands for the internal dynamics, \(H\) the interaction, and \(\sigma\in\mathbb{R}^{p}\) includes all possible parameters, hence \(D_{0}+A_{0}\) is nilpotent. _Our objective is to show that a local vector field obtained by a directional blow-up has the same identifiable structure of internal dynamics plus interaction_.
**Remark 6**: _The blow-up technique is local. Hence, without further mentioning it, we assume that the vector fields we are considering are polynomial._
Let us define a quasihomogeneous blow-up via the map \(\Xi:\mathbb{R}_{\geq 0}\times\mathbb{S}^{N+p-1}\to\mathbb{R}^{N+p}\) given by:
\[\Xi:(r,\bar{x}_{1},\ldots,\bar{x}_{N},\bar{\sigma}_{1},\ldots,\bar{\sigma}_{p})\mapsto\underbrace{(r^{\alpha_{1}}\bar{x}_{1},\ldots,r^{\alpha_{N}}\bar{x}_{N},r^{\beta_{1}}\bar{\sigma}_{1},\ldots,r^{\beta_{p}}\bar{\sigma}_{p})}_{=(x_{1},\ldots,x_{N},\sigma_{1},\ldots,\sigma_{p})},\]
and where
\[\sum_{i=1}^{N}\bar{x}_{i}^{2}+\sum_{j=1}^{p}\bar{\sigma}_{j}^{2}=1.\]
We can define a _directional_ blow-up by fixing one of the blown-up variables in \(\mathbb{S}^{N+p-1}\) to \(\pm 1\) and letting the rest be coordinates in \(\mathbb{R}^{N+p-1}\) (this is, of course, reminiscent of a stereographic projection). The sign is, for now, rather inessential, so let us consider charts
\[K_{i} =\left\{\bar{x}_{i}=1\right\},\qquad i=1,\ldots,N\] \[Q_{j} =\left\{\bar{\sigma}_{j}=1\right\},\qquad j=1,\ldots,p.\]
Let us denote a corresponding directional blow-up by
\[\Phi_{i} =\Xi|_{\bar{x}_{i}=1},\] \[\Psi_{j} =\Xi|_{\bar{\sigma}_{j}=1}.\]
The blown-up vector field in the chart \(K_{i}\) (resp. \(Q_{j}\)) is then obtained via the application of the corresponding change of coordinates \(\Phi_{i}\) (resp. \(\Psi_{j}\)). For convenience, let
\[\begin{split}\bar{X}_{i}&=\left(\mathrm{D}\Phi_{i} \right)^{-1}X\circ\Phi_{i},\qquad i=1,\ldots,N\\ \bar{Y}_{j}&=\left(\mathrm{D}\Psi_{j}\right)^{-1}X \circ\Psi_{j},\qquad j=1,\ldots,p.\end{split} \tag{2.2}\]
**Remark 7**:
* _The quasi-homogeneous blow-up requires a careful choice of weights so that the local vector fields are well defined. In this section, however, we are only interested in the form the blown-up vector fields take and hence assume that (2.2) are all well defined, especially at \(r=0\) (possibly after a division by some power of \(r\))._
* _\(\Phi_{i}\) corresponds to a blow-up of the node \(x_{i}\). Hence, within the context of networks, we refer to it as a "node blow-up". On the other hand, the need to blow-up in the direction of a parameter arises when such a parameter is responsible for some singular behavior, for example when it induces a nilpotent singularity. So, if for example, \(\sigma_{k}\) corresponds to some "weight parameter" \(w_{ij}\), we shall refer to \(\Psi_{k}\) as an "edge blow-up". Of course, everything_
_that we will present below can be extended to network dynamical systems given by_
\[\dot{x}_{i} =f_{i}+\sum_{j=1}^{N}w_{ij}h_{ij},\] \[\dot{w}_{ij} =g_{ij},\]
_where the "edge-directional blow-up" would be more apparent. This consideration is, however, inessential for the purposes of the paper, which is to show the preservation of the network structure after the blow-up. See more details in section 2.5._
Since the compositions \(X\circ\Phi_{i}\) and \(X\circ\Psi_{j}\) do not change the network structure, we only need to look at the effect of pre-multiplication by the inverse of the derivative of the directional blow-up.
### Node directional blow-up
We first focus on the node blow-up, that is on the \(\bar{X}_{i}\)'s. For convenience and simplicity of the exposition, for each \(i=1,\dots,N\) let us reorder the components of \(X\) according to the coordinates
\[(x_{i},x_{1},\dots,x_{i-1},x_{i+1},\dots,x_{N},\sigma_{1},\dots,\sigma_{p}).\]
We call such a vector field \(X_{i}\), but notice that we are simply swapping the position of the \(i\)-th component. In this way the corresponding directional blow-up \(\Phi_{i}\) is given in coordinate form
by
\[\left(\begin{array}{c}r^{\alpha_{i}}\\ r^{\alpha_{1}}\bar{x}_{1}\\ \vdots\\ r^{\alpha_{i-1}}\bar{x}_{i-1}\\ r^{\alpha_{i+1}}\bar{x}_{i+1}\\ \vdots\\ r^{\alpha_{N}}\bar{x}_{N}\\ r^{\beta_{1}}\bar{\sigma}_{1}\\ \vdots\\ r^{\beta_{p}}\bar{\sigma}_{p}\end{array}\right)=\left(\begin{array}{c}x_{i}\\ x_{1}\\ \vdots\\ x_{i-1}\\ x_{i+1}\\ \vdots\\ x_{N}\\ \sigma_{1}\\ \vdots\\ \sigma_{p}\end{array}\right),\]
and therefore
\[\mathrm{D}\Phi_{i}=\begin{bmatrix}\rho_{i}&0_{1\times(N-1)}&0_{1\times p}\\ \chi_{i}&R_{i}&0_{(N-1)\times p}\\ \zeta&0_{p\times(N-1)}&B\end{bmatrix}\]
where
\[\rho_{i} =\alpha_{i}r^{\alpha_{i}-1},\] \[\chi_{i} =(\alpha_{1}r^{\alpha_{1}-1}\bar{x}_{1},\cdots,\alpha_{i-1}r^{\alpha_{i-1}-1}\bar{x}_{i-1},\alpha_{i+1}r^{\alpha_{i+1}-1}\bar{x}_{i+1},\cdots,\alpha_{N}r^{\alpha_{N}-1}\bar{x}_{N})^{\top},\] \[R_{i} =\mathrm{diag}\left\{r^{\alpha_{1}},\cdots,r^{\alpha_{i-1}},r^{\alpha_{i+1}},\cdots,r^{\alpha_{N}}\right\},\] \[\zeta =(\beta_{1}r^{\beta_{1}-1}\bar{\sigma}_{1},\cdots,\beta_{p}r^{\beta_{p}-1}\bar{\sigma}_{p})^{\top},\] \[B =\mathrm{diag}\left\{r^{\beta_{1}},\cdots,r^{\beta_{p}}\right\}.\]
Since \(\mathrm{D}\Phi_{i}\) is lower triangular, its inverse is also lower triangular. In fact, we have:
\[(\mathrm{D}\Phi_{i})^{-1}=\frac{1}{\rho_{i}}\begin{bmatrix}1&0_{1\times(N-1)}&0_{1\times p}\\ -R_{i}^{-1}\chi_{i}&\rho_{i}R_{i}^{-1}&0_{(N-1)\times p}\\ -B^{-1}\zeta&0_{p\times(N-1)}&\rho_{i}B^{-1}\end{bmatrix}.\]
**Remark 8**: _As expected, \(\mathrm{D}\Phi_{i}\) is invertible for \(r\neq 0\)._
Thus, the local vector field \(\bar{X}_{i}=(\mathrm{D}\Phi_{i})^{-1}X_{i}\circ\Phi_{i}\) can be written in local coordinates4 as
\[\bar{X}_{i}:\begin{cases}r^{\prime}&=\frac{r^{1-\alpha_{i}}}{\alpha_{i}}\left(\bar{f}_{i}+\bar{h}_{i}\right),\\ \bar{x}_{k}^{\prime}&=\frac{1}{r^{\alpha_{k}}}\left(\bar{f}_{k}+\bar{h}_{k}-\frac{\alpha_{k}}{\alpha_{i}}r^{\alpha_{k}-\alpha_{i}}\bar{x}_{k}(\bar{f}_{i}+\bar{h}_{i})\right),\\ \bar{\sigma}_{j}^{\prime}&=-\frac{\beta_{j}}{\alpha_{i}}r^{-\alpha_{i}}\bar{\sigma}_{j}(\bar{f}_{i}+\bar{h}_{i}),\end{cases}\]
where \(k=1,\ldots,i-1,i+1,\ldots,N\), \(j=1,\ldots,p\) and where \(\bar{f}_{\ell}\) and \(\bar{h}_{\ell}\) are the \(\ell\)-th component of \(F\circ\Phi_{i}\) and \(H\circ\Phi_{i}\) respectively. Notice already that the equations for \(\bar{x}_{k}\) keep, in some sense to be detailed below, the network structure.
Footnote 4: We recycle the coordinate notation in each chart to avoid introducing obfuscating new terminology. So, for example, the local coordinates in the chart \(K_{i}\) are \((r,\bar{x}_{1},\ldots,\bar{x}_{i-1},\bar{x}_{i+1},\ldots,\bar{x}_{N},\bar{\sigma}_{1},\ldots,\bar{\sigma}_{p})\), and similarly the local coordinates in the chart \(Q_{j}\) are \((r,\bar{x}_{1},\ldots,\bar{x}_{N},\bar{\sigma}_{1},\ldots,\bar{\sigma}_{j-1},\bar{\sigma}_{j+1},\ldots,\bar{\sigma}_{p})\). The distinction between these local coordinates is only necessary when transitioning from one chart to another. Furthermore, along the text we use the common notation \(r^{\alpha}\bar{x}=(r^{\alpha_{1}}\bar{x}_{1},\ldots,r^{\alpha_{N}}\bar{x}_{N})\), which within the chart \(K_{i}\) should be understood as \(r^{\alpha}\bar{x}=(r^{\alpha_{i}},r^{\alpha_{1}}\bar{x}_{1},\ldots,r^{\alpha_{i-1}}\bar{x}_{i-1},r^{\alpha_{i+1}}\bar{x}_{i+1},r^{\alpha_{N}}\bar{x}_{N})\), and similarly for \(r^{\beta}\bar{\sigma}\).
Let \([M]_{ij}\) denote the \(i,j\) entry of the matrix \(M\). Recall from (1.4) that \(F+H=(D_{\sigma}+A_{\sigma})x+\cdots\). This implies that, up to leading order terms, we have:
\[\bar{f}_{i}+\bar{h}_{i}=(a_{i}(r^{\beta}\bar{\sigma})+[A_{r^{\beta}\bar{ \sigma}}]_{ii})r^{\alpha_{i}}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}[A_{r^{\beta}\bar{\sigma}}]_{ij}r^{\alpha_{j}}\bar{x }_{j}+\cdots\]
\[\bar{f}_{k}+\bar{h}_{k}=[A_{r^{\beta}\bar{\sigma}}]_{ki}r^{\alpha_{i}}+(a_{k}(r^{\beta}\bar{\sigma})+[A_{r^{\beta}\bar{\sigma}}]_{kk})r^{\alpha_{k}}\bar{x}_{k}+\sum_{\begin{subarray}{c}j=1\\ j\neq i,k\end{subarray}}^{N}[A_{r^{\beta}\bar{\sigma}}]_{kj}r^{\alpha_{j}}\bar{x}_{j}+\cdots,\]
which hints at the choice of the blow-up weights \(\alpha_{i}=\alpha\) for all \(i=1,\ldots,N\) so that every local vector field \(\bar{X}_{i}\) is well defined for \(r=0\). Under such a choice we have, up to leading order
terms, the blown-up vector field \(\bar{X}_{i}\) given by:
\[\begin{split} r^{\prime}&=\frac{r}{\alpha}\left((a_{i}(r^{\beta}\bar{\sigma})+[A_{r^{\beta}\bar{\sigma}}]_{ii})+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}[A_{r^{\beta}\bar{\sigma}}]_{ij}\bar{x}_{j}+\cdots\right)\\ \bar{x}_{k}^{\prime}&=[A_{r^{\beta}\bar{\sigma}}]_{ki}+(a_{k}(r^{\beta}\bar{\sigma})+[A_{r^{\beta}\bar{\sigma}}]_{kk})\bar{x}_{k}+\sum_{\begin{subarray}{c}j=1\\ j\neq i,k\end{subarray}}^{N}[A_{r^{\beta}\bar{\sigma}}]_{kj}\bar{x}_{j}+\cdots\\ &\qquad-\bar{x}_{k}\left((a_{i}(r^{\beta}\bar{\sigma})+[A_{r^{\beta}\bar{\sigma}}]_{ii})+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}[A_{r^{\beta}\bar{\sigma}}]_{ij}\bar{x}_{j}+\cdots\right)\\ \bar{\sigma}_{m}^{\prime}&=-\frac{\beta_{m}}{\alpha}\bar{\sigma}_{m}\left((a_{i}(r^{\beta}\bar{\sigma})+[A_{r^{\beta}\bar{\sigma}}]_{ii})+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}[A_{r^{\beta}\bar{\sigma}}]_{ij}\bar{x}_{j}+\cdots\right),\qquad m=1,\ldots,p.\end{split} \tag{2.3}\]
For \(r=0\) we have that \(r^{\prime}=0\) and
\[\bar{x}_{k}^{\prime} =[A_{0}]_{ki}+(a_{k}(0)+[A_{0}]_{kk}-a_{i}(0)-[A_{0}]_{ii})\bar{x} _{k}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\\ j\neq k\end{subarray}}^{N}[A_{0}]_{kj}\bar{x}_{j}+\mathcal{O}(|\bar{x}|^{2}) \tag{2.4}\] \[\bar{\sigma}_{j}^{\prime} =-\frac{\beta_{j}}{\alpha}\left(a_{i}(0)+[A_{0}]_{ii}\right)\bar{ \sigma}_{j}+\mathcal{O}(\bar{\sigma}_{j}\bar{x}),\]
where \(k=1,\ldots,i-1,i+1,\ldots,N\). We can see that the blown-up system (2.3) has "dynamic" parameters \(\bar{\sigma}\). However, for \(r=0\) (i.e. (2.4)) the network is static. Moreover, we see that the equilibrium point of \(\bar{x}^{\prime}\) may not be at the origin any more.
**Interpretation:** one can give several network interpretations to the equation for \(\bar{x}^{\prime}\) in (2.4). For this, it is convenient to rewrite the first part of (2.4) as
\[\bar{x}_{k}^{\prime}=[A_{0}]_{ki}+(a_{k}(0)-a_{i}(0)-[A_{0}]_{ii})\bar{x}_{k}+ \sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}[A_{0}]_{kj}\bar{x}_{j}+\cdots,\]
where we can notice a network with \(N-1\) nodes \((\bar{x}_{1},\ldots,\bar{x}_{i-1},\bar{x}_{i+1},\ldots,\bar{x}_{N})\).
\(\bullet\) As a first interpretation, one may say that the internal dynamics are now modified ("shifted" by \([A_{0}]_{ki}\) and "scaled" by \([A_{0}]_{ii}\)) by the influence of node \(i\) (which has been blown-up and "does not exist anymore"). The interaction remains the same, except that node \(i\) does not appear anymore.
\(\bullet\) Another interpretation could be given when we re-write (2.4) as:
\[\bar{x}_{k}^{\prime}=[A_{0}]_{ki}+a_{k}(0)\bar{x}_{k}+\sum_{\begin{subarray}{ c}j=1\\ j\neq i\end{subarray}}^{N}[\tilde{A}_{0}]_{kj}\bar{x}_{j}+\cdots,\]
where \([\tilde{A}_{0}]_{kj}=-a_{i}(0)-[A_{0}]_{ii}+[A_{0}]_{kk}\) if \(j=k\) and \([\tilde{A}_{0}]_{kj}=[A_{0}]_{kj}\) otherwise. This, instead, can be interpreted as the "self-interaction" having been modified by the (nonexistent) node \(i\), while the term \([A_{0}]_{ki}\) can be regarded as a constant input, which is nonzero if and only if there is a connection from node \(i\) to node \(k\).
Finally, if \(r=0\) is hyperbolic in the \(r\)-direction (which would be the case under desingularization), then we expect that (2.3) is a regular perturbation of (2.4) for \(r>0\) sufficiently small. This means that the dynamics of (2.1) in a small neighborhood of the nilpotent origin can be determined from those of (2.4), see a relevant example in section 3.
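Both interpretations can be verified symbolically. The following sketch performs the node-directional blow-up (with homogeneous weights \(\alpha_{i}=1\)) on a generic linear 3-node system \(\dot{x}=Mx\), where \(M\) stands for \(D_{0}+A_{0}\); the output has exactly the affine network structure of (2.4), plus quadratic corrections.

```python
import sympy as sp

N = 3
M = sp.Matrix(N, N, lambda i, j: sp.Symbol(f'm{i+1}{j+1}'))   # plays the role of D_0 + A_0
r = sp.Symbol('r')
xb = [sp.Integer(1)] + [sp.Symbol(f'xb{k}') for k in range(2, N + 1)]
x = sp.Matrix([r * xbi for xbi in xb])     # node blow-up in the chart {xbar_1 = 1}

f = M * x
r_dot = f[0]                                # since x_1 = r
for k in range(1, N):
    rhs = sp.expand((f[k] - r_dot * xb[k]) / r)
    print(f"xb{k+1}' =", rhs)
# Up to term ordering, this prints e.g.
#   xb2' = m21 + (m22 - m11)*xb2 + m23*xb3 - m12*xb2**2 - m13*xb2*xb3,
# i.e. a "shifted" constant input m21, a rescaled self-interaction, the old
# off-diagonal couplings, and higher-order corrections, as in (2.4).
```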
### Parameter directional blow-up
We now turn our attention to the blow-up in the parameter direction (the \(\bar{Y}_{j}\)'s in (2.2)). For convenience, let us recall that we are considering the (extended) vector field
\[X:\left\{\begin{aligned} \dot{x}&=F(x,\sigma)+H(x,\sigma)=(D_{\sigma}+A_{\sigma})x+\tilde{F}(x,\sigma)+\tilde{H}(x,\sigma)\\ \dot{\sigma}&=0,\end{aligned}\right. \tag{2.5}\]
where \(D_{0}+A_{0}\) is nilpotent. If one rewrites (2.5) as \(X=\sum_{i=1}^{N}f_{i}\frac{\partial}{\partial x_{i}}+\sum_{j=1}^{p}0\frac{\partial}{\partial\sigma_{j}}\), it is rather convenient to consider the vector field
\[Y_{j}=0\frac{\partial}{\partial\sigma_{j}}+\sum_{i=1}^{N}f_{i}\frac{\partial}{ \partial x_{i}}+\sum_{k=1,\,k\neq j}^{p}0\frac{\partial}{\partial\sigma_{k}},\]
with the directional blow-up \(\Psi_{j}\) given by local coordinates:
\[\left(\begin{array}{c}r^{\beta_{j}}\\ r^{\alpha}\bar{x}_{1}\\ \vdots\\ r^{\alpha}\bar{x}_{N}\\ r^{\beta_{1}}\bar{\sigma}_{1}\\ \vdots\\ r^{\beta_{j-1}}\bar{\sigma}_{j-1}\\ r^{\beta_{j+1}}\bar{\sigma}_{j+1}\\ \vdots\\ r^{\beta_{p}}\bar{\sigma}_{p}\end{array}\right)=\left(\begin{array}{c} \sigma_{j}\\ x_{1}\\ \vdots\\ x_{N}\\ \sigma_{1}\\ \vdots\\ \sigma_{j-1}\\ \sigma_{j+1}\\ \vdots\\ \sigma_{p}\end{array}\right).\]
Recall that from the previous section we have already chosen the blow-up weights \(\alpha_{i}=\alpha\) for all \(i=1,\ldots,N\). Therefore
\[\mathrm{D}\Psi_{j}=\begin{bmatrix}\rho_{j}&0_{1\times N}&0_{1\times(p-1)}\\ \chi&R&0_{N\times(p-1)}\\ \zeta_{j}&0_{(p-1)\times N}&B_{j}\end{bmatrix},\]
where
\[\rho_{j} =\beta_{j}r^{\beta_{j}-1}\] \[\chi =(\alpha r^{\alpha-1}\bar{x}_{1},\ldots,\alpha r^{\alpha-1}\bar{x}_{N})^{\top}\] \[R =\text{diag}\left\{r^{\alpha},\ldots,r^{\alpha}\right\}\] \[\zeta_{j} =(\beta_{1}r^{\beta_{1}-1}\bar{\sigma}_{1},\ldots,\beta_{j-1}r^{\beta_{j-1}-1}\bar{\sigma}_{j-1},\beta_{j+1}r^{\beta_{j+1}-1}\bar{\sigma}_{j+1},\ldots,\beta_{p}r^{\beta_{p}-1}\bar{\sigma}_{p})^{\top}\] \[B_{j} =\text{diag}\left\{r^{\beta_{1}},\ldots,r^{\beta_{j-1}},r^{\beta_{j+1}},\ldots,r^{\beta_{p}}\right\}.\]
Following similar steps as in the previous section, we find that the corresponding local vector field (in the chart \(Q_{j}\)) reads as
\[\bar{Y}_{j}=\begin{cases}r^{\prime}&=0\\ \bar{x}_{i}^{\prime}&=\frac{1}{r^{\alpha}}(\bar{f}_{i}+\bar{h}_{i})\\ \bar{\sigma}_{k}^{\prime}&=0,\end{cases}\]
where \(k=1,\ldots,j-1,j+1,\ldots,p\), and \(\bar{f}_{\ell}\) and \(\bar{h}_{\ell}\) are the \(\ell\)-th component of \(F\circ\Psi_{j}\) and \(H\circ\Psi_{j}\) respectively. Again, it is evident that the blown-up vector field \(\bar{Y}_{j}\) still has a network structure. More specifically we have:
\[\bar{f}_{i} =a_{i}(r^{\beta}\bar{\sigma})r^{\alpha}\bar{x}_{i}+\mathcal{O}(r^{2\alpha})\] \[\bar{h}_{i} =\sum_{k=1}^{N}\left[A_{r^{\beta}\bar{\sigma}}\right]_{ik}r^{\alpha}\bar{x}_{k}+\mathcal{O}(r^{2\alpha}).\]
We recall that in the chart \(Q_{j}\) we use the short-hand notation
\[r^{\beta}\bar{\sigma}=(r^{\beta_{1}}\bar{\sigma}_{1},\ldots,r^{\beta_{j-1}} \bar{\sigma}_{j-1},r^{\beta_{j}},r^{\beta_{j+1}}\bar{\sigma}_{j+1},\ldots,r^{ \beta_{p}}\bar{\sigma}_{p}).\]
**Interpretation:**: in this case the blow-up is essentially a rescaling, where one wants to focus on the influence of the particular parameter \(\sigma_{j}\). After successful desingularization (which depends on the vector fields and the choice of blow-up weights), the blown-up vector field
\(\bar{Y}_{j}\) is, qualitatively speaking, \(Y_{j}\) with the parameter \(\bar{\sigma}_{j}=1\) (or \(\bar{\sigma}_{j}=-1\) depending on the direction of the blow-up). See the example in section 3.3.
From the above insight, since a parameter \(\sigma_{i}\in\sigma\) could be an edge weight, and due to its relevance in applied sciences, we now discuss the edge-directional blow-up in the context of adaptive networks.
### Edge-directional blow-up for adaptive networks
Let us now consider a slowly adaptive network
\[\begin{split}\dot{x}_{i}&=f_{i}(x_{i})+\sum_{j=1} ^{N}w_{ij}h_{ij}(x_{i},x_{j})\\ \dot{w}_{ij}&=\epsilon g_{ij}(x,w_{ij}).\end{split} \tag{2.6}\]
**Remark 9**: _Frequently, the adaptation rule \(g_{ij}\) in applications depends only on the nodes \((i,j)\), see e.g. [3]; but for our purposes such specification is not relevant._
Consider an edge-directional blow-up for the edge \((k,l)\) (for simplicity we only consider the positive direction)
\[x_{i}=r\bar{x}_{i},\,w_{ij}=r\bar{w}_{ij},\,\epsilon=r\bar{\epsilon},\,w_{kl}= r,\qquad ij\neq kl. \tag{2.7}\]
**Remark 10**: _In this section, we only focus on the preservation of network structure and its interpretation. If more details are known from the particular functions in (2.6), then a more appropriate choice of blow-up (2.7) can be made._
The corresponding blown-up system reads as:
\[\begin{split}\dot{r}&=r\bar{\epsilon}\bar{g}_{kl}(r,\bar{x})\\ \dot{\bar{x}}_{i}&=\frac{\bar{f}_{i}(r,\bar{x}_{i})}{r}+\sum_{j=1}^{N}\bar{w}_{ij}\bar{h}_{ij}(r,\bar{x}_{i},\bar{x}_{j})-\bar{\epsilon}\bar{x}_{i}\bar{g}_{kl}(r,\bar{x}),\qquad i\neq k\\ \dot{\bar{x}}_{k}&=\frac{\bar{f}_{k}(r,\bar{x}_{k})}{r}+\sum_{j=1,j\neq l}^{N}\bar{w}_{kj}\bar{h}_{kj}(r,\bar{x}_{k},\bar{x}_{j})+\bar{h}_{kl}(r,\bar{x}_{k},\bar{x}_{l})-\bar{\epsilon}\bar{x}_{k}\bar{g}_{kl}(r,\bar{x}),\\ \dot{\bar{w}}_{ij}&=\bar{\epsilon}(\bar{g}_{ij}(r,\bar{x},\bar{w}_{ij})-\bar{w}_{ij}\bar{g}_{kl}(r,\bar{x})),\quad ij\neq kl\\ \dot{\bar{\epsilon}}&=-\bar{\epsilon}^{2}\bar{g}_{kl}(r,\bar{x})\end{split} \tag{2.8}\]
The dynamics restricted to the invariant subset \(\{\bar{\epsilon}=0\}\) are often useful. One accordingly has that (2.8) reduces to
\[\begin{split}\dot{r}&=0\\ \dot{\bar{x}}_{i}&=\frac{\bar{f}_{i}(r,\bar{x}_{i})}{ r}+\sum_{j=1}^{N}\bar{w}_{ij}\bar{h}_{ij}(r,\bar{x}_{i},\bar{x}_{j}),\qquad i \neq k\\ \dot{\bar{x}}_{k}&=\frac{\bar{f}_{k}(r,\bar{x}_{k}) }{r}+\sum_{j=1,j\neq l}^{N}\bar{w}_{kj}\bar{h}_{kj}(r,\bar{x}_{k},\bar{x}_{j} )+\bar{h}_{kl}(r,\bar{x}_{k},\bar{x}_{l}),\\ \dot{\bar{w}}_{ij}&=0,\quad ij\neq kl\end{split} \tag{2.9}\]
which is the model of a static network, with weights \(\bar{w}_{ij}\) and fixed weight \(\bar{w}_{kl}=1\). If \(\bar{\epsilon}\) remains small, then (2.8) can be interpreted as a small perturbation of the static model (2.9). In this case, we notice that the weights \(\bar{w}_{ij}\), \(ij\neq kl\), would evolve slowly, while \(\bar{w}_{kl}\) is still fixed to 1. Moreover, we notice that the adaptation effects of \(w_{kl}\) are "globally transported" to the dynamics of the nodes. This may have important applications to elucidate the influence of certain edges on the rest of the network. If (2.9) has been desingularized, then the perturbation (2.8) would be regular.
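As a small numerical illustration of this discussion, one can integrate the blown-up system (2.8) for a two-node network with the edge \((1,2)\) blown up. The functions \(f_{i}(x)=-x^{3}\), \(h_{ij}(x_{i},x_{j})=x_{j}-x_{i}\) and \(g_{ij}(x,w_{ij})=x_{i}x_{j}-w_{ij}\) below are hypothetical choices made for this sketch, and one common factor of \(r\) has additionally been divided out (a time rescaling) so that the right-hand side is regular at \(r=0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def blown_up(t, y):
    # y = (r, xb1, xb2, wb21, ebar); the blown-up edge is frozen at wb12 = 1.
    r, xb1, xb2, wb21, ebar = y
    g12 = r * xb1 * xb2 - 1.0     # g_12 in blown-up variables, one factor r removed
    g21 = r * xb1 * xb2 - wb21
    return [ebar * r * g12,
            -r * xb1**3 + (xb2 - xb1) - ebar * xb1 * g12,
            -r * xb2**3 + wb21 * (xb1 - xb2) - ebar * xb2 * g12,
            ebar * (g21 - wb21 * g12),
            -ebar**2 * g12]

sol = solve_ivp(blown_up, (0, 30), [0.1, 1.0, -0.5, 0.8, 0.01], rtol=1e-9)
print("free edge wb21:", sol.y[3, 0], "->", sol.y[3, -1])   # slow O(ebar) drift
print("node states:", sol.y[1, -1], sol.y[2, -1])           # nodes near consensus
```

For \(\bar{\epsilon}=0\) the weights freeze and one recovers the static network (2.9); for small \(\bar{\epsilon}>0\) the free weights drift slowly, as described above.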
## 3 Examples
In this section, we present several examples that highlight the use of the blow-up for network dynamical systems. In particular, we focus on network structure preservation and the possibility of desingularization after blow-up.
### Linearly Diffusive systems
Our first example concerns a set of diffusively coupled nodes with linear internal dynamics:
\[\dot{x}_{i}=a_{i}x_{i}+\sum_{j=1}^{N}w_{ij}(x_{j}-x_{i}), \tag{3.1}\]
with parameters \(a_{i}\in\mathbb{R}\), \(a_{i}\neq 0\), and \(w_{ij}\in\mathbb{R}\) such that the origin is nilpotent. For the moment, we focus on describing the flow near the origin _with the parameters fixed_.
To start, let us consider the case where \(N=2\), that is:
\[\dot{x}_{1} =a_{1}x_{1}+w_{12}(x_{2}-x_{1}) \tag{3.2}\] \[\dot{x}_{2} =a_{2}x_{2}+w_{21}(x_{1}-x_{2}).\]
For this, the nilpotency condition is given by the two simultaneous equations
\[a_{1}-w_{12}+a_{2}-w_{21} =0 \tag{3.3}\] \[a_{1}a_{2}-a_{1}w_{21}-a_{2}w_{12} =0.\]
Notice that for these equations to have a nontrivial solution it is necessary that \(w_{12}\neq w_{21}\) and \(a_{1}\neq a_{2}\), which is fulfilled, generically, by having (at least) two independent parameters. Assuming that \(a_{1},a_{2}\) are independent, we have that \(w_{ij}=w_{ij}^{*}:=\dfrac{a_{i}^{2}}{a_{i}-a_{j}}\) makes the origin nilpotent.
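This computation is easily reproduced symbolically; the following sketch solves the nilpotency conditions (3.3) and confirms that the resulting Jacobian squares to zero.

```python
import sympy as sp

a1, a2, w12, w21 = sp.symbols('a1 a2 w12 w21')
J = sp.Matrix([[a1 - w12, w12],
               [w21, a2 - w21]])        # linearization of (3.2) at the origin

# Nilpotency of a 2x2 matrix amounts to trace = 0 and det = 0, i.e. (3.3):
sol = sp.solve([J.trace(), J.det()], [w12, w21], dict=True)
print(sol)   # one branch: w12 = a1**2/(a1 - a2), w21 = a2**2/(a2 - a1)
print(sp.simplify((J**2).subs(sol[0])))   # Matrix([[0, 0], [0, 0]])
```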
Consider the node-directional blow-up5
\[x_{1}=\pm r,\,x_{2}=r\bar{x}_{2},\]
where the \(\pm\) sign accounts for the charts \(K_{1}^{\pm}:\{\bar{x}_{1}=\pm 1\}\), and which leads to the local vector field
\[r^{\prime} =r\left(a_{1}\pm w_{12}(\bar{x}_{2}\mp 1)\right)\] \[\bar{x}_{2}^{\prime} =\pm w_{21}+\bar{x}_{2}(-a_{1}+a_{2}+w_{12}-w_{21}\mp w_{12}\bar{x}_{2}),\]
Footnote 5: It can be easily checked that these two charts are sufficient as in the charts \(K_{2}^{\pm}=\{\bar{x}_{2}=\pm 1\}\) one obtains analogous local vector fields.
whereby substituting \(w_{ij}=w_{ij}^{*}=\dfrac{a_{i}^{2}}{a_{i}-a_{j}}\) we get
\[r^{\prime} =-\dfrac{a_{1}r}{a_{1}-a_{2}}(a_{2}\mp a_{1}\bar{x}_{2}) \tag{3.4}\] \[\bar{x}_{2}^{\prime} =\mp\dfrac{(a_{2}\mp a_{1}\bar{x}_{2})^{2}}{a_{1}-a_{2}}.\]
The equilibrium point of (3.4), namely \((r,\bar{x}_{2})=(0,\pm\frac{a_{2}}{a_{1}})\), is still nilpotent, and in fact the linearization matrix at the equilibrium point is the zero matrix. However, as a slight advantage, (3.4) is in triangular form and can be desingularized6 to find the classification shown in figure 2.
Footnote 6: The desingularized vector is found by dividing (3.4) by \((a_{2}\mp a_{1}\bar{x}_{2})\), and provides the phase-portrait of (3.4) away from \((a_{2}\mp a_{1}\bar{x}_{2})=0\) after reversing the direction of the flow in the region \((a_{2}\mp a_{1}\bar{x}_{2})<0\).
We further emphasize that the equilibrium of (3.4) is as degenerate as it can get, in the sense that shifting the equilibrium point to the origin one gets (re-using the variables)
\[r^{\prime} =\dfrac{a_{1}^{2}}{a_{1}-a_{2}}r\bar{x}_{2}\] \[\bar{x}_{2}^{\prime} =-\dfrac{a_{1}^{2}}{a_{1}-a_{2}}\bar{x}_{2}^{2},\]
which has linearization matrix \(\begin{bmatrix}0&0\\ 0&0\end{bmatrix}\), instead of the less degenerate form \(\begin{bmatrix}0&1\\ 0&0\end{bmatrix}\), which is mostly considered in the literature. Hence, for example, the techniques of [25] cannot be applied to achieve the stability of the origin via the introduction of higher-order terms. Of course, stabilization by modifying the weights, or equivalently by introducing new linear
terms, is quite straightforward. If one were to consider weights
\[w_{ij}=w_{ij}^{*}+\delta_{ij},\]
in the blown-up local vector field above, one finds equilibria \((r^{*},\bar{x}_{2}^{*})\) given by \(r^{*}=0\) and
\[\bar{x}_{2}^{*}=\frac{\pm(a_{2}-a_{1})\sqrt{4a_{1}\delta_{21}+4a_{2}\delta_{12}+( \delta_{12}+\delta_{21})^{2}}+2a_{1}a_{2}+a_{1}\delta_{12}-a_{1}\delta_{21}-a_{ 2}\delta_{12}+a_{2}\delta_{21}}{2\left(a_{1}^{2}+a_{1}\delta_{12}-a_{2}\delta_ {12}\right)}\]
with corresponding Jacobian
\[J=\begin{bmatrix}\frac{1}{2}\left(\pm\sqrt{4a_{1}\delta_{21}+4a_{2}\delta_{12} +(\delta_{12}+\delta_{21})^{2}}-\delta_{12}-\delta_{21}\right)&0\\ 0&\mp\sqrt{4a_{1}\delta_{21}+4a_{2}\delta_{12}+(\delta_{12}+\delta_{21})^{2} }\end{bmatrix}\]
from which one can conclude, for example, that if \(-\delta_{12}-\delta_{21}<0\) and \(4a_{1}\delta_{21}+4a_{2}\delta_{12}<0\) then there is one stable node and one saddle, and the flow on both \(r\)-eigenspaces is contracting. This then leads to a stable origin for (3.2) with the perturbed weights, see figure 3 for an example.
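A quick numerical check of this stabilization mechanism (the values \(a_{1}=1\), \(a_{2}=-1/2\), \(\delta_{12}=1/2\), \(\delta_{21}=1/10\) are illustrative choices satisfying both sign conditions above):

```python
import numpy as np

a1, a2 = 1.0, -0.5
d12, d21 = 0.5, 0.1             # -d12 - d21 = -0.6 < 0, 4*a1*d21 + 4*a2*d12 = -0.6 < 0
w12 = a1**2 / (a1 - a2) + d12   # w_ij = w_ij* + delta_ij
w21 = a2**2 / (a2 - a1) + d21
J = np.array([[a1 - w12, w12],
              [w21, a2 - w21]])
print(np.linalg.eigvals(J))     # both eigenvalues have negative real part
```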
It would be interesting to investigate, however, if higher-order interactions [4] can stabilize the origin. This is, in general, a very complicated problem and we limit ourselves to providing
a proof of concept. Suppose one introduces higher-order terms as follows:
\[\begin{split}\dot{x}_{1}&=a_{1}x_{1}+w_{12}(x_{2}-x_{1 })-wx_{2}^{3}+w(x_{1}+x_{2})^{3}\\ \dot{x}_{2}&=a_{2}x_{2}+w_{21}(x_{1}-x_{2})+wx_{2}^{3 },\end{split} \tag{3.5}\]
where \(w\in\mathbb{R}\). Upon a time-rescaling, one can assume that all parameters are small. To simplify the analysis, but keeping the problem still interesting, let \(a_{1}=\alpha\) and \(a_{2}=-\alpha\) with \(\alpha>0\), hence the uncoupled dynamics are unstable. We emphasize that the origin is still nilpotent for (3.5) with \(w_{ij}=w_{ij}^{*}\). Consider the polar blow-up
\[(x_{1},x_{2})=(r\cos(\theta),r\sin(\theta))\]
together with the rescaling of parameters \(\alpha=r^{3}A,w=rW\). In these new coordinates, and after division by \(r^{3}\), we obtain the vector field
\[\begin{split} r^{\prime}&=rf_{1}(\theta)\\ \theta^{\prime}&=f_{2}(\theta)\end{split} \tag{3.6}\]
where, having substituted \(w_{ij}=w_{ij}^{*}\), we have
\[\begin{split} f_{1}(\theta)&=\frac{1}{8}(4A\cos(2 \theta)+W(6\sin(2\theta)+3\sin(4\theta)-\cos(4\theta)+9))\\ f_{2}(\theta)&=\frac{1}{8}(-2(2A+3W)\sin(2\theta) -4A+W\sin(4\theta)+3W\cos(4\theta)-3W).\end{split}\]
It follows that if we let7\(W=-A\), then (3.6) has eight hyperbolic equilibria for \(r=0\) given by
\[\{\theta_{i}\}\approx\{0.431808,1.26918,2.35619,2.54775,3.5734,4.41077,5.49779,5.68935\},\;i=1,\ldots,8,\]
Footnote 7: One can obtain the equilibria and their corresponding Jacobians numerically for any value of \(A>0\) and \(W<0\) leading to a similar qualitative behavior. It just happens that for this choice, the equilibria are independent of the parameters.
where in particular \((r,\theta)=(0,\theta_{k})\), \(k=2,4,6,8\), are stable nodes and the rest are saddles with the \(r\)-direction stable. This leads to the diagram shown in figure 4.
**Remark 11**: _Notice that thanks to the higher-order terms, the equilibria of the blown-up system are hyperbolic. That is, one can indeed desingularize a nilpotent equilibrium of a network dynamical system via the blow-up._
Due to hyperbolicity, a qualitatively similar result, as the one stated above, holds for \(W\approx-A\). Moreover, since the result does not depend on the rescaling of the parameters, just on their signs, we conclude that the origin of (3.5) is stable for \(a_{1}=\alpha\), \(a_{2}=-\alpha\) with \(\alpha>0\) and \(w<0\), see a simulation in figure 4.
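The stability claim can be probed by direct simulation of (3.5); the parameters below mirror those of figure 4(c), while the initial condition and integration time are illustrative choices. The decay toward the origin is algebraic rather than exponential, since the equilibrium is nilpotent.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, w = 1.0, -2.0
a1, a2 = alpha, -alpha
w12, w21 = a1**2 / (a1 - a2), a2**2 / (a2 - a1)   # nilpotent choice w_ij*

def rhs(t, x):
    x1, x2 = x
    return [a1*x1 + w12*(x2 - x1) - w*x2**3 + w*(x1 + x2)**3,
            a2*x2 + w21*(x1 - x2) + w*x2**3]

sol = solve_ivp(rhs, (0, 100), [0.3, -0.2], rtol=1e-10, atol=1e-12)
print(np.abs(sol.y[:, -1]))   # both components slowly approach the nilpotent origin
```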
The main conclusions of the previous analysis are summarized as follows.
**Proposition 1**: _Consider the 2-node network_
\[\dot{x}_{1} =a_{1}x_{1}+w_{12}(x_{2}-x_{1})-wx_{2}^{3}+w(x_{1}+x_{2})^{3}\] \[\dot{x}_{2} =a_{2}x_{2}+w_{21}(x_{1}-x_{2})+wx_{2}^{3},\]
_with parameters satisfying (3.3) so that the origin is nilpotent. If \(w=0\) then the origin is unstable. On the other hand, if \(a_{1}=-a_{2}=\alpha\) with \(\alpha>0\) and \(w<0\), then the origin is locally asymptotically stable._
The general case (3.1) quickly becomes analytically intractable. For example, it is quite difficult to find closed expressions for the parameters such that the origin is nilpotent. Nevertheless, we conjecture that the dynamics shall be reminiscent of what is shown in figure 2, and that as for the case \(N=2\), higher-order interactions may stabilize nilpotent points. A special case that is simple to analyze is considered in the following proposition.
**Proposition 2**: _Consider_
\[\dot{x}_{i}=a_{i}x_{i}+\sum_{j=1}^{N}w_{ij}(x_{j}-x_{i})+h_{i}(x),\]
_with \(h_{i}(0)=0\), \(h_{i}\in\mathcal{O}(|x|^{2})\), and with parameters \(a_{i}\in\mathbb{R}\), \(a_{i}\neq 0\), and \(w_{ij}\in\mathbb{R}\) such that the origin is nilpotent. Suppose that there exists a smooth function \(V:\mathbb{R}^{n}\rightarrow\mathbb{R}\) such that \(h(x)=(h_{1}(x),\ldots,h_{n}(x))=-\nabla V(x)\) and \(x=0\) is a (degenerate, since \(h_{i}\in\mathcal{O}(|x|^{2})\)) local minimum of \(V\). Then the origin is locally asymptotically stable._
Proof: System (3.1), together with the higher-order terms \(h\), can be rewritten as the gradient system \(\dot{x}=-\nabla W\) where \(W=\sum_{i=1}^{n}W_{i}+V\) with \(W_{i}=-(a_{i}-w_{ii})x_{i}^{2}-\sum_{j=1,\,j\neq i}^{n}w_{ij}x_{i}x_{j}\). The statement is now straightforward, noticing that the nilpotent assumption guarantees that the origin is a (degenerate) local minimum of \(W\). \(\quad\square\)
Figure 4: (a) Global phase-portrait of (3.6), where all equilibria are hyperbolic. Panels (b) and (c) show a simulation of (3.5) with \(a_{1}=1\), \(a_{2}=-1\), \(w_{ij}=w_{ij}^{*}\), \(w=-1/10\) and \(w=-2\) respectively.
**Remark 12**: _Proposition 2 concerns network dynamical systems for which the leading part is nilpotent and the higher-order terms are of gradient form. The proposition roughly tells us that near the nilpotent point, the system behaves as a gradient system, hence the higher-order terms, if chosen appropriately, stabilize the nilpotent origin. Naturally, many network dynamical systems do not admit a gradient structure. In particular, one can check that (3.5) is not of such a gradient form._
### An example of nilpotent internal dynamics
Consider the following system of weakly and diffusively coupled scalar nodes8
\[\dot{x}_{i}=a_{i}x_{i}^{3}(1-x_{i})+\epsilon\sum_{j=1}^{N}w_{ij}(x_{j}-x_{i}), \tag{3.7}\]
Footnote 8: This model can, alternatively, be studied with the tools developed in [18]. Here we present an analysis fitting the blow-up method.
where \(a_{i}\neq 0\), \(w_{ij}\in\mathbb{R}\), and for simplicity we assume that \(x_{i}(0)\in(0,1)\), and that \(w_{ii}=0\) for all \(i=1,\ldots,N\) (there are no self-couplings). Since for \(\epsilon=0\) the origin is nilpotent, we are going to use the blow-up technique to elucidate the stability of the origin for \(\epsilon\) small.
A node directional blow-up given by
\[(r,r\bar{x}_{1},\ldots,r\bar{x}_{i-1},r\bar{x}_{i+1},\ldots,r\bar{x}_{N},r^{2 }\bar{\epsilon})=(x_{i},x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{N},\epsilon)\]
leads, after desingularization, to
\[r^{\prime} =r\left(a_{i}+\bar{\epsilon}\sum_{j=1}^{N}w_{ij}(\bar{x}_{j}-1)-a_{i}r\right)\] \[\bar{x}_{k}^{\prime} =a_{k}\bar{x}_{k}^{3}+\bar{\epsilon}\sum_{j\neq i}^{N}w_{kj}(\bar{x}_{j}-\bar{x}_{k})+\bar{\epsilon}w_{ki}(1-\bar{x}_{k})-\bar{x}_{k}\left(a_{i}+\bar{\epsilon}\sum_{j=1}^{N}w_{ij}(\bar{x}_{j}-1)-a_{i}r\right)-ra_{k}\bar{x}_{k}^{4}\] \[\bar{\epsilon}^{\prime} =-2\bar{\epsilon}\left(a_{i}+\bar{\epsilon}\sum_{j=1}^{N}w_{ij}(\bar{x}_{j}-1)-a_{i}r\right),\]
\(k=1,\ldots,N\) with \(k\neq i\). In this model \(x_{i}\) is replaced by \(r\), hence one is interested in the stability of \(r=0\), which we recall corresponds to the origin via the blow-up transformation.
As is usual in the blow-up analysis, one is interested in the dynamics restricted to the invariant subspaces \(\{r=\bar{\epsilon}=0\}\), \(\{\bar{\epsilon}=0\}\) and \(\{r=0\}\) (although in this case, it suffices to take the last two subspaces). For \(\bar{\epsilon}=0\) we have
\[r^{\prime} =a_{i}r(1-r)\] \[\bar{x}_{k}^{\prime} =-a_{i}\bar{x}_{k}+a_{k}\bar{x}_{k}^{3}+r(a_{i}\bar{x}_{k}-a_{k} \bar{x}_{k}^{4})\] \[\bar{\epsilon}^{\prime} =0,\]
where it is straightforward to see, from the leading order terms, that if \(a_{l}<0\) for all \(l=1,\ldots,N\) we have that, locally, \((r,\bar{x}_{k})=(0,0)\) is a _hyperbolic saddle_ while \((r,\bar{x}_{k})=(0,\pm\sqrt{a_{i}/a_{k}})\) are _hyperbolic_ sinks, with Jacobians
\[J_{(0,0)}=\begin{bmatrix}a_{i}&0\\ 0&-a_{i}\end{bmatrix}\qquad\text{and}\qquad J_{\left(0,\pm\sqrt{a_{i}/a_{k}} \right)}=\begin{bmatrix}a_{i}&0\\ *&2a_{i}\end{bmatrix}\]
respectively. It will be useful for our arguments below to notice that these stability properties depend only on the \(a_{i}\) coefficient.
Since the above equilibria are hyperbolic after the blow-up, one can conclude that for \(\bar{\epsilon}>0\) small, the local stability properties of the equilibria are preserved.
In the \(\bar{\epsilon}\)-direction (the rescaling chart) we obtain the desingularized system
\[\dot{\bar{x}}_{i}=a_{i}\bar{x}_{i}^{3}(1-\bar{x}_{i})+\sum_{j=1}^{N}w_{ij}(\bar{x}_{j}-\bar{x}_{i}).\]
The leading order term is now the interaction. Therefore, the origin is not nilpotent anymore, but semi-hyperbolic. It follows that if all the weights \(w_{ij}\) are nonnegative, and the digraph is strongly connected, then the subspace \(\operatorname{span}\{(1,\ldots,1)\}\subset\mathbb{R}^{N}\) is locally attracting. The dynamics on such a space are thus given by the scalar equation
\[\dot{\bar{x}}=a_{i}\bar{x}^{3}(1-\bar{x}),\]
for which the origin is locally asymptotically stable provided that \(a_{i}<0\).
From our previous analysis, we have proven the following:
**Proposition 3**: _If \(a_{i}<0\), \(w_{ij}\geq 0\) for all \(i,j=1,\ldots,N\) and the underlying digraph of (3.7) is strongly connected, then the (nilpotent) origin of (3.7) is locally asymptotically stable for \(\varepsilon>0\) sufficiently small._
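The proposition is easy to probe numerically; in the sketch below the digraph (a directed 3-cycle) and all parameter values are illustrative choices satisfying the hypotheses.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, eps = 3, 0.05
a = np.array([-1.0, -0.5, -2.0])        # a_i < 0
W = np.array([[0.0, 1.0, 0.0],          # nonnegative weights, w_ii = 0,
              [0.0, 0.0, 1.0],          # strongly connected directed ring
              [1.0, 0.0, 0.0]])

def rhs(t, x):
    return a * x**3 * (1 - x) + eps * (W @ x - W.sum(axis=1) * x)

sol = solve_ivp(rhs, (0, 2000), [0.4, 0.6, 0.3], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])   # all components decay (algebraically) toward the origin
```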
### A slowly adaptive network
Let us consider an adaptive network of \(N\) homogeneous Kuramoto oscillators given by [2]
\[\dot{\phi}_{i} =\omega-\frac{1}{N}\sum_{j=1}^{N}\kappa_{ij}\sin(\phi_{i}-\phi_{ j}+\alpha)\] \[\dot{\kappa}_{ij} =-\varepsilon(\sin(\phi_{i}-\phi_{j}+\beta)+\kappa_{ij}).\]
Without loss of generality, one may consider \(\omega=0\) by changing to the co-rotating frame \(\phi_{i}\mapsto\phi_{i}+\omega t\). Thus, from now on we consider
\[\dot{\phi}_{i} =-\frac{1}{N}\sum_{j=1}^{N}\kappa_{ij}\sin(\phi_{i}-\phi_{j}+\alpha) \tag{3.8}\] \[\dot{\kappa}_{ij} =-\varepsilon(\sin(\phi_{i}-\phi_{j}+\beta)+\kappa_{ij}).\]
A one-cluster solution is defined as \(\phi_{i}(t)=s(t)+a_{i}\) for some constant \(a_{i}\in[0,2\pi)\), \(i=1,\ldots,N\). Depending on the phases \(a_{i}\) the aforementioned one-cluster solutions receive different names, see [2, Definition 2.3]. Upon substitution of the one-cluster solution in (3.8) one finds that \(\dot{s}=\)constant, meaning that one-cluster solutions are, in fact, of the form \(\phi_{i}(t)=\Omega t+a_{i}\), with some \(\Omega\in\mathbb{R}\). It is well-known that the one-cluster solutions with \(\Omega=\Omega^{*}:=\frac{1}{N}\sum_{j}\sin(a_{i}-a_{j}+\alpha)\sin(a_{i}-a_{j}+\beta)\), together with \(\kappa_{ij}(t)=-\sin(a_{i}-a_{j}+\beta)\) form a set of relative equilibria of (3.8). In other words, consider the co-rotating coordinates \(\psi_{i}=\phi_{i}-(\Omega^{*}t+a_{i})\), and the shifted weights \(\sigma_{ij}=\kappa_{ij}+\sin(a_{i}-a_{j}+\beta)\). Then (3.8) is
re-written as
\[\dot{\psi}_{i} =-\Omega^{*}+\frac{1}{N}\sum_{j=1}^{N}\sin(a_{i}-a_{j}+\beta)\sin( \psi_{i}+a_{i}-\psi_{j}-a_{j}+\alpha) \tag{3.9}\] \[\quad-\frac{1}{N}\sum_{j=1}^{N}\sigma_{ij}\sin(\psi_{i}+a_{i}- \psi_{j}-a_{j}+\alpha)\] \[\dot{\sigma}_{ij} =-\varepsilon(\sin(\psi_{i}+a_{i}-\psi_{j}-a_{j}+\beta)-\sin(a_{i }-a_{j}+\beta)+\sigma_{ij})\]
It follows, indeed, that \((\psi_{i}^{*},\sigma_{ij}^{*})=(0,0)\) is an equilibrium of (3.9). It turns out that this equilibrium is non-hyperbolic from a slow-fast perspective. Moreover, for certain parameters, one-cluster solutions can be nilpotent.
**Proposition 4**: _Consider the layer equation of (3.9). Then the equilibrium point \((\psi_{i}^{*},\sigma_{ij}^{*})=(0,0)\) is non-hyperbolic. In particular, the antipodal solution, which corresponds to \(a_{i}\in\{0,\pi\}\), is nilpotent for \(\beta=0\)._
**Remark 13**: _These are not the only nilpotent solutions. In fact, for \(\alpha=\beta=0\) all one-cluster solutions in [2, Corollary 4.3] are nilpotent. In this example, however, we only concentrate on the antipodal case._
Proof: Since we are looking at the layer equation of (3.9), we only need to consider the Jacobian \(J=[J_{ij}]_{i,j=1,\ldots,N}\) where \(J_{ij}=\dfrac{\partial\dot{\psi}_{i}}{\partial\psi_{j}}\Big{|}_{\psi_{i}=\psi_{j}=\sigma_{ij}=0}\). So, we have
\[J_{ii} =\frac{1}{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\sin(a_{i}-a_{j}+\beta)\cos(a_{i}-a_{j}+\alpha)\] \[J_{ij} =-\frac{1}{N}\sin(a_{i}-a_{j}+\beta)\cos(a_{i}-a_{j}+\alpha),\qquad j\neq i.\]
Since \(J_{ii}+\sum_{j\neq i}^{N}J_{ij}=0\), it follows that \(\ker J\) is non-trivial (this is reminiscent, of course, of Laplacian matrices), showing that the equilibrium point \((\psi_{i}^{*},\sigma_{ij}^{*})=(0,0)\) is non-hyperbolic. Finally, if \(\beta=0\) and \(a_{i}\in\{0,\pi\}\) we have that \(J=0\), i.e., the equilibrium point \((\psi_{i}^{*},\sigma_{ij}^{*})=(0,0)\) is nilpotent. \(\quad\square\)
The question one would now like to answer is: what is the stability of the antipodal solutions for \(\epsilon,\,\beta\) small? To answer this question we are going to use the blow-up technique. We remark that the blow-up technique is a local tool, hence, it is in principle applied to polynomial vector fields. Here we shall simply expand (3.9) for \((\psi_{i},\sigma_{ij})\) close to the origin. Let \(a_{i}\in\{0,\pi\}\). Up to the constant shift above, we can assume without loss of generality that \(a_{i}=0\) for all \(i=1,\ldots,N\). Then (3.9) reads as
\[\dot{\psi}_{i} =-\Omega^{*}+\frac{1}{N}\sum_{j=1}^{N}\sin\beta\sin(\psi_{i}-\psi _{j}+\alpha)-\frac{1}{N}\sum_{j=1}^{N}\sigma_{ij}\sin(\psi_{i}-\psi_{j}+\alpha)\] \[\dot{\sigma}_{ij} =-\epsilon(\sin(\psi_{i}-\psi_{j}+\beta)-\sin\beta+\sigma_{ij}).\]
Expanding \(\sin(\psi_{i}-\psi_{j}+\rho)\) for \(\psi_{i}\approx\psi_{j}\), i.e. \((\sin(\psi_{i}-\psi_{j}+\rho)=\sin\rho+\cos\rho(\psi_{i}-\psi_{j})+\cdots)\), with \(\rho=\alpha,\beta\), and accounting for the fact that in this setup we have \(-\Omega^{*}+\sin\alpha\sin\beta=0\), we get
\[\dot{\psi}_{i} =\frac{1}{N}\sum_{j=1}^{N}\left[\sin\beta\cos\alpha(\psi_{i}-\psi_{j})-\sigma_{ij}(\sin\alpha+\cos\alpha(\psi_{i}-\psi_{j}))\right]+\cdots \tag{3.10}\] \[\dot{\sigma}_{ij} =-\epsilon\left(\cos\beta(\psi_{i}-\psi_{j})+\sigma_{ij}+\cdots\right),\]
where the \(\cdots\) indicate higher-order terms. Naturally, the origin \((\psi_{i},\sigma_{ij})=(0,0)\) is (still) nilpotent for \(\epsilon=\beta=0\), but now it is straightforward to notice that \(\alpha=\pm\frac{\pi}{2}\) also renders the origin nilpotent (this case is not further discussed in this example). For the rest of this example, we only consider the leading part of (3.10).
Let us propose the \(\beta\)-directional blow-up (in the rescaling chart) given by10
\[\psi_{i}=ru_{i},\quad\sigma_{ij}=rs_{ij},\quad\alpha=rA,\quad\beta=\pm r,\quad\epsilon=r. \tag{3.11}\]
Footnote 10: Notice that this is equivalent to first going to the rescaling chart and then blowing up in the \(\pm\beta\)-direction.
This leads to the desingularized vector field (where we take only the leading order terms of the Taylor expansion of the trigonometric functions)
\[\dot{r} =0\] \[\dot{u}_{i} =\frac{1}{N}\sum_{j=1}^{N}\pm(u_{i}-u_{j})-s_{ij}(A+(u_{i}-u_{j})) \tag{3.12}\] \[\dot{s}_{ij} =-((u_{i}-u_{j})+s_{ij}).\]
Now, the following conclusions on the local stability of the line of equilibria \(u_{i}=u_{j}\), \(s_{ij}=0\), which we denote by \(\gamma\), are straightforward11: a) if \(\beta>0\), corresponding to the plus sign in (3.12), then \(\gamma\) cannot be stable; b) if \(\beta<0\), corresponding to the negative sign in (3.12), then \(\gamma\) is locally stable for \(A<1\) and unstable for \(A>1\). From this analysis, and recalling the blow-up map (3.11), the following statement is proven (compare with [2, figure 4.2 (b)]):
**Proposition 5**: _The antipodal solutions of (3.8) are locally asymptotically stable for \(\beta<-\alpha\) and \(\epsilon>0\) sufficiently small._
Footnote 11: It suffices to consider the leading part \(\dot{u}_{i}=\frac{\pm 1+A}{N}\sum_{j=1}^{N}(u_{i}-u_{j})\).
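Proposition 5 can also be probed by direct simulation of (3.8). In the sketch below the parameters satisfy \(\beta<-\alpha\) with \(\epsilon\) small; the perturbation size, integration time and random seed are illustrative choices, and the check is numerical rather than rigorous.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, alpha, beta, eps = 4, 0.05, -0.1, 0.01       # beta < -alpha, eps small
a = np.array([0.0, np.pi, 0.0, np.pi])          # antipodal phases
kappa_star = -np.sin(a[:, None] - a[None, :] + beta)

def rhs(t, y):
    phi, kappa = y[:N], y[N:].reshape(N, N)
    d = phi[:, None] - phi[None, :]
    dphi = -(kappa * np.sin(d + alpha)).sum(axis=1) / N
    dkappa = -eps * (np.sin(d + beta) + kappa)
    return np.concatenate([dphi, dkappa.ravel()])

rng = np.random.default_rng(0)
y0 = np.concatenate([a + 0.05 * rng.standard_normal(N),
                     (kappa_star + 0.05 * rng.standard_normal((N, N))).ravel()])
sol = solve_ivp(rhs, (0, 3000), y0, rtol=1e-9)
phi = sol.y[:N, -1]
print(np.round(np.mod(phi - phi[0], 2*np.pi), 3))   # relative phases relock near (0, pi, 0, pi)
```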
## 4 Conclusions and Discussion
We have studied the problem of structure preservation under the blow-up transformation of network dynamical systems, including adaptive ones. This transformation is a useful tool to rigorously analyze the dynamics of differential equations close to nilpotent equilibria. The essential philosophy of the method is to transform a singular problem into a regular one.
The main conclusion from our analysis is that, indeed, the blow-up transformation preserves the network structure. This means that the local vector fields obtained after a directional blow-up can still be interpreted as a network dynamical system. The precise interpretation turns out to depend on the direction of blow-up and the model at hand. Moreover, via a series of examples, we have shown that (as in the classical low-dimensional setting) the blow-up helps to desingularize nilpotent singularities.
Besides structure preservation, we have noticed that the blow-up induces parameters to become dynamic. Nevertheless, such dynamics seem to occur at higher orders, which could yield another natural way to uncover higher-order interactions in networks. In the particular case of edge-directional blow-up, including the case of adaptive networks, the main message is that singular networks with static topology are regularized as network dynamical systems with co-evolving edges; hence, this provides a route to potentially uncover adaptive network dynamics.
Several open problems stem from the analysis presented here, from which we highlight the following:
1.The problem of desingularization has been addressed here only in the examples. It would be interesting to relate the network topology and the nature of internal dynamics, interaction, and possible adaptation, to better describe the desingularization procedure via blow-up. In particular, one may want to study classes of networked systems where the corresponding vector field is quasi-homogeneous.
2.For high-dimensional problems, it may become unrealistic to blow-up in all possible directions. This raises new directions for improving and better adapting the blow-up method for networks: a) one may want to blow-up in several directions simultaneously, b) one may want to devise techniques that distinguish between regular and singular directions, or c) one may want to develop symbolic computational tools that automatically provide the local vector fields in the relevant charts where the dynamics have been desingularized.
3.Choosing the blow-up weights is, even for low dimensions, quite challenging. In the context of networks, it is not unfeasible to expect that the topology of the network induces, or forces, some patterns in the blow-up weights. For example, the analysis carried out in section 2 suggests that, due to the network structure, all nodes are to be blown up with the same weight, provided that the linear part is not identically zero. In this regard, it is known that the quasi-homogeneous blow-up for planar systems can be related to a Newton polyhedron, see [1]. In higher dimensions, the quasihomogeneous blow-up should naturally be related to a polytope. It would be interesting to see if there is a relation between the blow-up polytope and the topology of the network to be desingularized.
4. A way to describe the dynamics of large networks is via mean field limits, which essentially provides a low-dimensional averaged description (e.g. via a PDE, but in certain cases even a low-dimensional system of ODEs) of the network. It would be interesting to investigate if a desingularization of a mean-field limit corresponds, in any way, to the desingularization of the corresponding large, but finite, network. |
2304.05324 | Nonclassicality of photon-added-then-subtracted and
photon-subtracted-then-added states | We formulate the density matrices of a quantum state obtained by first adding
multi-photons to and then subtracting multi-photons from any arbitrary state as
well as performing the same process in the reverse order. Considering the field
to be initially in a thermal (or in an even coherent) state, we evaluate the
photon number distribution, Wigner function and Mandel's $Q$ parameter of the
resulting field. We show graphically that the order in which multi-photons are
added and subtracted has a noticeable effect on the temporal behavior of these
statistical properties. | Arpita Chatterjee | 2023-04-11T16:30:15Z | http://arxiv.org/abs/2304.05324v1 | # Nonclassicality of photon-added-then-subtracted and photon-subtracted-then-added states
###### Abstract
We formulate the density matrices of a quantum state obtained by first adding multi-photons to and then subtracting multi-photons from any arbitrary state as well as performing the same process in the reverse order. Considering the field to be initially in a thermal (or in an even coherent) state, we evaluate the photon number distribution, Wigner function and Mandel's \(Q\) parameter of the resulting field. We show graphically that the order in which multi-photons are added and subtracted has a noticeable effect on the temporal behavior of these statistical properties.
pacs: 42.50.-p, 42.50.Ct, 42.50.Pq
## I Introduction
The non-commutativity between the annihilation (\(a\)) and creation (\(a^{\dagger}\)) operators has long been a field of interest in quantum mechanics. Due to this non-commutativity of bosonic operators, simple alternated sequences of adding and subtracting identical particles to any quantum system show different results. Agarwal and Tara [1] first proposed a method for producing the photon-added coherent state. Another way of creating a photon-added or photon-subtracted state is through a beam-splitter [2]. Dakna [3] showed that if an arbitrary initial state and a Fock state are injected at the two input channels, then photon number counting at the output Fock-state channel reduces the other output channel to a corresponding photon-added or photon-subtracted state. In addition, a cavity-QED based technique was theoretically discussed by Sun _et al._[4]. Conditioned on sending two atoms one by one at the considered levels and detecting them only if they end at the desired levels, they verified that \(a\) and \(a^{\dagger}\) do not commute.
Recently, Parigi _et al._[5] successfully demonstrated an experimental set-up to observe the effect of adding or subtracting a single photon to or from a completely classical and fully incoherent thermal light field. By applying alternated sequences of the creation and annihilation operators they realized that the resulting states depend on the order in which the two quantum operators have been applied. The same group also implemented a single-photon interferometer to achieve a direct proof of the bosonic commutation relation [6]. Besides testing the non-commutativity of bosonic operators, this adding-subtracting phenomenon is of great significance because excitation by a definite number of photons can turn any classical field into a nonclassical one [7]. On the other hand, annihilation of a quantum state not only produces non-classicality but is able to convert Gaussian states into non-Gaussian ones [8]. Non-Gaussian states are known to provide useful resources for tasks such as entanglement distillation [9] and noiseless amplification [10]. In recent times, successful experiments have been proposed by Zavatta _et al._[11, 12, 13] to manipulate photon subtraction from or photon addition to a light beam via simple optical processes such as beam splitters, frequency down-conversion and homodyne detection. These experimental successes have made possible the generation of nonclassical states which have many real-life applications. For example, squeezed states are used to reduce the noise level in one of the phase-space quadratures below the quantum limit [14], entangled states are employed to realize quantum computers and to transfer quantum information [15], etc. In fact, besides performing quantum communication, it has been experimentally shown that entanglement can be enhanced by subtracting a photon from one of the two modes of a two-mode squeezed state [9]. Here we concentrate on the behavioral changes of the nonclassicality of quantum states after applying multi-photon operations in different orders.
It is interesting to notice that theoreticians as well as experimentalists have investigated single- or two-photon addition and subtraction extensively, but less attention has been paid to multi-photon operations. For example, Marek _et al._[16] generated squeezed superpositions of coherent states by applying the two-photon subtraction (\(a^{2}\)) or the photon subtraction and addition (\(a^{\dagger}a\)) combination to a squeezed vacuum state. In this paper, however, we are interested in the results of applying an arbitrary \(p\)-photon addition and \(q\)-photon subtraction. Our multi-photon scheme can be realized in a quantum optics laboratory as the initial thermal (even coherent) field contains a very small number of photons [17]. Yang Yang _et al._[18] investigated the nonclassicality of a single-photon-subtracted Gaussian state as well as a photon-added-then-subtracted thermal state. They used nonclassical depth as a measure of nonclassicality and observed a strong correlation between the nonclassicality of the radiation field and the photon addition-subtraction process. They reported that the states generated by first
adding (subtracting) multi-photons to an arbitrary state and then subtracting (adding) multi-photons from the resulting state are certainly nonclassical if the number of added photons is equal to or larger than the number of subtracted photons. It has been pointed out that the photon-added-then-subtracted state is nonclassical irrespective of the initial state. But their approach is restricted to the nonclassical depth criterion [19] only. We here employ other popular tools for examining the nonclassicality of the multi-photon cases.
This paper is structured as follows: we describe the density matrices for added-then-subtracted and subtracted-then-added quantum states in Sec. II. Sec. III concerns finding various distributions of the thermal field after photon excitation and de-excitation processes. In Sec. IV, we study the same properties for the even coherent state. The last section ends with a summary of the main results of this article.
## II General theory
Here we compare the nonclassicality of the added-then-subtracted and subtracted-then-added states. The density matrix of an arbitrary quantum state of the single-mode radiation field can be expanded in terms of photon number states as
\[\hat{\rho}=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\rho(m,n)|m\rangle\langle n|. \tag{1}\]
For a given density matrix \(\hat{\rho}\) of the single-mode radiation field, the state generated by first adding \(p\) photons and then subtracting \(q\) photons may be written as
\[\hat{\rho}^{(sa)}=N_{1}a^{q}a^{\dagger p}\hat{\rho}a^{p}a^{\dagger q}, \tag{2}\]
where \(N_{1}\) is the normalization constant for the density operator. Substituting (1) into (2), we obtain
\[\hat{\rho}^{(sa)}=N_{1}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \rho(m,n)\frac{(m+p)!}{\sqrt{m!}\sqrt{(m+p-q)!}}\] \[\times|m+p-q\rangle\langle n+p-q|\frac{(n+p)!}{\sqrt{n!}\sqrt{(n +p-q)!}}. \tag{3}\]
Next we consider just the reverse process of Eq. (2), i.e., first subtracting \(q\) photons and then adding \(p\) photons to the initial state. The finally generated state is
\[\hat{\rho}^{(as)}=N_{2}\sum_{m=q}^{\infty}\sum_{n=q}^{\infty}\rho (m,n)\frac{\sqrt{m!}\sqrt{(m+p-q)!}}{(m-q)!}\] \[\times|m+p-q\rangle\langle n+p-q|\frac{\sqrt{n!}\sqrt{(n+p-q)!}}{ (n-q)!}. \tag{4}\]
We can derive any property of the final field from these density operators.
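For concreteness, the matrix elements in Eqs. (3) and (4) can be evaluated numerically. The following minimal NumPy sketch is our own illustration, not part of the original derivation; it builds both density matrices from an arbitrary input state in a truncated Fock basis, and the truncation dimension is an assumption of the sketch that should be kept modest so the factorials stay representable.

```python
import numpy as np
from math import factorial

def added_then_subtracted(rho, p, q):
    """rho^(sa) of Eq. (3): add p photons, then subtract q, then renormalize."""
    dim = rho.shape[0]
    out = np.zeros_like(rho, dtype=complex)
    for m in range(dim):
        for n in range(dim):
            i, j = m + p - q, n + p - q            # shifted Fock indices
            if 0 <= i < dim and 0 <= j < dim:      # truncation cut-off
                cm = factorial(m + p) / np.sqrt(factorial(m) * factorial(i))
                cn = factorial(n + p) / np.sqrt(factorial(n) * factorial(j))
                out[i, j] += cm * cn * rho[m, n]
    return out / np.trace(out)                      # N1 fixed by Tr = 1

def subtracted_then_added(rho, p, q):
    """rho^(as) of Eq. (4): subtract q photons first, then add p."""
    dim = rho.shape[0]
    out = np.zeros_like(rho, dtype=complex)
    for m in range(q, dim):                         # sums start at m, n = q
        for n in range(q, dim):
            i, j = m + p - q, n + p - q
            if i < dim and j < dim:
                cm = np.sqrt(factorial(m) * factorial(i)) / factorial(m - q)
                cn = np.sqrt(factorial(n) * factorial(j)) / factorial(n - q)
                out[i, j] += cm * cn * rho[m, n]
    return out / np.trace(out)                      # N2 fixed by Tr = 1
```

For a thermal input, for instance, `rho` can be taken as the diagonal matrix of Eq. (5) below, truncated to the same dimension.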
## III Thermal state
For the initial thermal field the density operator is [4]
\[\hat{\rho}_{\rm th}=\sum_{n=0}^{\infty}\frac{\bar{n}^{n}}{(1+\bar{n})^{1+n}}| n\rangle\langle n|, \tag{5}\]
where \(\bar{n}\) is the mean photon number of the thermal state. Using formulas (3) and (4), the final density operators for the two sequences are
\[\hat{\rho}_{\rm th}^{(sa)}=N_{1}\sum_{n=0}^{\infty}\frac{\bar{n}^ {n}}{(1+\bar{n})^{1+n}}\frac{((n+p)!)^{2}}{n!(n+p-q)!}\] \[\times|n+p-q\rangle\langle n+p-q|, \tag{6}\]
and
\[\hat{\rho}_{\rm th}^{(as)}=N_{2}\sum_{n=q}^{\infty}\frac{\bar{n}^ {n}}{(1+\bar{n})^{1+n}}\frac{n!(n+p-q)!}{((n-q)!)^{2}}\] \[\times|n+p-q\rangle\langle n+p-q|. \tag{7}\]
The analytical expressions for \(N_{1}\) and \(N_{2}\) are respectively
\[N_{1}=\left\{\begin{array}{ll}\frac{(1+\bar{n})\,(p-q)!}{(p!)^{2}\;{}_{2}F_{1}(1+p,1+p;1+p-q;\frac{\bar{n}}{1+\bar{n}})},&p-q\geq 0\\ \\ \frac{(1+\bar{n})}{\sum_{n=0}^{\infty}\left(\frac{\bar{n}}{1+\bar{n}}\right)^{n}\frac{((n+p)!)^{2}}{n!\,(n+p-q)!}},&p-q<0\end{array}\right. \tag{8}\]
and
\[N_{2}=\frac{(1+\bar{n})\Big{(}\frac{\bar{n}}{1+\bar{n}}\Big{)}^{-q}}{p!\;{}_{ 2}F_{1}(1+q,1+p;1;\frac{\bar{n}}{1+\bar{n}})}, \tag{9}\]
in which \({}_{P}F_{Q}\) is the Generalized Hypergeometric function.
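As a quick numerical sanity check (an illustration we add here, assuming \(p-q\geq 0\) for the closed form), the hypergeometric expression for \(N_{1}\) in Eq. (8) can be compared against a direct term-by-term evaluation of the trace of Eq. (6):

```python
import numpy as np
from math import factorial
from scipy.special import hyp2f1

def N1_closed(nbar, p, q):
    """Closed form of Eq. (8), valid for p - q >= 0."""
    x = nbar / (1 + nbar)
    return (1 + nbar) * factorial(p - q) / (
        factorial(p) ** 2 * hyp2f1(1 + p, 1 + p, 1 + p - q, x))

def N1_brute(nbar, p, q, nmax=200):
    """(1 + nbar) divided by the trace-sum of Eq. (6); valid for any p, q."""
    x = nbar / (1 + nbar)
    s = 0.0
    for n in range(max(0, q - p), nmax):
        # ((n+p)!)^2 / (n! (n+p-q)!) as a product of two exact integer ratios
        ratio = (factorial(n + p) // factorial(n)) \
            * (factorial(n + p) // factorial(n + p - q))
        s += x ** n * ratio
    return (1 + nbar) / s

print(N1_closed(0.25, 4, 2), N1_brute(0.25, 4, 2))  # the two should agree
```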
### Photon Number Distribution
For a given thermal field, the probabilities of finding \(n\) photons in states (6) and (7) are respectively
\[p_{\rm th}^{(sa)}(n)=\frac{N_{1}}{(1+\bar{n})}\left(\frac{\bar{n}}{1+\bar{n}} \right)^{n-p+q}\frac{((n+q)!)^{2}}{n!(n-p+q)!}, \tag{10}\]
and
\[p_{\rm th}^{(as)}(n)=\frac{N_{2}}{(1+\bar{n})}\left(\frac{\bar{n}}{1+\bar{n}} \right)^{n-p+q}\frac{n!(n-p+q)!}{((n-p)!)^{2}}. \tag{11}\]
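The distributions (10) and (11) are straightforward to tabulate. A small sketch follows (our own illustration; the cut-off `nmax` is an assumption, and normalizing the array numerically fixes \(N_{1}\) or \(N_{2}\) without evaluating the closed forms):

```python
import numpy as np
from math import factorial

def pnd_thermal(nbar, p, q, order="sa", nmax=60):
    """Photon number distribution of Eq. (10) (order='sa') or Eq. (11)
    (order='as') for an initially thermal field, normalized numerically."""
    x = nbar / (1 + nbar)
    pn = np.zeros(nmax)
    for n in range(nmax):
        k = n - p + q                 # exponent n - p + q in Eqs. (10)-(11)
        if k < 0 or (order == "as" and n < p):
            continue                  # vanishing Fock components
        if order == "sa":
            pn[n] = x ** k * factorial(n + q) ** 2 / (factorial(n) * factorial(k))
        else:
            pn[n] = x ** k * factorial(n) * factorial(k) / factorial(n - p) ** 2
    return pn / pn.sum()
```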
In Fig. 1, we show how the photon number distribution changes with the photon number \(n\) for different excitation and de-excitation parameters. In general, with increasing \(p\) and \(q\) the peak moves towards the right and becomes wider for both the added-then-subtracted and subtracted-then-added distributions. That means adding and subtracting photons shifts the peak from zero to nonzero photon numbers. We further notice that \(p_{\rm th}^{(as)}(n)\) possesses a narrower distribution compared to \(p_{\rm th}^{(sa)}(n)\) [see Figs. 1(a)-(c) and Figs. 1(d)-(f)].
### Wigner Distribution
For an optical field in the state \(\hat{\rho}\), the Wigner function is defined as [20]
\[W(\beta,\beta^{*})\] \[= \frac{2}{\pi^{2}}e^{2|\beta|^{2}}\int\left\langle-\gamma|\hat{\rho }|\gamma\right\rangle\exp[-2(\beta^{*}\gamma-\beta\gamma^{*})]d^{2}\gamma, \tag{12}\]
where \(|\gamma\rangle\) is a coherent state. In particular, a simple calculation via (12) results the Wigner distribution for the initial thermal state as
\[W_{\rm th}(\beta,\beta^{*})=\frac{2}{\pi(1+2\bar{n})}\exp\left(-\frac{2|\beta |^{2}}{1+2\bar{n}}\right)\!, \tag{13}\]
which is clearly Gaussian. The Wigner functions for photon-added-then-subtracted and photon-subtracted-then-added thermal states are respectively
\[W_{\rm th}^{(sa)}(\beta,\beta^{*}) = \frac{2N_{1}}{\pi}e^{-2|\beta|^{2}}\frac{(4|\beta|^{2})^{p-q}}{(1+ \bar{n})}\sum_{n=0}^{\infty}\left\{\frac{(n+p)!}{(n+p-q)!}\right\}^{2} \tag{14}\] \[\times\frac{\left\{\left(\frac{\bar{n}}{1+\bar{n}}\right)(4|\beta |^{2})\right\}^{n}}{n!},\]
and
\[W_{\rm th}^{(as)}(\beta,\beta^{*}) = \frac{2N_{2}}{\pi}e^{-2|\beta|^{2}}\frac{(4|\beta|^{2})^{p-q}}{(1 +\bar{n})}\sum_{n=q}^{\infty}\left\{\frac{n!}{(n-q)!}\right\}^{2} \tag{15}\] \[\times\frac{\left\{\left(\frac{\bar{n}}{1+\bar{n}}\right)(4|\beta |^{2})\right\}^{n}}{n!}.\]
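To reproduce surfaces such as those in Fig. 2, the series in Eqs. (14) and (15) can be truncated and evaluated on a grid. A minimal sketch for \(W_{\rm th}^{(sa)}\) follows (our own illustration; \(N_{1}\) is assumed to be supplied, e.g. from the earlier normalization sketch, and for \(p<q\) the origin of the grid must be avoided because of the factor \((4|\beta|^{2})^{p-q}\)):

```python
import numpy as np
from math import factorial

def wigner_th_sa(beta, nbar, p, q, N1, nmax=80):
    """W_th^(sa)(beta) of Eq. (14), with the series truncated at nmax terms;
    terms with a negative factorial argument (n + p - q < 0) vanish."""
    x = nbar / (1 + nbar)
    b2 = 4.0 * abs(beta) ** 2
    s = sum((factorial(n + p) // factorial(n + p - q)) ** 2
            * (x * b2) ** n / factorial(n)
            for n in range(max(0, q - p), nmax))
    return (2 * N1 / np.pi) * np.exp(-2 * abs(beta) ** 2) \
        * b2 ** (p - q) / (1 + nbar) * s

# e.g. on a grid: W = [[wigner_th_sa(xr + 1j * xi, 0.04, 4, 12, N1)
#                       for xr in grid] for xi in grid]
```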
Fig. 2 elaborates the Wigner distributions in phase space for several combinations of \(p\) and \(q\). It is clear that cycling photons in alternate orders causes markedly different changes to the field. \(W_{\rm th}^{(sa)}(\beta,\beta^{*})\) is positive everywhere, but the Gaussian peak gradually transforms into a central dip as \((p,q)\) increases. Here addition of a larger number of photons, keeping \(q\) fixed, leads to a deeper region of the Wigner function [see Figs. 2(b)-(c)]. That means adding more photons in the photon-added-then-subtracted method can prepare a classical non-Gaussian state [21]. In Figs. 2(d)-(f), \(W_{\rm th}^{(as)}(\beta,\beta^{*})\) is plotted for (1,1), (2,4) and (2,6) respectively. For this reverse process, the dip at the central position shrinks with increasing \(p\) and \(q\). When \(p\) is fixed, the midway hole loses its depth as the annihilation number increases. It should be noted that the Wigner function obtained by first adding one photon and then subtracting one photon [Fig. 2(a)] from an initial thermal state remarkably differs in character from the Wigner function after the one-photon-subtracted-then-added process [Fig. 2(d)]. The different results for these two alternate sequences with the same \((p,\,q)\) establish the non-commutativity between \(a\) and \(a^{\dagger}\).
### Mandel's \(Q\) Parameter
Next to determine the photon statistics of a single-mode radiation field we consider the Mandel's \(Q\) parameter defined by [22]
\[Q=\frac{\left\langle{a^{\dagger}}^{2}a^{2}\right\rangle}{\left\langle{a^{ \dagger}}a\right\rangle}-\left\langle{a^{\dagger}}a\right\rangle, \tag{16}\]
which measures the deviation of the variance of the photon number distribution of the considered state from the Poissonian distribution of the coherent state. \(Q=0\) stands for Poissonian distribution, while for
Figure 1: (Color online) Photon number distribution of photon-added-then-subtracted (upper line) and photon-subtracted-then-added (lower line) thermal field is plotted against \(n\) for \(\bar{n}=0.25\) and (a) \(p=q=2\), (b) \(p=4\), \(q=2\), (c) \(p=8\), \(q=6\), (d) \(p=q=2\), (e) \(p=4\), \(q=2\) and (f) \(p=8\), \(q=6\).
\(Q<0\) (\(Q>0\)), the field obeys sub- (super-) Poissonian photon statistics. But the negativity of \(Q\) is not a necessary condition to distinguish quantum states into nonclassical and classical regimes, just a sufficient one. For example, a state may be nonclassical even though \(Q\) is positive [1]. Using (16), one can easily calculate
\[Q_{\rm th}^{(sa)} = \frac{(p-q-1)\ _{2}F_{1}(1+p,1+p;p-q-1;\frac{\bar{n}}{1+\bar{n}})}{ _{2}F_{1}(1+p,1+p;p-q;\frac{\bar{n}}{1+\bar{n}})} \tag{17}\] \[-\frac{(p-q)\ _{2}F_{1}(1+p,1+p;p-q;\frac{\bar{n}}{1+\bar{n}})}{ _{2}F_{1}(1+p,1+p;1+p-q;\frac{\bar{n}}{1+\bar{n}})}\]
and
\[Q_{\rm th}^{(as)} = \frac{(p-1)\ _{3}F_{2}(1+p,1+p,1+q;1,p-1;\frac{\bar{n}}{1+\bar{n}})}{ _{3}F_{2}(1+p,1+p,1+q;1,p;\frac{\bar{n}}{1+\bar{n}})} \tag{18}\] \[-\frac{p\ _{3}F_{2}(1+p,1+p,1+q;1,p;\frac{\bar{n}}{1+\bar{n}})}{ _{2}F_{1}(1+q,1+p;1;\frac{\bar{n}}{1+\bar{n}})}\]
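The closed forms (17) and (18) can be cross-checked directly from the photon number distributions, since Eq. (16) only involves the first two factorial moments. A brief sketch (our own illustration, reusing the `pnd_thermal` function given earlier):

```python
import numpy as np

def mandel_Q(pn):
    """Mandel's Q of Eq. (16) from a photon number distribution p(n)."""
    n = np.arange(len(pn))
    mean_n = np.sum(n * pn)              # <a† a>
    mean_n2 = np.sum(n * (n - 1) * pn)   # <a†² a²> = <n(n-1)>
    return mean_n2 / mean_n - mean_n

# Q_sa = mandel_Q(pnd_thermal(0.25, 4, 2, order="sa"))   # vs. Eq. (17)
# Q_as = mandel_Q(pnd_thermal(0.25, 4, 2, order="as"))   # vs. Eq. (18)
```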
In order to see the variation of the \(Q\) parameter against the mean photon number \(\bar{n}\), Mandel's \(Q\) is plotted as a function of \(\bar{n}\) in Fig. 3. Within the range \((-1,\ 1/2)\), \(Q\) increases monotonically with \(\bar{n}\), regardless of the \((p,\ q)\) values. Both the added-then-subtracted and subtracted-then-added \(Q\) curves are partially negative and partially positive, which indicates that fields exhibiting a sub-Poissonian character obey a super-Poissonian distribution beyond a certain limit of \(\bar{n}\). For fixed \(q\), as depicted in Fig. 3(a), the values of \(Q_{\rm th}^{(sa)}\) decrease as \(p\) increases. But in Fig. 3(b), where the number of additions is fixed and the number of subtractions is varied, \(Q\) increases as \(q\) increases. It seems that increasing the creation (annihilation) number may produce stronger (weaker) sub-Poissonian statistics. On the contrary, Fig. 3(c) and Fig. 3(d) make it clear that adding more photons to a subtracted-then-added thermal field (keeping \(q\) fixed) creates a weaker super-Poissonian distribution, and subtracting more photons from a subtracted-then-added thermal field (keeping \(p\) fixed) creates a stronger super-Poissonian distribution. It is to be pointed out that, in general, \(Q_{\rm th}^{(sa)}\) reaches the Poissonian level (\(Q=0\)) more rapidly than \(Q_{\rm th}^{(as)}\).
Figure 3: (Color online) Mandel’s \(Q\) parameter for photon-added-then-subtracted thermal state (upper row) and photon-subtracted-then-added thermal state (lower row) as a function of mean photon number with different \((p,\ q)\)’s.
Figure 2: (Color online) Wigner distribution of photon-added-then-subtracted (upper line) and photon-subtracted-then-added (lower line) thermal state as a function of \({\rm Re}(\beta)\) and \({\rm Im}(\beta)\) for \(\bar{n}=0.04\) and (a) \(p=q=1\), (b) \(p=4\), \(q=12\), (c) \(p=8\), \(q=12\), (d) \(p=q=1\), (e) \(p=2\), \(q=4\) and (f) \(p=2\), \(q=6\).
## IV Even coherent state
The even coherent state [ECS] is defined as a superposition of two coherent states as [23]
\[|\psi\rangle_{\text{ECS}}=\frac{1}{(2+2e^{-2|\alpha|^{2}})^{1/2}}(|\alpha\rangle +|-\alpha\rangle), \tag{19}\]
where \(|-\alpha\rangle\) has the same amplitude as \(|\alpha\rangle\) but with a phase shift of \(\pi\). When \(|\alpha|\) is as small as \(2\), \(|\langle\alpha|-\alpha\rangle|^{2}\approx 0\)[24]. Assuming (19) as the initial state, the density operators for the photon-added-then-subtracted and photon-subtracted-then-added even coherent states are respectively
\[\hat{\rho}_{\text{ECS}}^{(sa)}=N_{3}a^{q}a^{\dagger p}\hat{\rho}_{\text{ECS}}a^ {p}a^{\dagger q}, \tag{20}\]
and
\[\hat{\rho}_{\text{ECS}}^{(as)}=N_{4}a^{\dagger p}a^{q}\hat{\rho}_{\text{ECS}}a^ {\dagger q}a^{p}, \tag{21}\]
where \(N_{3}\) and \(N_{4}\) are the normalization constants. For deriving \(N_{3}\) and \(N_{4}\), we use some normal- (antinormal-) ordered operator identities such as [25]
\[a^{\dagger p}a^{q}=:H_{p,q}(a^{\dagger},a):,\quad a^{q}a^{\dagger p}=(-i)^{p+q}:H_{p,q}(ia^{\dagger},ia):,\]
and
\[:H_{p,q}(ia^{\dagger},ia)::H_{u,v}(ia^{\dagger},ia):\] \[=\sum_{n=0}^{min(p,v)}\frac{p!v!}{n!(p-n)!(v-n)!}:H_{p+u-n,q+v-n} (ia^{\dagger},ia):,\]
where \(::\) stands for normal ordering, \(H\) is the two-variable Hermite polynomial defined as
\[H_{m,n}(x,y)=\sum_{l=0}^{min(m,n)}(-1)^{l}\frac{m!n!}{l!(m-l)!(n-l)!}x^{m-l}y^{n-l}.\]
With the help of the above properties and the well-known relation between the bivariate Hermite polynomial and the Laguerre polynomial [26], i.e. \(H_{m,m}(x,y)=(-1)^{m}m!L_{m}(xy)\), \(N_{3}\) and \(N_{4}\) can be calculated as
\[\left.\begin{array}{l}N_{3}=\frac{\left(1+e^{-2|\alpha|^{2}}\right)}{\sum_{ m=0}^{q}\frac{(q!)^{2}(p+q-m)!}{(-1)^{m}m!((q-m)!)^{2}}L_{p+q-m}^{(\text{ sup})}(|\alpha|^{2})},\\ N_{4}=(-1)^{p+q}N_{3},\end{array}\right\} \tag{22}\]
where \(L_{p+q-m}^{(\text{sup})}(|\alpha|^{2})=L_{p+q-m}(|\alpha|^{2})+L_{p+q-m}(-| \alpha|^{2})\).
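Both the two-variable Hermite polynomial and the normalization constants above are simple to evaluate numerically. The sketch below is our own illustration, using SciPy's `eval_laguerre` for the Laguerre polynomials; it implements the series definition of \(H_{m,n}\), which is also needed for the Wigner functions (25)-(26) below, and \(N_{3}\) from Eq. (22):

```python
import numpy as np
from math import factorial
from scipy.special import eval_laguerre

def hermite2(m, n, x, y):
    """Two-variable Hermite polynomial H_{m,n}(x, y) from its series form;
    x and y may be complex."""
    return sum((-1) ** l * factorial(m) * factorial(n)
               / (factorial(l) * factorial(m - l) * factorial(n - l))
               * x ** (m - l) * y ** (n - l)
               for l in range(min(m, n) + 1))

def N3(alpha, p, q):
    """Normalization constant of Eq. (22); N4 = (-1)**(p + q) * N3."""
    z = abs(alpha) ** 2
    Lsup = lambda k: eval_laguerre(k, z) + eval_laguerre(k, -z)
    denom = sum(factorial(q) ** 2 * factorial(p + q - m)
                / ((-1) ** m * factorial(m) * factorial(q - m) ** 2)
                * Lsup(p + q - m)
                for m in range(q + 1))
    return (1 + np.exp(-2 * z)) / denom
```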
### Photon Number Distribution
We can find \(n\) number of photons in states (20) and (21) respectively with the probabilities
\[p_{\text{ECS}}^{(sa)}(n)=\left\{\begin{array}{ll}\frac{2N_{3}e^{-|\alpha|^{ 2}}}{(1+e^{-2|\alpha|^{2}})}\frac{((n+q)!)^{2}(|\alpha|^{2})^{n-p+q}}{n!((n-p +q)!)^{2}},&n-p+q\text{ even}\\ 0&,&n-p+q\text{ odd}\end{array}\right. \tag{23}\]
and
\[p_{\text{ECS}}^{(as)}(n)=\left\{\begin{array}{ll}\frac{2N_{4}e^{-|\alpha|^{2}}}{(1+e^{-2|\alpha|^{2}})}\frac{n!(|\alpha|^{2})^{n-p+q}}{((n-p)!)^{2}},&n-p+q\text{ even}\\ 0&,&n-p+q\text{ odd}\end{array}\right. \tag{24}\]
In Fig. 4, we examine how the changes in \((p,q)\) affect the photon-added-then-subtracted (photon-subtracted-then-added) even coherent state, when \(n-p+q\) is even and \(|\alpha|^{2}=4\). In general, \(p_{\text{ECS}}^{(sa)}\) has a broader distribution than \(p_{\text{ECS}}^{(as)}\). Figs. 4(b) and (c) show that by increasing the excitation number one can move the peak towards right for the added-then-subtracted state. But for the subtracted-then-added even coherent state, the increase in \(q\) (whenever \(q\) differs a lot from \(p\)) has no significant effect on the position of the peak [see Figs. 4(e)-(f)].
### Wigner Distribution
To find out the analytical expressions for the Wigner functions of added-then-subtracted and subtracted-then-added even coherent states, we recall some basic properties of two-variable Hermite polynomial [26]
\[H_{m,n}(x,y)=\frac{\partial^{m+n}}{\partial u^{m}\partial v^{n} }\exp(-uv+ux+vy)|_{u,v=0},\] \[\frac{\partial^{r}}{\partial x^{r}}H_{m,n}(x,y)=\frac{m!}{(m-r)!}H _{m-r,n}(x,y),\]
and an integral formula [27]
\[\int\frac{d^{2}z}{\pi^{2}}\exp(a|z|^{2}+bz+cz^{*}+dz^{2}+{ez^{*}} ^{2})\] \[=\frac{1}{\sqrt{a^{2}-4de}}\exp\left(\frac{-abc+b^{2}e+c^{2}d}{a^ {2}-4de}\right)\!,\]
whose convergence condition is \(\text{Re}(a\pm d\pm e)<0\) and \(\text{Re}(\frac{a^{2}-4de}{a\pm d\pm e})<0\). Insertion of these formulas into (12) gives us
\[W_{\text{ECS}}^{(sa)}(\beta,\beta^{*})=\frac{N_{3}}{(1+e^{-2| \alpha|^{2}})}\sum_{n=0}^{p}\frac{(-1)^{n}(p!)^{2}}{n!((p-n)!)^{2}}\left\{ \left(|H_{p-n,q}[i(2\beta-\alpha),i\alpha^{*}]|^{2}e^{-2|\alpha-\beta|^{2}}+|H _{p-n,q}[i(2\beta+\alpha),-i\alpha^{*}]|^{2}e^{-2|\alpha+\beta|^{2}}\right)\right.\] \[\left.+e^{-2|\beta|^{2}}\left(H_{p-n,q}[i(2\beta-\alpha),-i\alpha ^{*}]\overline{H_{p-n,q}[i(2\beta+\alpha),i\alpha^{*}]}e^{2(\alpha\beta^{*}- \alpha^{*}\beta)}\right.\]
\[+\overline{H_{p-n,q}[i(2\beta-\alpha),-i\alpha^{*}]H_{p-n,q}[i(2\beta+\alpha),i \alpha^{*}]e^{2(\alpha^{*}\beta-\alpha\beta^{*})})\Big{\}}\,, \tag{25}\]
and
\[W_{\rm ECS}^{(as)}(\beta,\beta^{*}) = \frac{N_{4}}{(1+e^{-2|\alpha|^{2}})}\sum_{n=0}^{p}\frac{(-1)^{n}(p!)^{2}}{n!((p-n)!)^{2}}\left\{\Big{(}|H_{p-n,q}[2\beta-\alpha,\alpha^{*}]|^{2} e^{-2|\alpha-\beta|^{2}}+|H_{p-n,q}[2\beta+\alpha,-\alpha^{*}]|^{2}e^{-2|\alpha+ \beta|^{2}}\right)\right. \tag{26}\] \[\left.+e^{-2|\beta|^{2}}\left(H_{p-n,q}[2\beta-\alpha,-\alpha^{*} ]\overline{H_{p-n,q}[2\beta+\alpha,\alpha^{*}]}e^{2(\alpha\beta^{*}-\alpha^{*} \beta)}\right.\right.\] \[\left.\left.+\overline{H_{p-n,q}[2\beta-\alpha,-\alpha^{*}]}H_{p -n,q}[2\beta+\alpha,\alpha^{*}]e^{2(\alpha^{*}\beta-\alpha\beta^{*})}\right) \right\},\]
where \(\overline{H_{m,n}(x,y)}\) denotes the complex conjugate of \(H_{m,n}(x,y)\). The partial negativity of the Wigner function is a clear signature of the nonclassical character of the related state. But this condition is one-sided, i.e., one cannot conclude that the state is classical when the Wigner function is positive everywhere. For example, the Wigner function of the squeezed state is Gaussian and positive everywhere, but it is a well-known nonclassical state.
Based on Eqs. (25) and (26), we plot the Wigner function in phase space with several combinations of \((p,q)\) and \(\alpha=1\). The nonclassical character of both the added-then-subtracted and subtracted-then-added even coherent states is depicted in Fig. 5. In the case of \(p=q=1\), the distribution \(W_{\rm ECS}^{(sa)}(\beta,\beta^{*})\) almost coincides with the distribution \(W_{\rm ECS}^{(as)}(\beta,\beta^{*})\). We observe that the central Gaussian peak of the added-then-subtracted even coherent state first transforms to a single peak with a deep crater, and then again a single Gaussian peak comes out of this crater as \(p\) changes from 1 to 2 [Fig. 5(b)] and 2 to 3 [Fig. 5(c)]. For the subtracted-then-added even coherent state, however, the bumps at the two ends of the \(x\)-axis slowly disappear with \(p\). We further notice that the partial negative region of the added-then-subtracted (subtracted-then-added) even coherent state gradually diminishes with increasing \(p\). This implies that increasing the photon addition number causes the loss of nonclassicality of the state. In fact, if we increase \(q\) together with \(p\), \(W_{\rm ECS}^{(as)}(\beta,\beta^{*})\) just reduces to a nearly flat region.
In Fig. 6, we show Wigner functions of only photon-added even coherent state for \(\alpha=0.1\). Choosing (a) \(p=1\) and (b) \(p=5\), we obtain partially negative Wigner functions which look like those presented in [28].
Figure 4: (Color online) Photon number distribution of photon-added-then-subtracted (upper line) and photon-subtracted-then-added (lower line) even coherent state is plotted against \(n\) for \(|\alpha|^{2}=4\) and (a) \(p=q=1\), (b) \(p=8\), \(q=4\), (c) \(p=16\), \(q=4\), (d) \(p=q=1\), (e) \(p=4\), \(q=8\) and (f) \(p=4\), \(q=12\).
### Mandel's \(Q\) Parameter
Mandel's \(Q\) parameter for even coherent state can be derived using the following relations:
\[\langle a^{\dagger}a\rangle^{(sa)} = \frac{N_{3}}{(1+e^{-2|\alpha|^{2}})}\sum_{m=0}^{q+1}\frac{((q+1)!)^{2}(p+q+1-m)!}{(-1)^{m}m!((q+1-m)!)^{2}}\,L_{p+q+1-m}^{(\sup)}(|\alpha|^{2}), \tag{27}\]
\[\langle a^{\dagger}a\rangle^{(as)} = \frac{N_{4}}{(1+e^{-2|\alpha|^{2}})}\sum_{m=0}^{q}\frac{(-1)^{p+q+ 1-m}(q!)^{2}(p+q-m)!}{m!((q-m)!)^{2}}\left[(p+q+1-m)L_{p+q+1-m}^{(\sup)}(|\alpha |^{2})+L_{p+q-m}^{(\sup)}(|\alpha|^{2})\right],\]
\[\langle a^{\dagger 2}a^{2}\rangle^{(sa)} = \frac{N_{3}}{(1+e^{-2|\alpha|^{2}})}\sum_{m=0}^{q+2}\frac{((q+2)!)^{2}(p+q+2-m)!}{(-1)^{m}m!((q+2-m)!)^{2}}\,L_{p+q+2-m}^{(\sup)}(|\alpha|^{2}), \tag{29}\]
Figure 5: (Color online) Wigner function of photon-added-then-subtracted (upper row) and photon-subtracted-then-added (lower row) even coherent state as a function of \(\mathrm{Re}(\beta)\) and \(\mathrm{Im}(\beta)\) for \(\alpha=1\) and different \((p,q)\)’s (a) \((1,1)\), (b) \((2,1)\), (c) \((3,1)\), (d) \((1,1)\), (e) \((2,1)\) and (f) \((3,1)\).
\[\langle a^{\dagger 2}a^{2}\rangle^{(as)} = \frac{N_{4}}{(1+e^{-2|\alpha|^{2}})}\sum_{m=0}^{q}\frac{(-1)^{p+q-m}(q!)^{2}(p+q-m)!}{m!((q-m)!)^{2}}\left[(p+q+2-m)(p+q+1-m)L_{p+q+2-m}^{(\sup)}(|\alpha|^{2})+4(p+q+1-m)L_{p+q+1-m}^{(\sup)}(|\alpha|^{2})+2L_{p+q-m}^{(\sup)}(|\alpha|^{2})\right]. \tag{30}\]
Substituting (27) and (29) into (16), and (28) and (30) into (16), we determine \(Q_{\rm ECS}^{(sa)}\) and \(Q_{\rm ECS}^{(as)}\) respectively.
Fig. 7 clearly shows the changes of the \(Q_{\rm ECS}^{(sa)}\) and \(Q_{\rm ECS}^{(as)}\) curves for a fixed number of one operation and a variable number of the other. Figs. 7(a) and 7(b) respectively exhibit that \(Q_{\rm ECS}^{(sa)}\) moves away from the Poissonian level as \(p\) increases with \(q\) fixed, and comes closer to the Poissonian level as \(q\) increases with \(p\) fixed. The values of \(Q_{\rm ECS}^{(as)}\) lead to the same conclusion. For an increasing photon creation number, \(Q_{\rm ECS}^{(sa)}\) retains its sub-Poissonian character much longer, but for an increasing photon subtraction number, \(Q_{\rm ECS}^{(as)}\) changes its characteristic to indicate a super-Poissonian distribution.
## V Conclusion
In this article, we have introduced the density matrices of different quantum states after applying an operator combination \(a^{q}a^{\dagger p}\) or \(a^{\dagger p}a^{q}\) to them. Assuming the field to be initially either in a thermal state or in an even coherent state, we have investigated the statistical properties depending on the analytical expressions of the normalization constant, photon number distribution, Wigner function and Mandel's \(Q\) parameter. Two different criteria, i.e., negativity of the Wigner function and the Poissonian statistics of Mandel's \(Q\), have been used to reveal the nonclassicality of the photon-added-then-subtracted (thermal or even coherent) state and the photon-subtracted-then-added (thermal or even coherent) state. We have noticed that the Wigner function of the thermal state has no negative region at all, but the Wigner function of the even coherent state exhibits a partial negative region in phase space, which is clear evidence of the nonclassicality of the state. In the case of the even coherent state, as \((p,q)\)
Figure 6: (Color online) Wigner function of only photon-added even coherent state with \(\alpha=0.1\).
Figure 7: (Color online) Mandel’s \(Q\) parameter for photon-added-then-subtracted even coherent state (upper row) and photon-subtracted-then-added even coherent state (lower row) as a function of \(|\alpha|\) with different \((p,\ q)\)’s.
becomes larger, \(W_{\rm ECS}^{(sa)}(\beta,\beta^{*})\) and \(W_{\rm ECS}^{(as)}(\beta,\beta^{*})\) change in an almost opposite way. In addition, the \(Q\) parameter of the thermal state, which takes negative values at small \(\bar{n}\), becomes \(>0\) after a certain limit of \(\bar{n}\). We have also seen that, irrespective of the \((p,q)\) values, \(Q_{\rm ECS}^{(sa)}\) presents super-Poissonian curves after a certain value of \(|\alpha|\). In conclusion, the different results for the different orders of adding and subtracting multi-photons to the initial state clearly prove the non-commutativity between \(a\) and \(a^{\dagger}\).
**ACKNOWLEDGEMENT**
AC thanks National Board of Higher Mathematics, Department of Atomic Energy, India for the financial support.
|
2303.16499 | Assessing the Impact of Mobile Attackers on RPL-based Internet of Things | The Internet of Things (IoT) is becoming ubiquitous in our daily life. IoT
networks that are made up of devices with low power, low memory, and low computing
capability appear in many applications such as healthcare, home, agriculture.
IPv6 Routing Protocol for Low Power and Lossy Network (RPL) has become a
standardized routing protocol for such low-power and lossy networks in IoT. RPL
establishes the best routes between devices according to the requirements of
the application, which is achieved by the Objective Function (OF). Even though
some security mechanisms are defined for external attackers in its RFC, RPL is
vulnerable to attacks coming from inside. Moreover, the same attacks could has
different impacts on networks with different OFs. Therefore, an analysis of
such attacks becomes important in order to develop suitable security solutions
for RPL. This study analyzes RPL-specific attacks on networks using RPL's
default OFs, namely Objective Function Zero (OF0) and the Minimum Rank with
Hysteresis Objective Function (MRHOF). Moreover, mobile attackers could affect
more nodes in a network due to their mobility. While the security solutions
proposed in the literature assume that the network is static, this study takes
into account mobile attackers. | Cansu Dogan, Selim Yilmaz, Sevil Sen | 2023-03-29T07:22:44Z | http://arxiv.org/abs/2303.16499v1 | # Assessing the Impact of Mobile Attackers on RPL-based Internet of Things
###### Abstract
The Internet of Things (IoT) is becoming ubiquitous in our daily life. IoT networks that are made up of devices with low power, low memory, and low computing capability appear in many applications such as healthcare, home, agriculture. IPv6 Routing Protocol for Low Power and Lossy Network (RPL) has become a standardized routing protocol for such low-power and lossy networks in IoT. RPL establishes the best routes between devices according to the requirements of the application, which is achieved by the Objective Function (OF). Even though some security mechanisms are defined for external attackers in its RFC, RPL is vulnerable to attacks coming from inside. Moreover, the same attacks could have different impacts on networks with different OFs. Therefore, an analysis of such attacks becomes important in order to develop suitable security solutions for RPL. This study analyzes RPL-specific attacks on networks using RPL's default OFs, namely Objective Function Zero (OF0) and the Minimum Rank with Hysteresis Objective Function (MRHOF). Moreover, mobile attackers could affect more nodes in a network due to their mobility. While the security solutions proposed in the literature assume that the network is static, this study takes into account mobile attackers.
Internet of Things IoT Security RPL Objective Functions Attacks Mobility
## 1 Introduction
IoT has become one of the most revolutionary concepts of this century with the advancements in sensor and networking technologies. The adoption of the IPv6 protocol standard has led to the emergence of numerous IoT devices that are capable of communicating with each other and with remote machines through the Internet. It is estimated that the total installed base of IoT devices will reach around 75 billion in 2025 [1]. IP-connected IoT devices have recently opened the door to the development of several life-enhancing applications. These include healthcare monitoring, smart cities, transportation and logistics, military and defense, robots, and the like.
Low Power and Lossy Networks (LLN) in IoT are characterized by high packet loss and low throughput. Due to the characteristics of LLN, traditional routing protocols, even those proposed for Wireless Sensor Networks (WSNs), are not applicable to LLNs. Therefore, the Internet Engineering Task Force-Routing over LLN (IETF-RoLL) group designed an IPv6-based routing protocol specific for the LLNs: RPL, which operates on the IEEE 802.15.4 standard using the IPv6 over Low-power Wireless Personal Area Network (6LoWPAN) adaptation layer.
RPL makes use of OF in order to build an optimal route between the IoT devices (or nodes) in the network. There are different routing metrics employed in OFs that play a key role in selecting the parent node and hence routes to the root node
or destination nodes. These include Expected Transmission Count (ETX), hop count, energy, and the like [2]. There is no obligation to use a specific OF metric; it often depends on the requirements of IoT applications. The appropriate selection of OFs is very important because it significantly affects the performance of the network including packet delivery ratio, end-to-end delay, power consumption. Although various kinds of OFs have been proposed until far, OF0 and MRHOF are known as standard OFs defined for the RPL.
RPL ensures efficient routing among IoT devices on LLN, and that is why it is adopted as the standard routing protocol today. However, there are a number of significant challenges faced by RPL. The first is that the RPL protocol is vulnerable to attacks (particularly to insider attacks) that aim to consume resources of IoT devices and hence reduce the lifetime of the network. The other is that RPL does not support mobility; it is specifically designed for static networks. Considering the fact that most of the application scenarios in IoT, such as industrial automation, involve the use of mobile nodes attached to agents such as workers, robots, products, and the like, this can be regarded as one of the major drawbacks of RPL. Therefore, new improvements on RPL have been explored by researchers [3; 4]. Such improvements should be carried out for providing its security as well, since the existence of mobile attackers could severely damage the network. This is one of the main objectives of the current study.
This study explores the performance of RPL under attack. Although some analyses are carried out for a particular type of attack, such as the rank attack [5] and version number attack [6; 7], this study differs from them by taking into account both the OF and mobility, which could change the effect of attacks on networks. Moreover, this study does not focus on a particular attack but covers different types of attacks; namely, version number, DIS flooding, and worst parent attacks are included in this study. As stressed earlier, the mobility of the nodes can harm the RPL-operated network at changing rates depending on several factors. Among them, the OF used in the network and the density of mobile nodes are of primary importance because they are highly correlated with the performance of RPL. This becomes alarming when there are attack nodes present in the network. From this point of view, this study will provide great insight to researchers studying how to enhance RPL towards mobility and how to develop security solutions. This is the continuation of the study in [8], which analyzes attacks on static networks only. To the best of our knowledge, this is the first study that thoroughly analyzes the performance of RPL on networks with varying mobile attacker densities and with different OFs. RPL is analyzed under different network scenarios by using the following performance metrics: packet delivery ratio, power consumption, overhead, and latency.
The rest of the paper is organized as follows. The background information that covers an overview of RPL, the standard/default objective functions used in RPL, and attacks targeting RPL is given in Section 2. Section 3 summarizes the studies in the literature that analyze attacks and consider mobility in RPL security. The experimental settings are introduced, and the experimental results are discussed in Section 4. Finally, Section 5 concludes the study.
## 2 Background
### Overview of RPL
RPL creates a topology called Destination Oriented Directed Acyclic Graph (DODAG). A DODAG is a DAG rooted at a single destination. A network can operate on one or more RPL instances where multiple DODAGs can take part. The role of each instance is to define an objective function to calculate the optimum path within the DODAG. A DODAG is built by using the following RPL control packets:
* _DODAG Information Object (DIO):_ It is initiated and broadcast only by the root node. DIO packets carry network information (e.g., instance ID, version number). Each of the receiving nodes adds the sender to its parent list, calculates its own rank value, which states its position in the graph with respect to the root node, and finally, it forwards the DIO to its neighbors. DIO packets are relayed throughout the graph and play a major role in constructing the default upward routes. The transmission interval of the DIO packets should be well adjusted. The lower the interval is, the higher the overhead, leading to a shorter lifetime of the network; the higher the interval is, however, the lower the responsiveness to the network's inconsistencies. The management of the transmission rate of DIO packets by the nodes is governed by an algorithm in RPL called the _trickle timer_ (a minimal sketch of this timer is given after this list). This algorithm enables DIO packets to be broadcast more frequently initially to make the DODAG stable, and increases the time interval to avoid unnecessary propagation of DIO packets in the network. The timer is reset when an inconsistency is reported by the nodes, causing the DIO packets to be broadcast instantly again.
* _DODAG Information Solicitation (DIS):_ It is used as a solicitation for having DIO information when a new node is to join the DODAG. DIS packets are broadcast by the new node to its neighbors.
* _Destination Advertisement Object (DAO):_ It is used for the construction of the downward routes from the root to sensor nodes. Based on the mode of operation, the child unicasts DAO packets either to the root node
in non-storing mode or to its selected parent node in storing mode so that it records downward routes in its routing table for the sub-DODAG.
* _Destination Advertisement Object Acknowledgement (DAO-ACK):_ Upon receiving DAO packets from a parent node, DAO-ACK packets are sent to the sender node as an acknowledgement.
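As referenced in the DIO item above, the trickle timer doubles its interval while the network is consistent and falls back to the minimum interval on an inconsistency. The Python sketch below is a simplified illustration of this behavior (interval bounds and the redundancy constant `k` are illustrative defaults, and the transmission point and interval expiry are collapsed into a single `fire()` call for brevity):

```python
import random

class TrickleTimer:
    """Simplified trickle timer sketch (in the spirit of RFC 6206)."""

    def __init__(self, imin=4.0, imax=1048.0, k=1):
        self.imin, self.imax, self.k = imin, imax, k
        self.reset()

    def reset(self):
        """Inconsistency detected: fall back to the minimum interval."""
        self.interval = self.imin
        self._start_interval()

    def _start_interval(self):
        self.counter = 0
        # transmission point chosen uniformly in the second half of the interval
        self.t = random.uniform(self.interval / 2, self.interval)

    def hear_consistent_dio(self):
        self.counter += 1   # redundant DIOs heard suppress our own transmission

    def fire(self):
        """At the end of the interval: transmit a DIO only if few were heard,
        then double the interval (the network is considered stable)."""
        transmit = self.counter < self.k
        self.interval = min(2 * self.interval, self.imax)
        self._start_interval()
        return transmit
```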
### Objective functions in RPL
RPL objective function is used for the calculation of the rank value assigned to each node in the network. Therefore, it implicitly governs the selection of the preferred parent of the nodes in the network, and consequently, determines the routing path that is optimal with respect to the utilized OF. The packets are forwarded in the selected routes according to three traffic patterns: point-to-point, point-to-multipoint, and multipoint-to-point. Objective functions differ with respect to the RPL instances; hence, different OFs could be simultaneously used within an RPL network by different instances. For example, one can take 'hop count' into consideration to build routes of a DODAG, while the 'residual energy' of the nodes can be used for finding the routes of another DODAG in the same network. The selection of appropriate objective functions is critical and changes in accordance with the requirements of the application.
Even though there have been a number of OFs proposed in the literature so far, OF0 [9] and MRHOF [10] are proposed as the default OFs in RPL:
#### 2.2.1 OF0
OF0 takes the hop count between the root node and a sensor node into account for the calculation of the rank value of that node. Therefore, it aims to minimize the number of hops to reach to the root node by choosing the node that has the lowest rank from its reachable neighbors as its parent. When OF0 is used as the objective function in the network, for a given node \(n\), the rank of this node can be calculated using (1).
\[R(n)=R(p)+RI \tag{1}\]
\(R(n)\) is the new rank of node \(n\), \(R(p)\) is the rank of the preferred parent node, and \(RI\) stands for the rank increase metric that is calculated by using (2).
\[RI=(Rf\times Sp+Sr)\times MHRI \tag{2}\]
\(Rf\) is a configurable rank factor and it uses 1 as the default value. \(Sp\) is the step of the rank, and \(Sr\) is the maximum value assigned to the rank level. \(MHRI\) stands for \(MinHopRankIncrease\) which is a constant value defined as 256 in RFC6550 [11].
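The rank computation of Eqs. (1)-(2) and the resulting parent choice are easy to express in code. The sketch below is our own illustration: \(MHRI=256\) follows RFC6550 as stated above, the default \(Rf=1\) follows the text, while the default values for \(Sp\) and \(Sr\) and the neighbor structure are assumptions for the example.

```python
MIN_HOP_RANK_INCREASE = 256   # MHRI constant defined in RFC6550

def of0_rank(parent_rank, rf=1, sp=1, sr=0):
    """Eq. (1): R(n) = R(p) + RI, with RI = (Rf * Sp + Sr) * MHRI from Eq. (2)."""
    return parent_rank + (rf * sp + sr) * MIN_HOP_RANK_INCREASE

def of0_preferred_parent(neighbors):
    """OF0 parent selection: the reachable neighbor advertising the lowest rank.
    `neighbors` is a hypothetical list of dicts such as {"id": 3, "rank": 512}."""
    return min(neighbors, key=lambda nb: nb["rank"])
```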
#### 2.2.2 MRHOF
Unlike OF0, a number of link- and node-based additive routing metrics can be easily integrated into MRHOF. The rank value (\(R(n)\) in Eq. 1), and hence the routing path, is determined according to the employed routing metric, which is stored in the 'metric container' sub-option in the DIO packet.
By using one of these routing metrics (e.g., latency, RSSI, and etc.), MRHOF ensures the lowest-cost path in the LLN. Two metrics are integrated into MRHOF in this study: MRHOF with ETX (MRHOF-ETX), which is a link-based metric, and MRHOF with energy (MRHOF-ENERGY), which is a node-based metric.
MRHOF-ETX chooses the paths with the lowest expected number of transmissions by considering the ETX values of the links. The ETX value of a link is calculated using (3).
\[ETX=\frac{1}{Df\times Dr} \tag{3}\]
\(Df\) is the probability that the neighbor will receive the packet and \(Dr\) is the probability that the acknowledgment packet will be received.
MRHOF-ENERGY chooses the path that provides the maximum remaining energy for the RPL nodes. Energy metric of the nodes is calculated using (4).
\[ENERGY=\frac{P_{max}}{P_{now}} \tag{4}\]
where \(P_{max}\) is defined as the targeted maximum power, and it is calculated from the initial energy of node divided by the targeted lifetime; \(P_{now}\), however, is the actual power of node.
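For illustration, the two metrics of Eqs. (3) and (4) and a simplified MRHOF-style parent choice can be written as follows. This is our own sketch: the candidate structure and the additive aggregation of the parent's advertised path cost and the local cost are modeling assumptions, not the exact Contiki implementation.

```python
def etx(df, dr):
    """Eq. (3): expected number of transmissions over a link; df, dr in (0, 1]."""
    return 1.0 / (df * dr)

def energy_metric(p_max, p_now):
    """Eq. (4): ratio of the targeted maximum power to the actual power."""
    return p_max / p_now

def mrhof_preferred_parent(candidates, metric="etx"):
    """Pick the candidate minimizing (advertised path cost + local cost).
    `candidates` is a hypothetical list of dicts, e.g.
    {"path_cost": 1.7, "df": 0.9, "dr": 0.8, "p_max": 1.0, "p_now": 0.8}."""
    def total_cost(c):
        local = etx(c["df"], c["dr"]) if metric == "etx" \
            else energy_metric(c["p_max"], c["p_now"])
        return c["path_cost"] + local
    return min(candidates, key=total_cost)
```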
### RPL specific attacks
RPL attacks can vary according to what they primarily target, and they are categorized accordingly: attacks targeting network resources, network topology, and network traffic [12]. The basic goal of network resource attacks is to use up the resources of legitimate nodes and/or the network, resulting in poor network performance. This type of attack can speed up the consumption of a node's battery power, use node memory, and cause a delay in the remaining necessary processes. DIS flooding and version number attacks are in the class of attacks that target network resources. RPL attacks can also be used to disrupt the network's topology, and the worst parent attack is one of this type. Lastly, there are attacks that aim to disrupt network communication, and this category's major goal is to direct network traffic to a specified node. In this study, we have studied the version number, DIS flooding, and worst parent attacks.
* _Version Number:_ The version number in DIO packets is used by the DODAG root to perform global repair, and it is increased only by the root node. In the attack scenario, the malicious node illegitimately increases the incoming version number, causing unnecessary rebuilding of the DODAG. In our attack scenario, a malicious node illegitimately increases the version number by one every minute, before forwarding incoming DIO messages to its neighbors.
* _DIS Flooding:_ It is a typical RPL-specific DoS attack that targets consuming network resources. In order to make nodes or links unavailable in LLNs, the attacker continuously sends a large number of control packets. This attack is often performed by sending DIS packets after receiving a DIO packet from a node. By doing so, the DIS flooding attack brings about network congestion and overloading of RPL nodes. In our attack scenario, the malicious node multicasts a DIS message to its neighbor nodes every 500 milliseconds.
* _Worst Parent:_ As stated earlier, an RPL node chooses its own parent node according to the rank value determined by the objective function, which ensures the 'best parent' for that RPL node. In this attack scenario, however, the attacker node contrarily chooses the worst parent, resulting in a non-optimized routing path and hence leading the LLN to show very poor performance.
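To make the last two attack scenarios concrete, the sketch below shows how little an insider has to change: the worst parent attack simply inverts the parent-selection rule, and the DIS flooding attack is a periodic multicast loop. This is our own illustration; `node.is_up()` and `node.multicast_dis()` are hypothetical interfaces, and the 500 ms period matches the scenario described above.

```python
import time

def worst_parent(candidates, cost):
    """Worst parent attack: pick the highest-cost neighbor instead of the
    lowest-cost one (the inversion of the preferred-parent rules sketched
    earlier for OF0 and MRHOF)."""
    return max(candidates, key=cost)

def dis_flooding(node, period=0.5):
    """DIS flooding attack: multicast a DIS every 500 ms, forcing neighbors
    to respond and keeping their trickle timers from backing off."""
    while node.is_up():          # hypothetical node interface
        node.multicast_dis()
        time.sleep(period)
```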
## 3 Related Works
The advancements in sensor- and actuator-equipped devices and communication technologies in the wireless medium have given rise to the emergence of a great number of IoT applications in recent years. Therefore, IoT-based challenges have become one of the key directions studied by researchers. The attention towards RPL has been growing tremendously, and until now, a good deal of studies on this protocol have been proposed in the literature [13].
Although IoT devices are attached to mobile agents (e.g., people, robots, etc.) in many real-world applications, mobility is not supported in the default implementation of RPL. Therefore, although rare, there have been recent attempts that _i_) analyze RPL on networks under mobility, _ii_) enhance the protocol to integrate mobility support, _iii_) propose mobility-aware security solutions, and _iv_) analyze RPL against routing attacks. These studies are briefly discussed here.
There have been few attempts to investigate the performance of RPL-OFs in the static environment; however, we do not take these evaluations into consideration here. Please refer to [8] for a review of these studies. Rather, we focus more on studies that explore the performance of RPL in mobile environments. In [14], an evaluation of RPL routing performance is studied in terms of packet loss and power consumption as a function of data traffic density (that is, 6 and 12 packets per minute). The findings reveal that RPL yields higher packet loss and lower power consumption as data traffic increases. Considering the fact that using at least 25 nodes in network simulations is necessary to see the multi-hop characteristics of RPL [15], the reliability of this evaluation is highly questionable because the experiments are conducted with only two network simulations where only 13 nodes are used, of which one is mobile. Another performance evaluation of RPL as a function of two radio duty cycles (that is, 8 and 32 Hz) is made in terms of the packet delivery ratio, ETX, power consumption, and latency in [16]. The duty cycle mechanism is used to listen for a packet transmission of neighbors. To receive an incoming packet, the node's radio is turned on when a packet is detected. After receiving, an acknowledgment is sent to the transmitter. A sender node sends its packets during the wake-up period until it receives an acknowledgment. The evaluations suggest that the performance of RPL degrades when mobile nodes are present in the network, and the amount of degradation depends on the chosen setting of the radio duty cycle.
In order to cope with the downsides caused by mobile nodes in RPL, a great deal of modifications has been proposed for the protocol in the literature. The _trickle algorithm_ designed for RPL is not suitable for mobile environments, since it triggers
the control packets to be sent periodically by the nodes. Moreover, if the network is stable, the length of the time period increases to reduce the overhead in the network. However, a mobile node will instantly need the control packet to connect to the DODAG and to update its preferred parent. This issue is handled by most of the works to make RPL adapted to mobile networks. In [17], the applicability of RPL to Vehicular Ad Hoc Networks (VANETs) is explored. In order to achieve that, RPL's trickle timer algorithm is disabled because the vehicles are highly mobile and often need DIO packets. Instead, a fixed sending time interval is adopted. Therefore, DIO messages are guaranteed to be sent once within each time period, ensuring mobile nodes are connected to the DODAG. In addition, ETX values are considered in this approach for the parent selection of mobile nodes.
Instead of completely disabling the trickle algorithm, there are some approaches that modify the algorithm in the literature. In [18], a reverse trickle algorithm is proposed. Here, the router nodes set the interval of the trickle timer to a maximum value, believing that they remain connected to their parent for a long time. Then, they periodically decrease the interval until it reaches the minimum value. When the time interval reaches the minimum value, child nodes are expected to send DAO packets to control whether mobile nodes are connected. Some assumptions are made for this approach to be applicable to RPL. Firstly, mobile nodes are restricted to be only leaf nodes in the DODAG and do not advertise DIO packets. Secondly, mobile nodes are determined with a mobility flag added as a field to the DAO packet. Thirdly, it is assumed that there is always a static node in range of any mobile node. The main drawback of this approach is that an intruder can easily evade it by falsifying the parent nodes through illegitimately changing the mobility flag, which leads to unnecessary propagation of DIO packets. Another approach that modifies the trickle algorithm is proposed in [19]. Here, Received Signal Strength Indicator (RSSI) values are used to configure the time interval of the trickle algorithm. Upon receiving a packet, the nodes read the RSSI values, compare them with the last readings, and send DIS packets immediately after a worsening of the RSSI value is detected, so that they keep connected to the DODAG.
An adaptive strategy for setting the time interval is integrated into the trickle algorithm in [20]. In the proposed strategy, each mobile node checks the number of neighbors, which plays a key role in setting the interval. The more neighboring nodes there are, the larger the interval is. In addition to that, the parent selection mechanism is also improved. The rank values are first considered for building the parent list, and then the parent node is chosen with respect to, in order, ETX, expected lifetime (ELT), and RSSI. Another adaptive adjustment of the time interval is proposed in [21] as an alternative to the trickle timer. It relies on a fixed length of time interval, which should be adjusted according to the speed of the mobile nodes. This approach enhances RPL with the Corona architecture, a simple concept that divides the network area into coronas (concentric regions centered at the DAG root) to locate mobile nodes in the network. Therefore, when links to the parent nodes are broken, mobile nodes are allowed to set the best neighbor node as their preferred parent within the same corona. This approach introduces additional flags to DIS and DIO packets. The illegitimate setting of these flags by attackers prevents this approach from functioning properly.
Recently, a few security solutions taking mobility into account have been proposed. In [22], a trust-based security mechanism, which involves a modified rank computation, is proposed against Sybil and DoS attacks. In this approach, not only the OF value but also trust and RSSI values are used for calculating the rank values of nodes. The trust value is used to identify the malicious nodes and, hence, to isolate them. It increases every time a trusted event occurs and decreases when an attempt is malicious. The RSSI value, however, is used by mobile nodes to select parent nodes considering the signal strength. The Sybil attack is also targeted in [23] in a network where the mobility-aware RPL proposed in [20] is adopted. Here, the Artificial Bee Colony (ABC) algorithm [24] is used to model the behavior of Sybil intruders and to ensure a very harsh attack condition. Then, a lightweight intrusion detection approach is proposed against this learned attack environment. This approach is based on three trust factors driven by _i_) DODAG and NONCE IDs, _ii_) control message counters, and _iii_) timestamps for control messages. Another lightweight security mechanism against the DIS flooding attack, called Secure-RPL, is proposed in [25]. Secure-RPL prevents the nodes from resetting the trickle timer redundantly. Therefore, it dramatically reduces unnecessary transmission of DIO packets.
There are also studies in the literature that analyze the extent to which RPL is affected when attackers are in the network. In [6], the effect of the version number attack is studied as a function of attacker location. The findings reveal that RPL is very vulnerable to this attack, which reduces PDR while increasing end-to-end delay and energy consumption of the nodes. In addition, it is found that the more distant the attackers are from the root, the higher the network performance they lead to. The main drawback of this study is that only 20 nodes are considered in the topology, of which only one is the attacker. Additionally, the impact of mobility is not studied. This attack is also analyzed in [7] with respect to two parameters: the initial location of mobile attackers (with respect to the root node) and the attacking probability. They found that the initial location has a clear effect on PDR and overhead. As expected, all performance metrics dramatically decrease as the attacking probability increases. The downside of this study is that a single attacker is used at a time, and the impact of attacker density is missing. The rank attack is analyzed with different attacker locations in [5]. The analysis shows that the bigger the forwarding load of an area, which is the sum of the forwarding load of all nodes in the area, the greater the impact the attack has on network performance. In addition, the cooperation of multiple attackers gives severe damage to the network performance.
As seen, a considerable amount of effort has been made on RPL because the performance of an RPL-based IoT network is very sensitive to mobility and routing attacks. The degree to which RPL is sensitive to these factors depends on the attack itself, the density of mobile attackers, and the OF used. That is why it is worth analyzing the performance of RPL with different OFs and attacker densities against various routing attacks in a mobile environment, which is the main contribution of this study.
## 4 Analysis of RPL Objective Functions Under Mobile Attacks
Mobility is a non-trivial concept in cyber security. On the one hand, if the victim nodes are static, attackers, due to their mobility, could target more victim nodes and hence expand their effect on the network. Moreover, mobility allows attackers to evade security solutions. On the other hand, it might limit the effects of attacks on mobile victim nodes. Here, we aim to investigate the effect of mobility from the attacker's point of view. In the future, we plan to investigate how the mobility of victim nodes limits their exposure to attacks.
### Simulation Settings
In this study, we investigate the behavior of RPL on networks where different numbers of mobile attackers are involved and different OFs are used. As explained earlier, we here extend our analysis study proposed in [8] by including mobile attackers in the networks and analyzing their effects. In order to ensure a fair comparison between the static and mobile environments, the same networks and simulation settings used in [8] are used here. We adopt the _random walk mobility model_ for mobile attacker nodes such that they move at a speed of 5 km/h from the beginning of the simulation to the end. We use the Bonnmotion tool [26] to generate a realistic movement pattern for mobile attacker nodes. The Cooja simulator [27] running on Contiki OS (version 2.7) [28] is used to simulate the networks. Each simulation scenario is run with the parameter values listed in Table 1.
### Considered Evaluation Metrics
In order to measure the performance of RPL under attacks, the following four performance evaluation metrics are used: _packet delivery ratio_, _overhead_, _power consumption_, and _latency_.
* _Packet Delivery Ratio (PDR):_ It is calculated from the total number of packets received by the root node divided by the total number of packets sent to the root node.
* _Power Consumption (PC):_ In this study, we use the average power consumption of nodes evaluated in mW. Powertrace tool [29] is integrated into Contiki OS to obtain instant power states of IoT devices.
* _Overhead (OVR):_ It is the total number of control messages transmitted by the nodes to create the DODAG. So, overhead is the summation of DIO, DIS, and DAO packets.
* _Latency (LT):_ Here, we use the average latency (in seconds) of all packets, which is the time taken from sending to receiving. Packets that are lost or dropped are excluded from the calculation.
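For reproducibility, the four metrics can be computed from simulation logs with a few lines of code. The sketch below is our own illustration, and the log structures (packet-id-keyed dictionaries and per-type control packet counters) are hypothetical rather than Cooja's actual output format.

```python
def packet_delivery_ratio(sent_ids, received_ids):
    """PDR: packets received by the root over packets sent to the root."""
    return len(received_ids) / len(sent_ids)

def average_latency(send_time, recv_time):
    """Mean end-to-end delay (seconds) over packets that actually arrived;
    lost or dropped packets are excluded since they never appear in recv_time."""
    delays = [recv_time[pid] - send_time[pid] for pid in recv_time]
    return sum(delays) / len(delays)

def overhead(n_dio, n_dis, n_dao):
    """Control overhead: total DIO + DIS + DAO packets transmitted."""
    return n_dio + n_dis + n_dao
```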
| **Simulation Parameters** | **Values** |
|---|---|
| Radio Environment | UDGM: Distance Loss |
| Objective Functions | MRHOF-ETX, MRHOF-ENERGY, and OF0 |
| TX Range | 50 m |
| INT Range | 100 m |
| Simulation Time | 1 hour |
| Area of Deployment | 200x200 |
| Number of Sink Nodes | 1 node |
| Number of Sensor Nodes | 50 nodes |
| Platform | Sky mote |
| Traffic Pattern | UDP packets, every 60 sec. by sensor nodes |
| Mobility Type | Random Walk |
| Mobile Attacker Speed | 5 km/h |

Table 1: Simulation Parameters
### Simulation Results
In this study, we take the results from [8] as the baseline performance of RPL in static environments. In addition, these results are used to highlight differences in the performance of RPL when it operates on a network with mobile attackers. For a fair comparison, we run simulations as done in [8], only making the attacker nodes mobile. Therefore, the ten networks adopted in [8] are also used in the experiments. In order to explore how much the performance of RPL changes with attacker density, one, three, and five nodes are set as mobile attacker nodes for each simulated network, which corresponds to 2%, 6%, and 10% of all nodes in the network, respectively. It is worth stressing here that the nodes that were used as attacker nodes in [8] are selected as mobile attacker nodes to ensure a reliable comparison in the experiments.
The average values of the performance metrics obtained from the ten networks with static and mobile attacker nodes are considered for evaluation. For each OF used (i.e., MRHOF-ETX, MRHOF-ENERGY, and OF0), the comparative results are shown separately for the DIS flooding, the version number, and the worst parent attacks in Figures 1-3. Note that attacker densities are denoted alongside the performance metric in the figures. Concretely, 'PDR (10%)' represents the average PDR value when 10% of all nodes are mobile attacker nodes in the network.
The PDR values obtained from MRHOF-ETX, MRHOF-ENERGY, and OF0 clearly suggest that the existence of mobile attacker nodes adversely affects the network. Although not significant, an exceptional case is observed only when MRHOF-ENERGY is used and the network is subject to the worst parent attack with an attacker density of 2%. It is also seen from the figures that the differences in the PDR performances often increase as more mobile attackers are involved in the network. Although MRHOF-ENERGY has shown the lowest PDR performance in both static and mobile environments, it is more resistant (10.20% performance degradation on average) to the mobility status of attackers than MRHOF-ETX and OF0, while OF0 is the most sensitive (24% performance degradation on average) to the attackers' mobility status.

Figure 1: Comparative performances of OFs for version number attack.
As with PDR, the existence of mobile attackers dramatically harms the PC performance of the nodes in the network for all types of attacks, and a positive correlation between the density of the mobile attackers and overall power consumption can also be observed in the figures. The only exception is observed with MRHOF-ENERGY when the network is subject to the worst parent attack with attacker densities of 2% and 6%. The smallest and largest differences in PC performance are obtained with MRHOF-ENERGY (0.40 mW increase on average) and OF0 (0.69 mW increase on average), respectively. However, whether the attackers are mobile or static, MRHOF-ENERGY yields the highest PC of the nodes, whereas the lowest PC is obtained with OF0 overall.
Unlike for PDR or PC, a clear correlation between the OVR metric and the mobility status of attackers can hardly be concluded. It is seen from the results that, particularly for the version number attack, a lower OVR value is observed when the attackers in the network switch from static to mobile. For this attack, OVR drops dramatically when attackers become mobile and MRHOF-ETX is used. This is because mobile attackers can be out of the coverage area of other nodes in the network when the trickle timers are reset, preventing falsified DIO packets from being received by others. This saves the network from unnecessarily rebuilding the whole DODAG graph through the control packets. For the DIS flooding attack, the average OVR increases as the attackers become denser in the network, which is also the case observed in the PDR and PC evaluations. As in the version number attack, a lower OVR can also be observed on average when attackers become mobile. This is because mobile attackers can, although rarely, move outside the coverage area, which prevents them from triggering neighboring nodes to send DIS packets. It should be noted from the results that a dramatic jump in OVR is observed with OF0 for the DIS flooding attack as the number of mobile attackers increases. As for the worst parent attack, we can easily conclude that the lowest OVR is obtained with all OFs regardless of whether the attacker is statically positioned or mobile.

Figure 2: Comparative performances of OFs for DIS flooding attack.
Similar to the OVR performance, higher LT values can be observed when the network is subject to version number and DIS flooding attacks, whether the attackers are static or mobile. This is due to the additional OVR introduced by these attacks. When attackers are mobile, however, they are often unable to trigger a global repair in the network, as they are likely to be out of coverage of parent nodes most of the time, which prevents the network from being congested. Accordingly, an interesting result here is that, particularly for the version number attack, a much higher LT (i.e., worse latency performance) is observed when attackers are static in the network. The smallest difference in LT performance is observed with MRHOF-ETX on average, while MRHOF-ENERGY and OF0 perform very similarly to each other.
In order to reveal how much more mobile attackers can degrade the performance of RPL compared to static attackers with respect to the OFs, we have thoroughly analyzed the differences in the performance of the OFs separately for the version number, DIS flooding, and worst parent attacks. The overall differences are given in Table 2. Note that the performance results obtained when the network is run in a static environment are taken from [8]. The values in this table represent the overall performance differences between the network being subject to mobile attackers and to static attackers, and so they indicate how much the network worsened when the attackers became mobile. Note that the biggest gaps in performance are highlighted in gray in the table, and that negative values imply that a positive effect is observed when the attackers become mobile.
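A minimal sketch of how such a difference table can be derived from consolidated simulation outputs is shown below; the file name, column names, and data layout are hypothetical and serve only to illustrate the mobile-minus-static computation:

```python
import pandas as pd

# Hypothetical consolidated log: one row per (attack, objective_function,
# metric, environment) holding the value averaged over the ten networks.
results = pd.read_csv("rpl_results.csv")

diff = (
    results.pivot_table(index=["attack", "objective_function", "metric"],
                        columns="environment", values="value")
           .assign(degradation=lambda df: df["mobile"] - df["static"])
           .reset_index()
)
# Negative degradation indicates a positive effect of attacker mobility.
print(diff.sort_values("degradation", ascending=False).head())
```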
From the performance differences given in the table, it can easily be concluded that the PDR, LT, PC, and OVR performances of the network degrade much more when the attackers become mobile and routing is governed by OF0, for all types of attacks. This finding is not surprising because, as stressed in [8], OF0 is more robust to static attackers, yielding higher network performance than MRHOF. Therefore, a much bigger reaction to the mobile attackers can be observed with OF0.

Figure 3: Comparative performances of OFs for worst parent attack.
## 5 Conclusion
In this study, the performance of RPL is evaluated against mobile attackers. This is the first study that provides a detailed analysis of OF0, MRHOF-ETX, and MRHOF-ENERGY with realistic networks that contain mobile attackers. We believe that assessing the impact of mobile attackers is a significant step toward developing security solutions for RPL-specific attacks. As a future study, we are planning to explore the effects of attacks when victim nodes are mobile, and to develop security solutions for such mobile IoT networks.
|
2305.12571 | Reproducibility Requires Consolidated Artifacts | Machine learning is facing a 'reproducibility crisis' where a significant
number of works report failures when attempting to reproduce previously
published results. We evaluate the sources of reproducibility failures using a
meta-analysis of 142 replication studies from ReScience C and 204 code
repositories. We find that missing experiment details such as hyperparameters
are potential causes of unreproducibility. We experimentally show the bias of
different hyperparameter selection strategies and conclude that consolidated
artifacts with a unified framework can help support reproducibility. | Iordanis Fostiropoulos, Bowman Brown, Laurent Itti | 2023-05-21T21:21:46Z | http://arxiv.org/abs/2305.12571v1 | # Reproducibility Requires Consolidated Artifacts
###### Abstract
Machine learning is facing a'reproducibility crisis' where a significant number of works report failures when attempting to reproduce previously published results. We evaluate the sources of reproducibility failures using a meta-analysis of 142 replication studies from ReScience C and 204 code repositories. We find that missing experiment details such as hyperparameters are potential causes of unreproducibility. We experimentally show the bias of different hyperparameter selection strategies and conclude that consolidated artifacts with a unified framework can help support reproducibility.
## I Introduction
Evaluating ML research requires reproduction studies such as those in ReScience C 1. However, reproduction studies cost researchers time, energy, and resources and can call into question the validity of the experiment being analyzed.
Footnote 1: [http://recience.github.io/](http://recience.github.io/)
[1] identifies missing and convoluted artifacts as one of the main causes of non-reproducible research. Artifacts include configuration details, details on the methodology, and code. Additionally, results can be sensitive to both the hyperparameter selection strategy and the computational budget, and differences can lead to unreproducible results [2]. Current practices specify all of the hyperparameter details in different artifacts, i.e., within the paper text, hard-coded in the open-source code repository, or as default runtime arguments. Lastly, inter-dependencies between multiple software frameworks used for a single experiment can lead to unreproducibility [3].
We identify that, despite additional effort by the replication studies, unreproducible outcomes were caused by missing artifacts, complications arising from inter-project dependency errors between ML tools, and differences in hyperparameter selection strategies that lead to erroneous analysis.
## II Reproducibility Analysis
### _Missing Artifacts_
We manually evaluate 142 papers that reproduce previous studies and are published at the open-access peer-reviewed journal ReScience C. We tagged each paper based on the issues the authors faced when reproducing the original work with tags denoting implementation issues, hyperparameter issues, and the responsiveness of the original author.
Of the studies evaluated, 80.99% were able to reproduce, 9.86% were unable to reproduce, and 9.15% were partially able to reproduce the results from the original study. Compared to the reproducible studies, unreproducible studies were missing code 15.79% more often and hyperparameter details 19.91% more often.
The results of our survey suggest that the availability of the code, hyperparameter details and medium by which such details are shared were the most important factors for reproducibility.
### _Problematic Tooling_
We quantitatively evaluate the inter-project dependency issue for reproducibility [3]. We mine 132 repositories from the 142 reproduction studies and 72 repositories corresponding to the official implementations from the original studies.
We consider a repository that uses any combination of 'Ray', 'HyperOpt', 'PyTorch-Ignite', 'Optuna', 'Hydra', and 'PyTorch-Lightning' as using 'multiple frameworks'. We identify repositories that include their hyperparameters in a file format (such as .yaml) as using a 'consolidated configuration'.
Using _multiple frameworks_ has a Pearson correlation coefficient of \(r=-0.20\) with reproducible outcomes and \(r=0.24\) with partly unreproducible outcomes. In contrast, _consolidated configuration_ was correlated with reproducibility with \(r=0.15\). We observe that defining hyperparameter values in multiple sources rather than in a consolidated location is error-prone. We found that replication studies that sourced hyperparameter details scattered in the paper text were proportionally more likely to have unreproducible results than those sourcing them from the code (fig. 2).

Fig. 1: Hyperparameter values can be scattered; at each experiment step, independent errors from the frameworks used can lead to failures, such as with _inter-dependencies_. The end result is that only a subset of the original trials are valid, which can lead to biased analysis and unreproducible results.

Fig. 2: We identify the source of the hyperparameters for the replication studies for 71 of the 142 original studies for which the hyperparameters were unconsolidated. Hyperparameters scattered in the original study's text and artifacts led to unreproducible outcomes.
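As a minimal sketch, a correlation between a binary repository property and a binary reproducibility outcome can be computed as a Pearson correlation over indicator variables (equivalently, the phi coefficient). The toy arrays below are illustrative stand-ins for the mined repository labels, not our actual data:

```python
import numpy as np
from scipy.stats import pearsonr

# 1 if the repository uses multiple frameworks; 1 if the replication
# reported a reproducible outcome (toy values for illustration).
multi_framework = np.array([1, 0, 1, 1, 0, 0, 1, 0])
reproducible    = np.array([0, 1, 0, 1, 1, 1, 0, 1])

r, p = pearsonr(multi_framework, reproducible)
print(f"r = {r:.2f} (p = {p:.3f})")  # negative sign mirrors the reported r = -0.20
```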
### _Invalid Tuning Comparisons_
To evaluate the bias introduced by different hyperparameter search strategies during ML experimentation, we use Emukit [4] to sample hyperparameters from regions of high accuracy ('Greedy'), both high accuracy and high variance ('Combined-Variance'), and high variance alone ('ModelVariance'). We use Optuna [5] to evaluate a Tree-structured Parzen Estimator ('TPE'). We also test quasi-random 'Sobol' sequence sampling and pseudo-random ('Random') sampling.
We apply each search strategy under identical experiment conditions for 10 repetitions with different random seeds and varying budgets (allocated trials) on the NATS-Bench benchmark [6] dataset. NATS-Bench includes a total of 39,419 experimental trials from an exhaustive search of topological variant networks evaluated on the CIFAR-10 dataset. We identify two topological variants: with Residual connections ('ResNets') [7] and without.
Of the strategies evaluated, those that accounted for variance underestimated the performance of 'ResNets' as they sampled from regions of high variance and low average performance. In contrast, TPE overestimated the true model performance. Random and quasi-random approaches (Sobol) had the least biased estimates of the true performance. Counter-intuitively, the quasi-random approach performed poorly on a smaller budget, but eventually converged to perform similarly to random (fig. 3). Our results indicate that comparing methods optimized with different hyperparameter selection strategies will yield misleading results; however, we do not recommend the use of a specific strategy.
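A minimal sketch of this kind of sampler comparison with Optuna is given below. The objective is a hypothetical stand-in for a NATS-Bench accuracy lookup, and the search space, seeds, and budgets are illustrative assumptions rather than our exact experimental setup:

```python
import optuna

def simulate_accuracy(lr, width):
    # Hypothetical stand-in for querying a benchmarked trial's accuracy.
    return 1.0 - abs(lr - 0.01) * 10 - abs(width - 32) / 100

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    width = trial.suggest_int("width", 8, 64)
    return simulate_accuracy(lr, width)

samplers = {
    "TPE": optuna.samplers.TPESampler(seed=0),
    "Random": optuna.samplers.RandomSampler(seed=0),
    "Sobol": optuna.samplers.QMCSampler(qmc_type="sobol", seed=0),
}
for name, sampler in samplers.items():
    for budget in (10, 50, 100):  # varying trial budgets
        study = optuna.create_study(direction="maximize", sampler=sampler)
        study.optimize(objective, n_trials=budget)
        print(name, budget, round(study.best_value, 4))
```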
## III Conclusions
From the studies we analyzed, unconsolidated experimental artifacts were the leading factor associated with unreproducible outcomes. The results indicate that failing to account for hyperparameter selection strategies, as well as missing experimental artifacts, can lead to incorrect analysis, since different search strategies and budgets can have significant effects on the statistical validity of the analysis.
Missing artifacts and differences in hyperparameter selection are caused by poor experimental practices and can be mitigated by improved tooling. ML experiments may require large computation budgets of many experimental trials to produce a valid analysis [8]. We observe that running multiple experimental trials required for the evaluation of a method using current best practices is cumbersome and error-prone.
Current practices require researchers to use different tools for different parts of the experiment, such as algorithm design, distributed execution, and hyperparameter search. We conclude that reproducibility for ML experiments would be improved with a unified framework that facilitates large-scale experimentation and enforces the consolidation of experiment details and hyperparameter search values in concise artifacts.
## Acknowledgement
This work was supported by C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), DARPA (HR00112190134) and the Army Research Office (W911NF2020053). The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof.
|
2301.00482 | FEVA: Fast Event Video Annotation Tool | Video Annotation is a crucial process in computer science and social science
alike. Many video annotation tools (VATs) offer a wide range of features for
making annotation possible. We conducted an extensive survey of over 59 VATs
and interviewed interdisciplinary researchers to evaluate the usability of
VATs. Our findings suggest that most current VATs have overwhelming user
interfaces, poor interaction techniques, and difficult-to-understand features.
These often lead to longer annotation time, label inconsistencies, and user
fatigue. We introduce FEVA, a video annotation tool with streamlined
interaction techniques and a dynamic interface that makes labeling tasks easy
and fast. FEVA focuses on speed, accuracy, and simplicity to make annotation
quick, consistent, and straightforward. For example, annotators can control the
speed and direction of the video and mark the onset and the offset of a label
in real time with single key presses. In our user study, FEVA users, on
average, require 36% less interaction than the most popular annotation tools
(Advene, ANVIL, ELAN, VIA, and VIAN). The participants (N=32) rated FEVA as
more intuitive and required less mental demand. The code and demo are available
at http://www.snehesh.com/feva. | Snehesh Shrestha, William Sentosatio, Huiashu Peng, Cornelia Fermuller, Yiannis Aloimonos | 2023-01-01T22:20:33Z | http://arxiv.org/abs/2301.00482v2 | # FEVA: Fast Event Video Annotation Tool
###### Abstract
Video Annotation is a crucial process in computer science and social science alike. Many video annotation tools (VATs) offer a wide range of features for making annotation possible. We conducted an extensive survey of over 59 VATs and interviewed interdisciplinary researchers to evaluate the usability of VATs. Our findings suggest that most current VATs have overwhelming user interfaces, poor interaction techniques, and difficult-to-understand features. These often lead to longer annotation time, label inconsistencies, and user fatigue. We introduce FEVA, a video annotation tool with streamlined interaction techniques and a dynamic interface that makes labeling tasks easy and fast. FEVA focuses on speed, accuracy, and simplicity to make annotation quick, consistent, and straightforward. For example, annotators can control the speed and direction of the video and mark the onset and the offset of a label in real time with single key presses. In our user study, FEVA users, on average, require 36% less interaction than the most popular annotation tools (Advene, ANVIL, ELAN, VIA, and VIAN). The participants (N=32) rated FEVA as more intuitive and required less mental demand. The code and demo are available at [http://www.snehesh.com/feva](http://www.snehesh.com/feva).
Video Annotation Tools User Interface Design User Interaction
Figure 1: “Speed Label” enables you to create annotations (red rectangle) in real-time by marking start (\(st_{p}\)) and end-time (\(et_{p}\)) with a single key press respectively. FEVA automatically adjusts these times of the event annotation based on your reaction-time (\(\Delta r\)) in order to give you the most precise intended times (\(st_{i}\) and \(et_{i}\)).
## 1 Introduction
Social scientists need to code conversations and behaviors of videotaped interviews and experiments that quickly add up to hours of footage [1, 2, 3]. Computer scientists need datasets with appropriately labeled ground truth for machine learning that contains video clips spanning hundreds of hours [4, 5, 6], or even thousands of hours [7]. Annotating videos is thus time-consuming and tedious.
There are existing VATs [8, 9, 10, 11, 12, 13] that offer a range of different features for video annotation activities. However, they often fail to meet the needs of researchers due to steep learning curves, complicated features, overwhelming interfaces, and poor interaction techniques, leading to longer annotation time, inconsistencies, and user fatigue.
For example, to analyze soccer games, researchers annotate player ball possessions, kicks, and assists. Tools that pause for the user to name the annotation every time, require annotating through mouse context menus, or do not allow overlapping annotations make this an extremely tedious and time-consuming task. On the contrary, coding by hand is more straightforward: an annotator can play the video at a slower speed, use a clicker count or a stopwatch [14] to lap the timestamps in real time, then enter them into a spreadsheet when completed. As a result, many annotators still code by hand and rely on spreadsheets [15]. At scale, when there are hours of such footage, computer scientists often outsource or crowdsource such annotation tasks [16, 17, 18, 19]. This can raise concerns about privacy and reliability [20]. To get around this, researchers blur faces to obfuscate participant identity; however, this can compromise the quality of the emotional judgment, causing inconsistencies in the results.
In this paper, we aim to design a video annotation tool that makes labeling tasks easy and fast. We interviewed researchers from different disciplines to understand standard practices, workflows, and tools used for video annotation. Specifically, we interviewed 13 researchers from 4 fields (neuroscience, behavioral psychology, film studies, and computer science). Researchers expressed reservations with existing VATs, leading them to avoid such tools. We further surveyed 59 VATs (the list is available in the appendix). We categorized their main features and interface design choices from firsthand experience and analyzed video tutorials for tools that were not accessible. Based on the interviews and survey results, we propose five design criteria that would benefit video annotation activities, which are detailed in section 4:
* D1. Organize space based on operational workflow
* D2. Streamline high-frequency actions
* D3. Use algorithmic support when possible
* D4. Adopt what works and redesign what doesn't
* D5. Allow flexibility
These criteria inform the design of Fast Event Video Annotation (FEVA), a video annotation tool with streamlined interaction techniques and a dynamic interface that makes labeling tasks easy and fast. A simplified UI and features such as real-time labeling with reaction-time adjustment (figure 1) and precise fine-tuning mechanisms (figure 2) help annotators create a large number of accurate labels faster than with any other VAT.
Figure 2: The shift keys activate the fine-tuning feature. The left and right shift keys correspond to the start and end time of the label, respectively. Pressing the arrow key while holding down the shift key adjusts the label’s start or end time by a single frame.
To evaluate FEVA, we conducted two comparative studies. In the first study, we compared the number of inputs users need to complete a task with FEVA versus five other event-based VATs [8; 9; 10; 12; 13]. As seen in table 3, on average, FEVA required 36.0% fewer inputs than competing VATs for the same tasks. In the second study, we asked 34 participants to perform annotations with FEVA and one of the other VATs, in counterbalanced order. We found that regardless of background, users found FEVA more intuitive 88% of the time and easier to use 91% of the time. Users also rated FEVA as requiring lower mental demand by 46% (\(p<0.00003\)), causing fewer frustrations by 62% (\(p<0.00147\)), demanding less physical effort by 41% (\(p<0.00187\)), and requiring less effort overall by 34% (\(p<0.00324\)).
In summary, our contributions in this paper are as follows:
* a comprehensive understanding of **existing VATs** with interviews, tool surveys, and pilot tests.
* a list of **criteria** for VAT tool **design**
* **FEVA**, an event-based video annotation tool that lets you annotate faster and more accurately.
* a comparative study and a **user study** that demonstrates FEVA as more intuitive and requiring less effort while creating more consistent and accurate annotations.
## 2 Related Work
In this section, we introduce event-based annotation and tools we build upon, the workflow used in the annotation process, and finally, the user interface and the related interaction techniques. To understand the annotation workflow and goals, we interviewed researchers and surveyed the literature on the steps required to complete the desired annotations. We found two primary groups of annotators: ones that annotate with the assistance of VAT and ones that rely on more heuristic methods.
### Event Based Annotation
Video annotation is the process of marking regions of interest (ROI) that are either a) spatial (object annotation, OA) or b) temporal (event annotation, EA). OA is marked on a single time instance, referred to as frame-based annotation. EA marks the period during which an event occurs. OA is prevalent in the CV community for object detection and recognition, and bounding boxes [21; 22; 23; 24], dots [25; 12], polygons (a.k.a. segmentation) [26; 27; 28], or masks [25; 29; 23] are used to mark the annotation. While some VATs also track objects [30; 31] over time [12; 25; 22; 24], this is different from EA. EA focuses on what is happening in the scene, what the objects are doing, or what is done to the objects [32; 33], rather than on the objects themselves, so start and end time marks are used. EA includes behaviors, interactions, emotional responses, speech, and movements [34; 35; 36]. Its applications range across disciplines, from computer science for activity recognition [37; 12; 38; 39; 40; 41] and psychology for behavior analysis [11; 42; 35] to journalism for tracking and presenting stories over timelines [43]. However, event annotation can be challenging due to the complex nature of temporal navigation, the difficulty of marking sliding events at the exact desired time, and the effort of using the tools' available features to facilitate the annotation. Therefore, the focus in developing FEVA was to simplify the workflow and optimize the steps for event annotation, making annotation easier and faster.
### Heuristic Workflows
Some researchers prefer not to use specialized video annotation tools. For example, some researchers thought it was easier to use a clicker to count the times certain events occurred. This is error-prone, less reliable, and makes it difficult to review the records in the future. Even though some VATs can provide similar functionality [12] using keypresses, researchers hesitated to use VATs for fear of the initial setup time and the repeated learning curve for new coders. In another study [14], research assistants (RAs) used a stopwatch to obtain the time taken "from when the OK button was pressed to when the device beeped to signal completion of the RR count." These clicker counts or stopwatch intervals are recorded either by hand with pen and paper or entered in a spreadsheet. In another workflow, an interpersonal-relationship researcher described how, given a video of two people interacting, they asked RAs to respond to research questions set up by the researchers while watching the video. For example, multiple RAs focus on the entire interaction or a specific individual in the video and respond on a Likert scale to how attentive one of the partners was when the other spoke. These are recorded either on paper or online using Qualtrics or Google Forms.
### Video Annotation Tools Workflows
The primary workflow using a VAT does not differ from the heuristic methods for simple coding. The main difference is the higher learning curve and a longer setup process. However, as the coding gets more complex, heuristic methods
have a diminishing return on speed as you have more annotators. The annotation process is much slower, requiring many steps to be done manually that tools would typically facilitate. However, not all VATs are created equal. The workflow design, screen layout of features, interaction steps, and techniques differ significantly across VATs.
Computer scientists and engineers design most VATs for computer vision, machine learning, and robotics research and applications [12; 35; 4; 16; 7]. However, there are a handful of tools specifically designed for and used by other domains, such as film studies [13; 44], where a VAT allows for character and scene analysis. Journalists use VATs to annotate and synchronize events from multiple sources to present a cohesive story [43]. In sports, teams annotate games, significant events, and moves to study for their teams [45; 46]. While most tools have different focuses, the primary annotation is marking the binary presence or absence of an event and its temporal range. To this end, event annotation is the foundation that makes further work easier or, in some cases, possible, so it is essential to have accurate annotations. Support for synchronized multi-camera views, micro- and macro-level visualization of events in a timeline, and convenient searching and reviewing of video make FEVA a favorable choice in these areas.
#### 2.3.1 Layout design: Balance of features and space usage
Some tools have many features with complicated workflows and many permutations of features that can be customized and used, so the UI is packed with small windows, buttons, and layers of menu items. These tools take much longer to learn and get used to; however, they can be powerful once mastered. With so many features, such as media player controls, annotation controls, and visualizations, managing the available screen real estate can be quite challenging. Some tools group features into smaller windows [13; 9; 8], organize them through multiple levels of menus [9; 10], or expose them through different operational modes [12; 10; 13]. While this helps organize the functionality, it is not easy to access, and one needs a lot of practice to remember the various steps required to use the software. There are other, much simpler tools that serve a very niche purpose [46; 47; 44; 48; 49]; their layouts are simple, and they make excellent use of the screen real estate for the functionality they offer. These tools are easy to learn and start using; however, their features are limited. Our interviews found that most researchers' and annotators' needs lie in between: most features of the complex VATs are never used, while the simple VATs limit users to the niche they offer and must be paired with other tools to fill the gap. We designed FEVA with the motivation of creating a tool that is simple to navigate and use but comes with features that more complex tools offer, without being so overwhelming that one has to take a course to use it.
### Adoption
The keyboard shortcuts in VIA [12] are intuitive and practical; in particular, the context-based hints that pop up are something most tools lack and that FEVA adopts as well. FEVA uses popular shortcuts that have become the standard for media players, such as the spacebar to play or pause and ctrl+Z and ctrl+Y to undo and redo. VIAN [13] is the only tool in which you do not have to select a label before dragging it with the mouse, which is also the case in FEVA. Advene [9], ELAN [10], and VIA [12] provide alternative ways to visualize or execute playback, such as a continuous mode. While useful in some instances, most annotators used the default mode without changing it, which could be confusing. While some tools [12] allow one to create a label while the movie is playing, they only expect the start time and use a fixed length, so most users need to return to the label and readjust it. FEVA improves upon this interaction by allowing a second key press to mark the endpoint.
## 3 Understanding the State-Of-The-Art VATs and Their Comparisons
### Target Users
To understand if researchers from different disciplines annotate their data, what that entails, and what the workflow looks like, we interviewed 3 neuroscience researchers, 3 behavior psychology researchers, 2 film study instructors, and 5 computer scientists. The interview was semi-structured to answer the following 3 questions:
* **IQ1**: The nature of their research involving human studies, the kinds of data collected, and if it entails video.
* **IQ2**: The workflow in the data collection process, post-processing of these data, and code generation.
* **IQ3**: The structure and workflow for annotating the videos, and how the annotations are used after.
Post-interview, the responses were tallied and coded for technical challenges. The interview insights (II) are as follows:
* **III1**. There were three primary temporal annotations. 1) A binary label to mark the presence or absence of certain events. For example, researchers were interested in counting "how many times a person touched their
face as one indicator of how nervous the participants were." 2) A range label marks the beginning and end of an event of interest. "Participants annotate videos for specific moments. For instance, a couple might interact, then the researchers have each couple member watch the video and annotate whenever a specific thought or feeling occurred to them. They do this with both participants to see convergence and dissonance of thoughts, feelings, goals, etc., etc." 3) Certain kinds of labels, such as mood or scenario, lasted longer than an action label. "For instance, we might want to compute a rating of how responsive or caring one individual is to their relationship partner. So a Research Assistant might watch an entire interaction, focusing on a specific individual in the video, and then answer a question about how attentive they are when their partner speaks." Most VATs do a poor job of supporting these labels, so researchers had workarounds such as creating multiple label tracks, each dedicated to a specific response.
* **II2**: The time, effort, and cost for data annotation are exponentially high. So any system that made some improvements to make the annotation faster and more reliable was always a huge win. "We spend a lot of our RA hours doing these annotations. If there were a tool that could cut the time by even an hour, I definitely would be using that tool."
* **II3**: Even with very explicit codes, annotated data often had low temporal precision, so the agreement rules were relaxed. "Many RAs review and annotate the videos. There will be slight variations in when each RA thinks a certain event happened."
* **II4**: Collaboration and sharing of the annotation during the annotation process and analysis step was cumbersome and required multiple steps to be in sync between the teams. "...the students need to refresh the page if they are working on annotating the same clip simultaneously to see what their classmates are writing. That is also a problem for me since I like to add my comments while they are working (to encourage them to elaborate on points or explore a new point)."
* **II5**: Current VAT provided a poor interface for researchers to explore, analyze, and search the annotated data.
* **II6**: Crowd-sourced or AI-generated annotation often needed so much review that it was easier and faster for researchers' RAs to do the annotations. "So I used the [online automatic speech recognition tool]. With this, with the premium, I still have to go in and edit everything. The labels for the actions and the labels for the word-for-word speech need to match up in terms of start and end time. So creating the labels by hand is actually easier for me."
This paper focuses on II2, II3, and II5, which help shape the design decisions made in section 4.
### Comparing Video Annotation Tools specifications
We created an extensive list of 59 VATs from the literature, as listed in appendix B. For each, we found websites, download links, or shared open-source code, or at a minimum online videos of either talks by the authors or how-to videos. We cataloged the typical features most software supported as well as the unique features and techniques distinctive to each tool. From this extensive survey, we share the tables for the five selected tools. We detail the selection method in section 6.1. Two types of tables were created:
* based on high-level taxonomy as shown in table 1 and
* based on features as shown in table 2
As a contrasting difference, here is an example of the steps required to create a new annotation using 3 different VATs, as shown in figure 3. We make a more thorough step-by-step comparison in the evaluation section in 6.2.
* In the upper left corner is ANVIL. It requires you to double-click the starting point and click on the endpoint time mark to select the region. Then right-click the shaded area to select from the context menu to create and edit the label.
* On the upper right corner is VIAN. In VIAN, you right-click in the space between the tracks and the ruler. Then select create an annotation, then select text as the type for text annotation.
* In VIA, you create the track for the particular annotation and name it ("Bird" in this case), then play the video till the movie's starting point and hit the letter 'A' to create a fixed-size annotation. You can go back and adjust the annotation.
We surveyed the layout and functionalities available in a number of event annotation tools by using them to annotate videos or by watching videos posted by the authors. We noticed the following:
| **SN** | **Features** | **Description** | **Advene** | **ANVIL** | **ELAN** | **VIA** | **VIAN** | **FEVA (Ours)** |
|---|---|---|---|---|---|---|---|---|
| 0 | Last Updated | Month Year | Jun 2020 | Mar 2019 | Mar 2020 | Jul 2020 | May 2020 | Sep 2020 |
| 1 | SW Platform | Cloud vs Edge | Edge | Edge | Edge | Edge | Edge | Cloud or Edge |
| | | Native vs Web | Native | Native | Native | Web Based | Native | Web Based |
| | | Modular vs Static | Static | Static | Static | Static | Static | Modular |
| 2 | License | Open Source vs Proprietary | Open Source | Proprietary | Open Source | Open Source | Open Source | Open Source |
| | | Commercial vs Open Access | Open Access | Open Access | Open Access | Open Access | Open Access | |
| | | Maintained vs Unmaintained | Maintained | Maintained | Maintained | Maintained | Maintained | |
| 3 | Cost | Free, Low Cost, vs Expensive | Free | Free | Free | Free/Low Cost | Free | |
| | | One-Time vs Subscription | N/A | N/A | N/A | N/A | N/A | |
| 4 | Collaboration | Single User | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Multi-User (Simultaneous) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| | | Crowd | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| 5 | Target Users | Technical vs Non-Technical | Technical | Technical | Technical | Technical | Technical | Both |
| | | Academic vs Commercial | Academic | Academic | Academic | Academic | Academic | |
| 6 | Input Type | Image | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| | | Video | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Audio | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ |
| 7 | Annotation | Object | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
| | | Action | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Events | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Hybrid | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
| 8 | Annotation Approach | Manual | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Automatic | ✗ | ✗ | ✗ | ✗ | ✗ | |
| | | Hybrid | ✗ | ✗ | ✗ | ✗ | ✗ | |
| 9 | Annotation Format | JSON | ✗ | ✓ | ✗ | ✗ | ✓ | |
| | | XML | ✓ | ✗ | ✓ | ✗ | ✗ | |
| | | SQL | ✗ | ✗ | ✗ | ✗ | ✗ | |
| | | Others | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ |

Table 1: VAT taxonomy comparison table.
| **SN** | **Features** | **Description** | **Advene** | **ANVIL** | **ELAN** | **VIA** | **VIAN** | **FEVA** |
|---|---|---|---|---|---|---|---|---|
| 1 | Annotation Types | Object Bounding Box | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
| | | Object Mask | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| | | Object Dot | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
| | | Temporal Events | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 2 | Playback Controls | Play/Pause/FF/RR | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Speed +/- | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Timeline Jump | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 3 | Preview | Thumbnail Previews | ✓ | ✗ | ✗ | ✗ | | ✓ |
| 4 | Label | Multi-track | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Group Tracks | ✗ | ✗ | ✗ | ✗ | ✗ | |
| | | User-Defined Label Types | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Show/Hide/Collapse/Expand | ✗ | ✗ | ✗ | ✗ | ✗ | |
| 5 | Speed Label | Sudo-Pedal | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
| | | Transcribing Pedal Support | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| 6 | Resize | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 7 | Move | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 8 | Add | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 9 | Delete | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 10 | Edit | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 11 | Import | | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ |
| 12 | Import Other Formats | | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ |
| 13 | Video Support | MP4 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| | | Others | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ |
| 14 | Cameras | Multi-Cam | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ |
| | | Switch View | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ |
| | | Instant Switch View | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ |
| 15 | History | Undo/Redo | ✓ | ✗ | ✓ | ✗ | ✓ | ✓ |
| 16 | Search | Keyword | ✓ | ✗ | ✓ | ✗ | ✓ | ✓ |
| | | Filter by Label Type | ✓ | ✗ | ✓ | ✗ | ✓ | ✓ |
| 17 | User Config | Remember/Save | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ |
| 18 | Modular/API | Add-In Support | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ |
| | | Full Open Source Support | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ |
| | | Custom Layers | ✗ | ✗ | ✓ | ✓ | ✓ | ✗ |
| | | Custom Tracks | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 19 | Layers | Show/Hide Layers | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ |
| | | Human Joint Keypoints Support | ✗ | ✗ | ✗ | ✗ | | ✓ |
| | | Human Bounding Box Support | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
| | | Human Mask Support | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| 20 | Export Support | Video Clips | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ |
| | | Image Frames | ✗ | ✓ | ✗ | ✗ | | ✓ |
| | | Closed Caption | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ |

Table 2: VAT features comparison table.
* In most VATs, except VIA [12], the screen was filled with buttons, menus, and windows. Essential features were buried in multiple levels of menu items, and the available screen real estate was poorly utilized. The new VAT needs to be simple to understand and easy to navigate.
* Through video instructions and pilot testing, we noticed that only limited features and controls were needed, primarily a) project and data management, b) annotation management, c) video navigation, d) annotation space, and e) tool configuration. The new VAT needs a layout in which these features are upfront, while unnecessary features are eliminated or allocated to rarely used spaces.
* The annotation visualization was limiting. An entire track was dedicated to a single label [13], and no overlapping time labels were possible [8; 10]. Only some tools [12; 9] allow overlapping time labels; however, it is difficult to distinguish and manipulate such labels. The new VAT needs to organize the annotations automatically and make better use of the annotation space.
* While some tools [12; 11] have good annotation UX controls to create and edit temporal placements, the mouse and keyboard had to be used interchangeably between annotation steps. Some tools [8; 9; 10] have a very cumbersome way to move or resize annotations. Some tools provide a history option to undo/redo [9; 10], while others provide no way to backtrack user mistakes or perform experimental steps. The new VAT needs simple mouse controls and default keyboard shortcuts while allowing users to redefine them.
* All tools required you to stop and annotate except VIA [12], which provided real-time annotation during video playback. However, VIA only allowed fixed-length annotation flags requiring adjustment of the endpoint later. Furthermore, VIA has poor visualization and lacks control for overlapping labels. The new VAT needs a fast real-time way to continue annotating without stopping.
Figure 3: Example of steps needed to annotate in 3 different VATs. The Upper left screenshot is ANVIL, the upper right is VIAN, and the bottom screenshot is of VIA.
## 4 Design Considerations
According to the interviews, literature survey, and the use of the tools, we propose five design criteria that would benefit video annotation activities:
* D1. _Organize space based on operational workflow:_ Features should be laid out in a logical flow of the workflow while not veering too far away from standard software conventions. Features not needed in that context should not be visible or active to free up valuable screen real estate. High-frequency controls and features should be in the middle of the screen.
* D2. _Streamline high-frequency actions:_ Highly repeated actions, such as creating annotations and fine-tuning them, should be optimized for easier and faster execution. Try to accomplish them with a single key press when possible. Stick to a single device (i.e., do not require some steps to use the keyboard and others the mouse, wasting precious time transitioning between devices). Leverage D1 when possible to minimize the user interaction required to accomplish the task.
* D3. _Use algorithmic support when possible:_ Whenever possible, offload the user and rely on the algorithm to take on the burden. For example, when the timing is concerned, consider user reaction time and adjust for the lag in the user input from the intended time. Additionally, allowing external modules such as movement detection, human detection, speech detection, and recognition can offload users' need to find and annotate the events manually. However, we should be cautious that no matter how good machine learning modules are, no algorithm is 100% accurate. These should be considered as only additional assistance and not to be completely relied on for ground truth generation.
* D4. _Adopt what works and redesign what doesn't:_ Instead of reinventing the wheel, adopt from other tools what works well based on user tests or pilot tests, not on instincts and personal preferences.
* D5. _Allow flexibility:_ While D4's tested default input methods are great, adding flexibility by providing redundancies such as mouse context menus and keyboard shortcuts might be more intuitive for some users. Additionally, let users redefine the key mapping, as people have personal preferences.
## 5 Feva
Figure 4: A screenshot of FEVA, where a participant interacting with a robot, is being annotated.
The design considerations inform the minimalist design of FEVA, with most of the screen real estate allocated for the video and the annotation. Inspired by the clicker/ stopwatch methods and transcribing pedal, we created the speed label. This feature lets you create labels during video play, where a user can press a key to mark the starting and stopping points and continue to watch, creating multiple labels with desired lengths. The system additionally accounts for the user's reaction time and adjusts the marked times as seen in figure 1.
The speed label enables users to annotate in real time, the fine-tuning control lets the user make frame-level adjustments, and the label organizer arranges the labels without ever overlapping them, keeping each directly accessible. The flat UI is responsive and aesthetically pleasing. Following the F-shape design [50], the initial workflow items are on the top left, the most viewed items are in the middle of the screen, and the most interacted-with components are at the bottom of the screen. The most used elements stay in the hot zones, while less used elements such as camera selection and configuration buttons sit on the right-hand side in a less noticed area, making them easy to tune out unless needed. The UI has redundant user inputs for flexible and fast interaction to cater to both novice and expert users, and an underlying context- and configuration-dependent UI model comes to life only when needed. Expert users can annotate videos faster than real time by using the media speed control and the speed labeler without touching the mouse. Researchers can also use the zoom feature to visualize and analyze labels at a micro or macro level. The FEVA UI can display the largest number of labels on one screen without losing their meaning.
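The fine-tuning interaction from figure 2 can be summarized by the following minimal sketch; the function name and the assumed 30 fps frame duration are illustrative stand-ins, not FEVA's internals:

```python
FRAME = 1 / 30  # assumed frame duration for a 30 fps video (illustrative)

def nudge(label, edge, direction, frame=FRAME):
    """Move one edge of a (start, end) label by a single frame.
    edge: 'start' (left shift key) or 'end' (right shift key);
    direction: -1 for the left arrow key, +1 for the right arrow key."""
    start, end = label
    if edge == "start":
        start = min(start + direction * frame, end)  # start never passes end
    else:
        end = max(end + direction * frame, start)    # end never precedes start
    return (start, end)

print(nudge((12.15, 14.85), edge="end", direction=+1))
```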
### Workflow
To create a project, upon selecting a video, FEVA imports it and creates an empty annotation file, also referred to as a label or dataset file (D1, D2). Default tracks will be loaded, which can be customized from the configuration (D3). Users can add or remove tracks. To play or pause the movie, users can use the media player overlay on the main video or the keyboard shortcut 'spacebar.' You can use your mouse scroll button to navigate the video in the timeline. You can also use the arrow keys, with or without the 'ctrl' key, to move the video by different amounts. You can click and drag the filmstrip, roll over the global timeline and click at the desired time, or double-click the filmstrip or the local timeline window (D1, D2, D5). You can also change the movie playback speed. You can zoom in and out at different time intervals using the + and - icons around the white box on the global timeline, or by rolling over the timeline and using ctrl+scroll.
To annotate, users can right-click on the tracks and select the label type they want to create. Users can also hit the letter 'A' key on the keyboard once to mark the start and a second time to mark the stop (see figure 1). This is called the speed label, as you can keep annotating without stopping the movie. Speed annotation requires two passes, but in our pilot tests, the speed label is at least 1.5x faster than traditional methods.
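The reaction-time adjustment from figure 1 amounts to shifting both pressed times back by the annotator's estimated reaction delay \(\Delta r\). A minimal sketch, assuming a fixed per-user delay (FEVA's actual estimation of \(\Delta r\) is not detailed here):

```python
def adjust_for_reaction(pressed_start, pressed_end, reaction_delay=0.25):
    """Shift the pressed times (st_p, et_p) back by the annotator's
    estimated reaction delay to recover the intended times (st_i, et_i)."""
    st_i = max(pressed_start - reaction_delay, 0.0)  # clamp at video start
    et_i = max(pressed_end - reaction_delay, st_i)
    return st_i, et_i

# keys pressed at 12.40 s and 15.10 s with an assumed ~250 ms reaction delay
print(adjust_for_reaction(12.40, 15.10))  # -> (12.15, 14.85)
```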
Following left-to-right and top-to-bottom conventions, workflow such as loading the project, dataset, and manipulating the video and labels are organized in that order. The main video is at the center of the screen occupying the most space. Below the video are the annotation tracks that follow conventions and eyes and the hand layout of users' gaze and action areas. See figure 5 and figure 4.
### View/ Layout
As seen in figure 5, 'b' shows the project selector from which you create a new project and import videos. The 'd' shows your label file selector to create, load, save, merge, import, and export labels. You can search using the 'e' and filter labels by type. You can double-click the label from the '2' label list to find the corresponding label in the timeline 'j.' You can see the thumbnail preview in 'h' along with the current time window 'g' and the global timeline 'f.'
### Control: User input system
Every media control operated by the mouse also has an associated keyboard shortcut. For flexibility and efficiency, the GUI serves novice users following the principle of recognition rather than recall [51], while shortcuts give expert users faster control. Users can press the play button overlaid on the video or use the spacebar key at any point to play or pause the video. You can also speed up or slow down playback. To create an annotation while the video is playing, you can press the letter 'A' twice to indicate the start and the end of the label. This can be customized from the configuration. A blank label is created after adjusting for your reaction time. You can also double-click an empty space on a track, or right-click the tracks and select a label type you wish to create. While these shortcuts were selected to be consistent with existing standards from other VATs, all shortcuts can be easily user-defined in the user configuration.
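A minimal sketch of such a redundant, re-definable binding scheme is shown below; the binding names and defaults are hypothetical and only illustrate the idea of user-overridable shortcuts:

```python
# Hypothetical default bindings; every action is also reachable via the GUI.
DEFAULT_KEYMAP = {
    "space": "play_pause",
    "a": "speed_label_mark",      # first press marks start, second marks end
    "ctrl+z": "undo",
    "ctrl+y": "redo",
    "shift+left": "nudge_start",  # fine-tune the start time by one frame
    "shift+right": "nudge_end",   # fine-tune the end time by one frame
}

def resolve(key, user_overrides=None):
    """Look up the action bound to a key, honoring user redefinitions."""
    keymap = {**DEFAULT_KEYMAP, **(user_overrides or {})}
    return keymap.get(key)

print(resolve("a"))                      # -> 'speed_label_mark'
print(resolve("a", {"a": "add_label"}))  # user-redefined shortcut
```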
To navigate, you can use your mouse to scroll, click and drag the filmstrips, or double-click the desired point in the filmstrip or the global progress bar. You can also jump to a specific label by double-clicking it in the label list. By providing multiple redundancies, with both mouse controls and keyboard shortcuts that can be re-customized, users can choose whichever input they prefer.
### Model/ State: Underlying UI support system
The UI makes use of the limited screen space (the x-y plane) by using the z-axis to layer the displayed components, and it changes state based on context, such as whether the video is playing, whether labels are selected or being edited, whether your mouse is hovering over a component, or whether a specific feature is enabled in the configuration. Every area is packed with features that feel intuitive based on the affordances users naturally assign to those components. An example is the video player area: when the video is playing, one sees only the video; when the mouse pointer hovers over it, the media controls, current time and frame number, and layer control are displayed. From the configuration, you can also show or hide the video, human body keypoints [52], the bounding boxes of humans or objects, segmentation masks, etc., all of which are extensible for researchers to customize.
## 6 Evaluation
To evaluate FEVA, we compared FEVA with existing state-of-the-art (SOTA) VAT with two studies.
* Interaction Benchmark: To evaluate the theoretical limits of how fast users could annotate with each VAT, we counted the number of user inputs required to perform various tasks.
* User Study: To evaluate user experience based on the user's perceived workload with each VAT, we conducted user studies where the users provided feedback based on their experience.
### The State-of-the-Art VAT Selection Method
We first created a master list of highly cited VATs that we could download and use. In this list, we only included software that supported temporal annotation and could be downloaded, installed, and run without taking extreme measures, for practical reasons. Therefore, VCode [11], SVAT [53], and VACA [35] could not be included. Tools that were too specific, such as ToolScape [47], HistoryTracker [46], and CASAM [54], were removed from the list due to missing functionalities such as start and end times. Tools focused on crowd-sourcing, such as Glance [55] and CoAT [42], were excluded as we conducted a single-user study. Using these criteria, we narrowed down the tools to be compared in the study. EagleView [36] was extremely unstable to run, so we could not test it. We narrowed the list down to Advene [9], ANVIL [8], ELAN [10], VIA [12], and VIAN [13].

Figure 5: FEVA Screen Default Layout. Black boxed areas are 1) Toolbar, 2) Label list, main video player, and multi-camera selector, 3) Video navigation timeline, and 4) Label tracks. Key components are a) Logo, b) Project selection, c) toolbar icons, d) Label data file selection, e) Search bar, f) Global progress bar timeline, g) Local timeline ruler, h) Filmstrips, i) Label type, and j) tracks and labels.
### Interaction Benchmark
To compare the steps required to do a particular task with each VAT, we counted the number of clicks, double-clicks, mouse movements, and keyboard key presses and took a cumulative sum as seen in table 3. If there were multiple ways of completing a task, we included the fastest method for that tool. For example, if you can press Ctrl+N to create a new project (keypresses count = 2) or can move your mouse to the main file menu, click the file, move the mouse to a new project, and click on the menu item (mouse move = 2 and mouse clicks = 2, total = 4), then we took the lesser of the two.
Table 3 shows the 15 tasks considered for the evaluation. These included basic settings, label creation, and manipulation tasks. We selected tasks that the majority of the VATs could do. If a VAT lacked a feature, we assigned it the worst count received by the competing VATs. For example, VIA [12] does not support "undo" or "redo" and received a count of 2.
The number of inputs required in FEVA is significantly less than in the SOTA, as shown in table 3. On average, FEVA requires 36% less input than the SOTA. Based on the T-test, FEVA required significantly less input than all tools except VIA, for which the difference was not statistically significant.
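For reference, the significance test in the last row of Table 3 can be reproduced as a paired t-test over the per-task input counts. Below is a minimal sketch for the FEVA-vs-Advene comparison; we assume a two-sided paired test, which closely reproduces the reported p-value:

```python
from scipy.stats import ttest_rel

# Per-task input counts from Table 3 (rows 1-15 plus the
# "play only label video" sub-task): FEVA vs. Advene.
feva   = [3, 2, 3, 7, 5, 4, 4, 4, 3, 2, 2, 4, 1, 1, 2, 2]
advene = [7, 4, 8, 4, 4, 6, 6, 6, 3, 3, 2, 4, 2, 2, 6, 2]

t, p = ttest_rel(advene, feva)  # paired: same tasks, different tools
print(f"t = {t:.2f}, p = {p:.4f}")  # close to the reported p = 0.0255
```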
### User Study
To evaluate the user experience, we conducted a user study where participants used two VATs: FEVA and one of the selected SOTA VATs (Advene, ANVIL, ELAN, VIA, and VIAN), assigned in a round-robin fashion. We counterbalanced the order of the two tools by alternating the order with each next participant. Due to the COVID-19 regulations, we conducted our study via Zoom's remote shared screen and control feature. This introduced some lag in the user experience; however, since both tools were used remotely, we assumed that the effects of the lag on the outcome were not significantly discriminant. Participants were given an approximate time range in which an event occurs, with clear descriptions of the events to annotate; for instance, as seen in figure 6, "between 4 minutes and 20 seconds and 4 minutes 40 seconds, please annotate bunny jump roping." Participants completed approximately 24 tasks, or stopped earlier if they gave up on the tool.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**SN** & **Tasks** & **Advene** & **ANVIL** & **ELAN** & **VIA** & **VIAN** & **FEVA** \\ \hline
1 & Create a project + Import a video & 7 & 8 & 8 & 7 & 14 & **3** \\ \hline
2 & Create a single label & 4 & 6 & 7 & **2** & 5 & **2** \\ \hline
3 & Create multiple labels & 8 & 12 & 14 & **3** & 9 & **3** \\ \hline
4 & Create and name label & **4** & 9 & 7 & 7 & 9 & 7 \\ \hline
5 & Edit labels & 4 & 7 & **3** & **3** & **3** & 5 \\ \hline
6 & Resize labels & 6 & 5 & 6 & **4** & **4** & **4** \\ \hline
7 & Move labels & 6 & 12 & 6 & 5 & **4** & **4** \\ \hline
8 & Change label type & 6 & 6 & 6 & 6 & 6 & **4** \\ \hline
9 & Delete labels & **3** & 5 & 4 & **3** & **3** & **3** \\ \hline
10 & Find labels & 3 & 3 & 5 & 5 & 3 & **2** \\ \hline
11 & Save labels & **2** & **2** & **2** & **2** & **2** & **2** \\ \hline
12 & Load labels & **4** & **4** & **4** & **4** & 8 & **4** \\ \hline
13 & Navigate video & 2 & 6 & 2 & **1** & 2 & **1** \\ \hline
14 & Play/Pause video & 2 & **1** & 2 & **1** & **1** & **1** \\ \hline
 & Play only label video & 6 & 5 & 4 & 3 & 6 & **2** \\ \hline
15 & Undo/ Redo & **2** & 2 & **2** & 2 & **2** & **2** \\ \hline
 & **TOTAL SCORE** & 69 & 93 & 82 & 58 & 81 & **49** \\ \hline
 & **FEVA Faster by** & **29\%** & **47\%** & **40\%** & **16\%** & **40\%** & \\ \hline
 & **p-value** & 0.0255 & 0.0013 & 0.0149 & 0.1321 & 0.0223 & \\ \hline \hline
\end{tabular}
\end{table}
Table 3: The list of tasks done using the fastest possible methods in each software (shortcuts where applicable). Each number reflects the cumulative sum of mouse clicks, double clicks, movements, and key presses. The last two rows show how much faster FEVA is than each SOTA tool in percent (%) and the T-test p-values.
the tasks in the first software, they filled out the NASA Task Load Index [56] questionnaire on a 5-point Likert scale. This was then repeated with the second tool.
#### 6.3.1 Participants
We recruited 34 participants from the University community via email and social media forums. Two participants had to be dropped due to Zoom connection issues, leaving N=32. The participants were 53% male and 47% female, with a mean age of 30.4 +/- 5.9; 84% had no video annotation experience, and 66% had no experience with video editing.
#### 6.3.2 Procedure
For this study, we trained annotators for 3 minutes with a short training video that taught the basics of media controls, video timeline navigation, and how to create and edit annotations, followed by practice trials for each item with the research coordinator answering any questions. They then spent another 2 minutes exploring the tool on their own. The participants spent the next 5 minutes practicing the tasks assigned individually by the researcher, during which they were allowed to ask questions. Once they were comfortable, they were given the four categories of tasks shown in the list below in random order, with each type of task repeated at least three times. The tasks chosen were the most fundamental and frequently repeated tasks annotators must do during video annotation. The tasks ranged in complexity, with some requiring combinations of the fundamental steps. For instance, some tasks simply asked the participant to navigate the video to 2 minutes and 20 seconds, while others asked participants to annotate three consecutive events between specific times and name them appropriately. There were no time limits to perform the tasks. Participants worked on the standard freely available "Big Buck Bunny" video, which is approximately 10 minutes long at 720p resolution. Figure 6 demonstrates one example task. After completing all the tasks with the first tool, the participants filled out the NASA TLX workload questionnaire and repeated the tasks with the second VAT.
Figure 6: An example of a task where a participant was asked to annotate an event of the bunny jump roping between the 04:28 minute mark and the 04:31 minute mark in FEVA.
We conducted the following four categories of basic tasks during the user study:
* Navigate the video: a) play/pause the video, b) jump to a specific point in the timeline, and c) jump to a precise point where a particular label is.
* Label Creation: a) create a new label at a specific time with a specific length, b) create a label when a character shows a specific behavior (e.g., a character yawns, eats an apple, etc.), and c) create multiple labels in a row.
* Label Content Manipulation: a) write text annotation for a created label and b) modify annotation text.
* Label Temporal Manipulation: a) move the label by a specific number of seconds and b) resize the label to change its starting time or ending time to match a specific behavior by the character in the video
#### 6.3.3 Results
In this study, on average, users felt less mental demand by **46%** (_p_ < 0.00003) with FEVA than with the SOTA, less physical demand by **41%** (_p_ < 0.00187), needed less effort by **34%** (_p_ < 0.00324), and felt less frustration by **62%** (_p_ < 0.00147). The differences in the temporal demand and performance level indexes were not significant. We attribute this to no time limit being enforced during the study; except for one user on VIAN who gave up, all users completed all the tasks.
### User Feedback
#### 6.4.1 The Good
On average, users expressed that FEVA was more intuitive 88% of the time and easier to use 91% of the time. The features users liked most were speed labeling, fine adjustments, the "cooler feel," label locating, and label playback. A few users wished they could go back and change their feedback for the first tool once they had used the second tool. This was typical when they felt they had given the first tool too high a score after using FEVA; in contrast, this did not happen the other way around. One user said they wanted to start a fan club and volunteered to annotate because it was "so fun."
#### 6.4.2 The bad
One user mentioned that they preferred a traditional windowed UI for serious work, and thought FEVA looked too much like a mainstream tablet app. Another user stated, "while I think it was fine for me, I don't think my mom will be able to use either of the tools. So I gave low scores to both of them." A few users complained that they did not like pressing the enter key to confirm a label after editing it. Treating a click elsewhere as a cancel action, which discards what was just typed, was also unpopular.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**FEVA** & **Mental Demand** & **Physical Demand** & **Temporal Demand** & **Performance Level** & **Effort** & **Frustration Level** \\ \hline
**MEAN (n=32)** & 1.8 & 1.5 & 1.8 & 4.3 & 2.3 & 1.7 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: FEVA's average scores on the NASA Task Load Index with a 5-point Likert scale.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**FEVA vs. (\%)** & **Mental Demand** & **Physical Demand** & **Temporal Demand** & **Performance Level** & **Effort** & **Frustration Level** \\ \hline
**Advene (n=5)** & 75\% & 57\% & 0\% & 14\% & 67\% & 183\% \\ \hline
**ANVIL (n=8)** & 25\% & 36\% & -8\% & 3\% & 17\% & 6\% \\ \hline
**ELAN (n=8)** & 29\% & 42\% & 13\% & 12\% & 38\% & 83\% \\ \hline
**VIA (n=8)** & 44\% & 36\% & 6\% & 11\% & 22\% & 36\% \\ \hline
**VIAN (n=3)** & 120\% & 40\% & 20\% & 17\% & 83\% & 140\% \\ \hline
**MEAN (n=32)** & **46\%** & **41\%** & 5\% & 10\% & **34\%** & **62\%** \\ \hline \hline
\end{tabular}
\end{table}
Table 5: How FEVA compared to the other VATs. A positive number indicates how much better participants perceived FEVA to be than the other VAT in the respective NASA TLX dimension, and a negative number how much worse.
#### 6.4.3 The ugly
The majority of the confusion, however, was about the global and local timelines, which lack a clear separation and share the same preview component. Participants remotely controlling the UI of the VATs on the research coordinator's computer over a Zoom call noticed a lag in the effect of their actions. Some users complained that the UI did not update fast enough due to the Zoom lag: "Maybe because I am controlling your computer through Zoom, but a huge delay made it harder for me to resize the labels."
## 7 Implementation
### Framework and Dependencies
We used ReactJS [57], an efficient component-based JavaScript library, and wrote the architecture to be lightweight and responsive so that it works on most computers. The installation has only two dependencies: Python and Flask. The front end relies on standard HTML, JavaScript, ReactJS, and CSS. We designed all the controls to optimize for performance and flexibility of customization. We detail the UI layout breakdown in figure 5 and section 5.
### Architecture
FEVA uses a simple server-client architecture typical of many web-based applications. The server side runs on Python 3.5x or newer with Flask as the web server, and primarily handles serving data (web content, annotation data, and video streaming) when requested by the client-side application. For FEVA, most of the modules and the design are on the client side. We show more details of all the modules and their interactions in the block diagram in appendix C.
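As a rough sketch of this architecture, the minimal Flask server below serves a compiled front end, label data, and a video file; the route names, JSON format, and in-memory annotation store are our own illustrative assumptions rather than FEVA's actual API.

```python
# Minimal sketch of the server side described above (Python + Flask).
# Route names and the JSON annotation format are illustrative assumptions.
from flask import Flask, jsonify, request, send_file

app = Flask(__name__, static_folder="build", static_url_path="/")
annotations = []  # in-memory store for this sketch; FEVA persists label data files

@app.route("/")
def index():
    # Serve the compiled ReactJS front end.
    return app.send_static_file("index.html")

@app.route("/annotations", methods=["GET", "POST"])
def handle_annotations():
    # Client fetches or saves label data (start/end time, type, text).
    if request.method == "POST":
        annotations.append(request.get_json())
        return jsonify(status="ok")
    return jsonify(annotations)

@app.route("/video")
def video():
    # Stream the local MPEG video to the client player.
    return send_file("big_buck_bunny.mp4", conditional=True)

if __name__ == "__main__":
    app.run(port=5000)
```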
## 8 Discussion
This is the first version release of our tool FEVA, where we focused on building the fundamental tool while streamlining the user interface and interactions to make annotating events faster, more intuitive, more robust, and more accurate. While these early results look promising, more research questions need to be further explored.
### User Study
Our pilot and user studies were of short duration. In the user study, we assumed that 15 minutes was sufficient time for participants to learn and practice annotating, which is how we designed the first evaluation comparing multiple VATs. However, we need to further this research by conducting a longitudinal user study to understand the impact of our design on users as they get more comfortable with the software.
### User Input
On the input side, we considered the keyboard and mouse/trackpad as the interaction devices at this stage. We still need to expand this to other kinds of inputs, such as touch, speech, and gesture, to explore the potential benefits of multi-modal methods.
Figure 7: FEVA Client-Server Architecture Diagram. We include a more detailed block diagram of the client-side modules in Appendix C.
### Target Users
As the general public gets more involved in the coding process, we focused this study so that anyone can participate. Future studies gathering feedback from seasoned coders will be valuable for understanding how they use VATs.
### VAT benchmarking study
In the section 6.2 benchmark study, we counted the number of inputs, as the pilot study showed a correlation between the number of clicks and the time taken to complete a task. A more comprehensive study should be considered, one that includes mouse movement distance and time taken, which can reflect user confusion and provide a more accurate performance metric for completing the task.
### Implicit Measures
In section 6.3, we focused on the users' perception to reflect their experience. Future studies should expand to more implicit, quantitative measures for evaluating user confusion, performance, and success. We would also like to conduct more studies to understand labeling consistency.
### The layout
Based on user feedback, lessons learned from our observations, and the comparison with existing tools, we identified several aspects of the layout that could be improved. The multi-camera selector layout takes up a lot of screen real estate. Many users found it confusing that the global timeline sits so close to the local timeline and the thumbnail view without any separation, making it more challenging to get used to. We plan to redesign those experiences in subsequent versions and will focus on several optimization opportunities in the next release to make FEVA even faster.
### Future Work
Beyond the incremental improvements, there are key features we have planned for the future:
* Adaptive: All the current input controls are linear. We plan to explore dynamic and adaptive control systems and various new interaction techniques for faster annotation and a more intuitive experience.
* Extensible: An easier workflow for the open-source community to extend the features.
* AI assisted: While FEVA already has some algorithmic support, better integration with machine learning and deep neural network models is needed. We will further research how AI can augment the annotation process while exploring ways to inform users of these models' inaccuracies, uncertainties, and inherited biases.
* Remote videos: While our internal prototypes support YouTube, there are optimization opportunities that we need to explore before they can be used seamlessly as an alternative to the local MPEG videos. We will also explore other online streaming platforms.
* Case study: While we are working with some research labs in evaluating the FEVA for their video annotation [58], we want to invite other interested research labs to try out FEVA, collaborate with us, and grow as a community to address needs that may not have been realized by our research so far.
## 9 Conclusion
We present a new event video annotation tool with streamlined interaction techniques and a dynamic UI, with contextually visible and active features organized based on workflow and usage frequency. With features like speed labeling, users can accurately annotate videos in real-time. With simplified onboarding and workflow, researchers can set up and start annotating videos in minimal time. We release FEVA's source code on GitHub for everyone to try and further extend its features. The community can also find project samples, tutorial videos, GitHub issues for support, and future updates on the GitHub page. As we expand our case studies, we invite more researchers to use FEVA and to contact us if they wish to collaborate.
## Acknowledgments
We thank Chethan Parameshwara, Levi Burner, Lindsay Little, and peers from UMD and the Perception and Robotics Group for their valuable feedback and discussions. We extend special thanks to all our project contributors Johnny Chiu, Rachelle Sims, John Gao, Leya Abraham, Vikram Sehgal, Swagata Chakroborty, Lucas Stuart, and Lin Chen. The support of NSF under grant OISE 2020624 is greatly acknowledged.
|
2305.04313 | Outage and DMT Analysis of Partition-based Schemes for RIS-aided MIMO
Fading Channels | In this paper, we investigate the performance of multiple-input
multiple-output (MIMO) fading channels assisted by a reconfigurable intelligent
surface (RIS), through the employment of partition-based RIS schemes. The
proposed schemes are implemented without requiring any channel state
information knowledge at the transmitter side; this characteristic makes them
attractive for practical applications. In particular, the RIS elements are
partitioned into sub-surfaces, which are periodically modified in an efficient
way to assist the communication. Under this framework, we propose two
low-complexity partition-based schemes, where each sub-surface is adjusted by
following an amplitude-based or a phase-based approach. Specifically, the
activate-reflect (AR) scheme activates each sub-surface consecutively, by
changing the reflection amplitude of the corresponding elements. On the other
hand, the flip-reflect (FR) scheme adjusts periodically the phase shift of the
elements at each sub-surface. Through the sequential reconfiguration of each
sub-surface, an equivalent parallel channel in the time domain is produced. We
analyze the performance of each scheme in terms of outage probability and
provide expressions for the achieved diversity-multiplexing tradeoff. Our
results show that the asymptotic performance of the considered network under
the partition-based schemes can be significantly enhanced in terms of diversity
gain compared to the conventional case, where a single partition is considered.
Moreover, the FR scheme always achieves the maximum multiplexing gain, while
for the AR scheme this maximum gain can be achieved only under certain
conditions with respect to the number of elements in each sub-surface. | Andreas Nicolaides, Constantinos Psomas, Ghassan M. Kraidy, Sheng Yang, Ioannis Krikidis | 2023-05-07T15:42:34Z | http://arxiv.org/abs/2305.04313v1 | # Outage and DMT Analysis
###### Abstract
In this paper, we investigate the performance of multiple-input multiple-output (MIMO) fading channels assisted by a reconfigurable intelligent surface (RIS), through the employment of partition-based RIS schemes. The proposed schemes are implemented without requiring any channel state information knowledge at the transmitter side; this characteristic makes them attractive for practical applications. In particular, the RIS elements are partitioned into sub-surfaces, which are periodically modified in an efficient way to assist the communication. Under this framework, we propose two low-complexity partition-based schemes, where each sub-surface is adjusted by following an amplitude-based or a phase-based approach. Specifically, the _activate-reflect_ (AR) scheme activates each sub-surface consecutively, by changing the reflection amplitude of the corresponding elements. On the other hand, the _flip-reflect_ (FR) scheme adjusts periodically the phase shift of the elements at each sub-surface. Through the sequential reconfiguration of each sub-surface, an equivalent parallel channel in the time domain is produced. We analyze the performance of each scheme in terms of outage probability and provide expressions for the achieved diversity-multiplexing tradeoff. Our results show that the asymptotic performance of the considered network under the partition-based schemes can be significantly enhanced in terms of diversity gain compared to the conventional case, where a single partition is considered. Moreover, the FR scheme always achieves the maximum multiplexing gain, while for the AR scheme this maximum gain can be achieved only under certain conditions with respect to the number of elements in each sub-surface.
MIMO, reconfigurable intelligent surfaces, partition-based schemes, outage probability, diversity-multiplexing tradeoff.
## I Introduction
The development of future generations of wireless communications is envisioned to satisfy the constantly increasing demands in the number of devices that need to communicate, with extremely high data rates and ultra-reliable connectivity capabilities [2]. In particular, recent research advances towards the realization of the 6G era suggest that, by successfully controlling the wireless propagation environment in an efficient and intelligent manner, the performance of wireless networks could be enhanced beyond the current limits [3]. Towards this direction, reconfigurable intelligent surfaces (RISs) have been proposed as a potential and appealing solution [4]. An RIS is a planar metasurface equipped with a large number of passive reflecting elements that are connected to a smart controller, where each element can modify the phase and/or induce an amplitude attenuation to the incident signal [5]. Their employment is therefore a cost effective solution, which can improve energy efficiency, due to the passive operation of the elements, and spectral efficiency, since they operate in ideal full-duplex mode, as well as increase the coverage of wireless networks.
Due to these performance benefits, the use of RISs has been considered in several applications, and the performance of RIS-aided wireless networks has been investigated for various communication scenarios. Specifically, the authors in [6] studied the performance of a single-input single-output (SISO) system, and provided tight closed-form approximations for fundamental metrics such as the ergodic capacity and the outage probability. In [7], the system performance of RIS-aided orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) SISO networks was studied in terms of outage probability and ergodic rate. It was shown that such networks obtain significant performance gains with the employment of an RIS, which is superior to the use of full-duplex decode-and-forward relays. Furthermore, the work in [8] proposed RIS-enabled random-rotation schemes for SISO networks, which can be implemented with limited or without channel state information (CSI) knowledge. It was demonstrated that the presented schemes improve the performance in terms of energy efficiency, outage probability and diversity gain. The diversity order achieved by an RIS-aided SISO system was also studied in [9], where the authors derived the minimum number of required quantization levels for the RIS discrete phase shifts to achieve full diversity. In [10], the authors introduced an RIS-assisted system for two-way communications and proposed two transmission schemes, for which they investigated the performance limits in terms of outage probability and spectral efficiency.
The employment of RISs has also been considered to assist the communication in wireless networks with multiple antennas at the transmitter and/or the receiver. In particular, a single-cell wireless network, where an RIS is deployed to assist the communication between a multi-antenna access point (AP) and multiple single-antenna users, was investigated in [11]. It was demonstrated that the system's performance can be enhanced in terms of both spectral and energy efficiency, by jointly optimizing the active and passive beamforming vectors at the AP and the RIS, respectively. The implementation of RIS has also been considered under index modulation schemes, enabling simultaneous passive beamforming and information transfer of the RIS in multiple-input single-output (MISO) systems [12, 13]. Specifically, in [12], the authors introduced an amplitude-based scheme, called the reflection pattern modulation (RPM) scheme, where the joint passive beamforming and information transfer occurred by activating different subsets of elements at the RIS. It was shown that the presented approach significantly improves the achievable rate performance. In [13], an RIS reflection modulation scheme was proposed, referred to as the quadrature reflection modulation (QRM) scheme, which was proven to outperform the RPM scheme. Moreover, a multi-hop RIS-enabled scenario was considered in [14], where multiple RISs were deployed to assist the communication between a multi-antenna AP and multiple users with single antennas, in order to improve network coverage of terahertz communications.
The fundamental capacity limit for an RIS-aided multiple-input multiple-output (MIMO) network was provided in [15], by developing algorithms for jointly optimizing the RIS reflection coefficients and the transmit covariance matrix. The same optimization parameters were also considered for multi-cell communications in [16], where the weighted sum-rate of a multi-cell multi-user MIMO system was enhanced by employing an RIS at the cell boundary. Furthermore, the asymptotic performance of an RIS-aided MIMO channel was investigated in [17], where it was demonstrated that the achieved multiplexing gain can be enhanced if the information data stream is available both at the transmitter and the RIS. In [18], the authors provided a closed-form approximation for the outage probability of an RIS-aided MIMO system, and proposed a gradient descent algorithm to minimize the outage probability with statistical CSI. In addition, they characterized the achieved diversity-multiplexing tradeoff (DMT) for a finite signal-to-noise ratio (SNR) regime. Furthermore, an opportunistic rate splitting scheme was proposed in [19] for an RIS-aided two-user MISO system. In particular, by utilizing the RIS to modify the channel characteristics through an alternating CSI formulation, which was inspired by the information theoretic framework presented in [20], it was shown that the achievable rate of such systems can be improved.
Most of the aforementioned works assume that perfect CSI knowledge is available for the implementation of the proposed schemes. However, this assumption can be impractical, especially for a large number of RIS elements, due to the high implementation complexity or limited resources. Recently, some channel estimation protocols for RIS-aided networks have been proposed by considering several techniques, such as channel decomposition [21] and discrete Fourier transform training [22]. It can be observed that the training time of the proposed solutions increases proportionally with the size of the RIS, which could induce a large feedback overhead and compromise the expected performance gains. Moreover, several works claim that by increasing the number of elements, the performance of RIS-aided networks is enhanced. However, this highly depends on the information that is available in the system, as well as the considered communication scenario. In particular, in [18] it was shown that, when only statistical CSI is available, larger RISs improve the throughput of the system, but the gain diminishes quickly by increasing the RIS elements. In addition, the authors in [23] considered an unmanned aerial vehicle (UAV)-based system, where an RIS is mounted to the UAV, and demonstrated that increasing the RIS size may lead to reduced data collection from the UAV.
It is, therefore, an important and challenging task to provide efficient solutions, which can enhance the performance of RIS-aided networks with reduced implementation complexity. Motivated by this, in this paper, we study the performance of RIS-assisted MIMO communications under partition-based RIS schemes. Specifically, by incorporating the idea of parallel partitions in multi-hop MIMO channels [24], we create a parallel channel in the time domain by intelligently adjusting different subsets of RIS elements based on a fixed reconfiguration pattern. The presented schemes have low complexity and do not require any CSI knowledge at the transmitter or for the RIS design. Specifically, the main contributions of this paper are summarized as follows:
* A general framework for RIS-aided MIMO systems is presented, where the RIS is partitioned into non-overlapping sub-surfaces, which are sequentially reconfigured to assist the communication. Based on this framework, we propose two low-complexity schemes, by considering an amplitude-based or a phase-based approach. In particular, the _activate-reflect_ (AR) scheme activates each sub-surface in sequential order, by changing the reflection amplitude of the corresponding elements. On the other hand, the _flip-reflect_ (FR) scheme creates an equivalent time-varying channel, by modifying periodically the phase shift of the elements at each sub-surface between two discrete values. The proposed schemes can be easily implemented since, in contrast to other works, channel training is not required for adjusting the reflection coefficients of the RIS elements, and a fixed reflection sequence is considered for the reconfiguration of the sub-surfaces, based on a binary state control.
* A complete analytical methodology for the performance of the partition-based schemes is provided. Based on the channel statistics, we derive analytical expressions which characterize the outage probability of each of the proposed schemes. Moreover, under specific practical assumptions, a more tractable methodology on the outage analysis is provided. We also study the asymptotic performance gains of the presented schemes and provide expressions for the achieved DMT. Through this analysis, we show how the idea of partitioning boosts the performance of the considered system and obtain useful insights into how some key parameters affect the performance gains.
* Our results demonstrate that, by employing the partition-based schemes, the outage performance of the considered system can be significantly improved compared to the conventional case, where all the RIS elements belong to a single partition and randomly rotate the signals. Moreover, both the proposed schemes achieve the same diversity gain. In particular, the achieved diversity gain is enhanced, compared to the conventional case, and this improvement is proportional to the number of partitions. Finally, it is shown that the RIS-aided MIMO system under the FR scheme always achieves the maximum multiplexing gain, while in the AR scheme the maximum multiplexing gain is obtained if certain conditions are satisfied, regarding the number of elements in each sub-surface.
The remainder of this article is organized as follows. Section II introduces the considered system model. In Section III, we describe the implementation of the proposed partition-based schemes and provide analytical expressions of the outage probability analysis. Section IV focuses on the DMT achieved by the proposed schemes. Our numerical and simulation results are presented in Section V, and finally, some concluding remarks are stated in Section VI.
_Notation:_ Lower and upper case boldface letters denote vectors and matrices, respectively; \(\mathbf{I}_{L}\) denotes the \(L\times L\) identity matrix, \([\cdot]^{\dagger}\) is the conjugate transpose operator and \(\left\|\cdot\right\|_{F}\) is the Frobenius matrix norm; \(\mathbb{P}\left\{X\right\}\) denotes the probability of the event \(X\) and \(\mathbb{E}\left\{X\right\}\) represents the expected value of \(X\); \(\Gamma(\cdot)\) denotes the complete gamma function, \(I_{0}(\cdot)\) is the modified Bessel function of the first kind of order zero and \(\mathcal{Q}_{1}(\cdot,\cdot)\) is the first-order Marcum \(Q\)-function; \(G_{p,q}^{m,n}\left(\begin{smallmatrix}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{q}\end{smallmatrix}\middle|x\right)\) is the Meijer-\(G\) function [25] and \(H_{p,q}^{m,n}\left[\begin{smallmatrix}(a_{i},\alpha_{i},A_{i})_{1,p}\\ (b_{i},B_{i})_{1,q}\end{smallmatrix}\middle|x\right]\) is the generalized upper incomplete Fox \(H\) function [26]; \(\Im(x)\) returns the imaginary part of \(x\) and \(j=\sqrt{-1}\) denotes the imaginary unit; \([x]^{+}=\max(0,x)\) and \(\lfloor x\rfloor=\max\{z\in\mathbb{Z}|z\leq x\}\).
## II System Model
We consider a topology where a transmitter (Tx) communicates with a receiver (Rx) through the employment of an RIS, as shown in Fig. 1. The Tx and the Rx are equipped with \(N\) and \(L\) antennas respectively, and the RIS consists of \(Q\) reflecting elements connected to a smart controller. Note that a direct link between the Tx and the Rx is not available1 (e.g. due to high path-loss or deep shadowing) [8, 9]. Moreover, we consider that adjacent elements are uncorrelated i.e., a half-wavelength of spacing exists between them [11]. Let \(\mathbf{H}\in\mathbb{C}^{Q\times N}\) denote the channel matrix from the Tx to the RIS, and \(\mathbf{G}\in\mathbb{C}^{L\times Q}\) the channel matrix from the RIS to the Rx. We assume a frequency-flat Rayleigh block fading channel, in which the channel coefficients remain constant during one time slot, but change independently between different time slots [27, 28]. Moreover, each channel coefficient follows a circularly symmetric complex Gaussian distribution with zero mean and unit variance i.e., \(h_{i,j}\), \(g_{i,j}\sim\mathcal{CN}(0,1)\). For the rest of this paper, the channel under this topology will be referred to as the (\(N,Q,L\)) channel.
Footnote 1: This assumption is not restrictive and the proposed RIS schemes can also be implemented when a direct link between the Tx and the Rx exists, but our focus is to highlight the gains provided by these schemes.
At an arbitrary time slot, the Tx sends a signal vector \(\mathbf{x}\in\mathbb{C}^{N\times 1}\). We assume that CSI is perfectly known at the Rx, while CSI knowledge is not available at the Tx and the RIS. Therefore, if \(\mathbf{x}\) is transmitted with a constant power \(P\), the power is uniformly allocated to the \(N\) transmit antennas. In the considered scenario, every time slot is divided into \(K\) sub-slots of equal duration. We denote by
\[\mathbf{\Phi}_{k}=\text{diag}[a_{1,k}e^{j\phi_{1,k}}\ a_{2,k}e^{j\phi_{2,k}}\ \ldots\ a_{Q,k}e^{j\phi_{Q,k}}], \tag{1}\]
the diagonal reflection matrix of the RIS, where \(a_{i,k}\in[0,1]\) and \(\phi_{i,k}\in[0,2\pi)\) are the reflection amplitude and the phase shift of the \(i\)-th RIS element at the \(k\)-th time sub-slot, respectively. Thus, the end-to-end channel matrix during one sub-slot is written as
\[\mathcal{H}_{k}=\mathbf{G}\mathbf{\Phi}_{k}\mathbf{H}, \tag{2}\]
and the received signal vector at the \(k\)-th sub-slot is given by
\[\mathbf{y}_{k}=\sqrt{\frac{P}{N}}\mathcal{H}_{k}\mathbf{x}+\mathbf{n}_{k}, \tag{3}\]
where \(\mathbf{n}_{k}\in\mathbb{C}^{L\times 1}\) is the additive white Gaussian noise (AWGN) vector with entries of variance \(\sigma^{2}\), i.e., \(n_{i,k}\sim\mathcal{CN}(0,\sigma^{2})\). As such, the mutual information between the Tx and the Rx over one time slot is equal to
\[\mathcal{I}=\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left[\det\left(\mathbf{I}_{L}+ \frac{\rho}{N}\mathcal{H}_{k}\mathcal{H}_{k}^{\dagger}\right)\right], \tag{4}\]
where \(\rho=P/\sigma^{2}\) is the average SNR.
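To make the signal model concrete, the following sketch (an illustration with arbitrary example sizes, not part of the analysis) draws one realization of \(\mathbf{H}\) and \(\mathbf{G}\) and evaluates the mutual information (4) for a given set of reflection matrices \(\mathbf{\Phi}_{k}\).

```python
# Monte Carlo sketch of the signal model (1)-(4) for one time slot.
# Sizes and SNR are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
N, Q, L, rho = 2, 36, 2, 10.0  # rho = P / sigma^2

# Rayleigh fading: entries ~ CN(0, 1).
H = (rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))) / np.sqrt(2)
G = (rng.standard_normal((L, Q)) + 1j * rng.standard_normal((L, Q))) / np.sqrt(2)

def mutual_information(Phi_list):
    """Expression (4): average of the per-sub-slot terms (6) over K sub-slots."""
    total = 0.0
    for Phi in Phi_list:
        Heff = G @ Phi @ H  # end-to-end channel (2)
        M = np.eye(L) + (rho / N) * (Heff @ Heff.conj().T)
        total += np.log2(np.linalg.det(M).real)
    return total / len(Phi_list)

# Conventional case: a single sub-slot where all elements simply reflect.
print(mutual_information([np.eye(Q)]))
```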
## III Partition-based RIS schemes
In this section, we describe our proposed partition-based RIS schemes and analytically evaluate their performance. The proposed schemes are inspired by the idea of parallel partitions in multi-hop MIMO channels, which have been introduced in [24]. The main objective of these schemes is to incorporate the temporal processing into the RIS-aided channels by partitioning in a proper way the RIS elements, in order to enhance the
Fig. 1: The considered RIS-aided channel model.
performance of such networks. In particular, we assume that the RIS is partitioned into \(K\) non-overlapping sub-surfaces \(\mathcal{S}_{k}\), \(1\leq k\leq K\)[29]. This partitioning into sub-surfaces is not restricted by any specific method. Hence, for simplicity and without loss of generality, we assume that \(K\) is a divisor of \(Q\) so that each sub-surface has an equal number of elements2\(m\) i.e., \(Km=Q\). An example of the considered scenario is demonstrated in Fig. 1, with \(K=4\) and \(m=9\), and where the sub-surfaces are defined by the black solid lines.
Footnote 2: The results of the proposed scheme can be readily extended to the case where the defined sub-surfaces may contain different number of elements.
At each time sub-slot \(k\), \(1\leq k\leq K\), the reflection configuration of the sub-surface \(\mathcal{S}_{k}\) (i.e. the phase shifts and reflection amplitudes of the corresponding elements) is adjusted, according to the scheme that is deployed at the RIS. By using this approach, the information is transmitted at the destination by creating a set of \(K\) parallel sub-channels in the time domain [30]. It is important to note that the number of partitions, the resulting sub-surfaces as well as the configuration sequence are determined in advance. Moreover, this information remains fixed throughout the transmission procedure and therefore can be provided to the Rx _a-priori_[30].
In the following sub-sections, we present two partition-based schemes: the AR scheme, where each sub-surface is sequentially activated to assist the communication by changing the reflection amplitude of the respective elements, and the FR scheme, which modifies the phase shift of the RIS elements at each sub-surface. For these schemes, we focus on the performance evaluation in terms of outage probability. In particular, the outage probability is defined as the probability that the mutual information is below a non-negative predefined target rate of \(R\) bits per channel use (bps/Hz). The general expression for the outage probability is given by
\[\Pi(R,K)=\mathbb{P}\{\mathcal{I}<R\}=\mathbb{P}\left\{\sum_{k=1}^{K}\mathcal{ I}_{k}<RK\right\}, \tag{5}\]
where \(\mathcal{I}\) is given by (4) and
\[\mathcal{I}_{k}=\log_{2}\left[\det\left(\mathbf{I}_{L}+\frac{\rho}{N}\mathcal{ H}_{k}\mathcal{H}_{k}^{\dagger}\right)\right], \tag{6}\]
is the mutual information of the \(k\)-th sub-channel. Next, we introduce the design framework that is considered for the implementation of the AR scheme.
### _Activate-Reflect scheme_
For the AR scheme, we assume that each RIS element can be reconfigured between two possible states, either to be turned ON or OFF [31, 32]. If an element is ON, it can rotate the phase of the incident signals by a specific value, while in the OFF state, the transmitted signals can not be reflected by the element. In other words, the reflection amplitude of the ON elements is set to one (full reflection), otherwise it is set to zero (full absorption). Without loss of generality, we assume that the induced phase shift by every element is equal to zero3 i.e., \(\phi_{i,k}=0\ \forall i,k\)[32].
Footnote 3: The resulting channel gain is statistically equivalent to the case of random phase shifts at the RIS i.e., each \(\phi_{i,k}\) uniformly distributed in \([0,2\pi)\)[8].
For the implementation of the AR scheme, a design framework is considered by following the steps below:
* Before transmission, all the elements of each sub-surface are set to the OFF state i.e., \(a_{i}=0\ \forall i\). This is considered as the default configuration of each element.
* Then, each sub-surface is sequentially switched to the ON state to assist the communication of the considered network. Specifically, at each sub-slot \(k\), \(1\leq k\leq K\), the sub-surface \(\mathcal{S}_{k}\) is activated by turning ON only the elements that belong to the specific sub-surface. Note that any other element that does not belong to \(\mathcal{S}_{k}\) will be reset to the default configuration. Therefore, the reflection amplitude of each RIS element at the \(k\)-th sub-slot is given by \[a_{i,k}=\begin{cases}1,&\text{$i$-th element}\in\mathcal{S}_{k};\\ 0,&\text{otherwise}.\end{cases}\] (7)
* The end-to-end channel is then composed of \(K\) parallel sub-channels, where each sub-channel can be represented by an equivalent \((N,m,L)\) channel by considering only the activated RIS elements at each time sub-slot.
Note that the above configuration pattern corresponds to an ideal phase shift model, where the reflection amplitude of each element is independent from its corresponding phase shift; however, the AR scheme can be also employed when the reflection amplitude and the phase shift of each element are correlated [33]. An example of the above procedure is shown in Fig. 2, where we consider the same RIS-aided channel as in Fig. 1. Specifically, Fig. 2 depicts how the RIS configuration changes during an arbitrary time slot, by indicating the activated elements at each sub-slot. The proposed AR scheme could be implemented by following the approach presented in [34], where each element is connected to an RF switch, which tunes the reflection amplitude of the element as either zero or one, resulting in a two-level reflection amplitude control. It is apparent that, apart from requiring no CSI knowledge, this scheme has low design complexity and is cost-effective, since we only need a two-level amplitude control for the implementation, which significantly simplifies
Fig. 2: The different instances of the RIS configuration over one time slot for a system with \(Q=36\) and \(K=4\); under the AR scheme, \(A=\text{ON}\) and \(B=\text{OFF}\) (expression (7)); under the FR scheme, \(A=\pi\) and \(B=0\) (expression (12)).
the RIS hardware. Moreover, although the expected rate decreases, as only \(m\) out of the \(Q\) elements are activated at each time sub-slot, the power consumption at the RIS is reduced compared to the conventional case. Below, we present two mathematical expressions that can be used to evaluate the outage performance of the AR scheme. Specifically,
* In Proposition 1, the outage probability achieved by this scheme is derived numerically for an arbitrary number of transmit and receive antennas, by using the Gil-Pelaez inversion theorem [35].
* Theorem 1 derives the outage probability for the special case of the SISO channel i.e., \(N=L=1\), by using the central limit theorem (CLT), which approximates the channel \(\mathcal{H}_{k}\) as a complex Gaussian random variable.
We first provide a preliminary result in the following lemma for the conventional case of \(K=1\), representing the _pure reflection_ (PR) scheme, where all the elements randomly rotate the phase of the incident signals over an arbitrary time slot. This result will assist in the derivation of the analytical results for the presented schemes; the PR scheme will also be used as a performance benchmark for comparison purposes.
**Lemma 1**.: _The characteristic function of the mutual information of the \((N,Q,L)\) channel given in (4) under random reflections is given by_
\[\varphi(Q,t)=\frac{1}{\prod_{z=1}^{n_{0}}\prod_{\theta=0}^{2}\Gamma (z+\nu_{\theta})}\det\bigg{[}\frac{1}{\Gamma(-\jmath t/\ln 2)}\\ \times G_{1,3}^{3,1}\bigg{(}\begin{array}{c}1\\ -\jmath t/\ln 2,\nu_{2}+i,\nu_{1}+i+j-1\end{array}\bigg{|}\frac{N}{\rho} \bigg{)}\bigg{]}_{i,j}, \tag{8}\]
_where \((n_{0},n_{1},n_{2})\) is the ordered version of the \((N,Q,L)\) channel and \(\nu_{i}\triangleq n_{i}-n_{0}\), \(0\leq i\leq 2\)._
Proof.: See Appendix A.
Next, by using the above lemma, the outage probability of the AR scheme is derived as follows.
**Proposition 1**.: _The outage probability of the AR scheme is given by_
\[\Pi_{\mathrm{AR}}(R,K,m)=\frac{1}{2}-\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{t} \Im\bigg{\{}\big{[}e^{-\jmath tR}\varphi(m,t)\big{]}^{K}\bigg{\}}dt, \tag{9}\]
_where \(\varphi(m,t)\) is the characteristic function of the mutual information of the (\(N,m,L\)) channel given by Lemma 1._
Recall that, at each time sub-slot, only \(m\) elements of the RIS are activated. The above result can be easily derived by considering that the random variables \(\mathcal{I}_{k}\) are independent and the characteristic function of the sum \(\sum_{k=1}^{K}\mathcal{I}_{k}\) is calculated by the product of the characteristic functions of each \(\mathcal{I}_{k}\). It is also clear that, for \(K=1\), the above proposition provides a numerical expression for the outage probability of the (\(N,Q,L\)) channel under the conventional PR scheme. We next provide an approximation of the outage probability for the RIS-aided SISO channel.
**Theorem 1**.: _The outage probability achieved by the RIS-assisted SISO channel employing the AR scheme, under the CLT, is approximated by_
\[\Pi_{\mathrm{AR}}(R,K,m)\approx 1-\exp\left(\frac{K}{\rho m}\right)\times H_{1,K+1}^{K+1,0}\left[\begin{array}{c}(1,1,0)\\ \left(0,1,\frac{1}{\rho m}\right)_{1,K},(0,1,0)\end{array}\middle|\,\Theta\right], \tag{10}\]
_where \(\Theta=(2^{R}/\rho m)^{K}\)._
Proof.: See Appendix B.
The above expression can be easily evaluated by using computational software tools, such as Matlab or Mathematica [26], while in Section V, we show that the provided approximation becomes very tight, even for a small number of elements in each sub-surface. The above approximation can be also considered for the PR scheme by setting \(K=1\). In this case, the expression is simplified to
\[\Pi_{\mathrm{PR}}(R)\approx 1-\exp\left(-\frac{2^{R}-1}{\rho Q}\right). \tag{11}\]
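As a sanity check of (11), the short sketch below estimates the outage probability of the SISO PR scheme directly from its definition by Monte Carlo simulation and compares it with the CLT approximation; the parameter values are arbitrary.

```python
# Monte Carlo check of the SISO CLT approximation (11) for the PR scheme.
import numpy as np

rng = np.random.default_rng(1)
Q, R, rho, trials = 36, 1.0, 10.0, 100_000

h = (rng.standard_normal((trials, Q)) + 1j * rng.standard_normal((trials, Q))) / np.sqrt(2)
g = (rng.standard_normal((trials, Q)) + 1j * rng.standard_normal((trials, Q))) / np.sqrt(2)

gain = np.abs((g * h).sum(axis=1)) ** 2           # |H|^2 with zero phase shifts
outage_mc = np.mean(np.log2(1 + rho * gain) < R)  # definition (5) with K = 1
outage_clt = 1 - np.exp(-(2**R - 1) / (rho * Q))  # approximation (11)
print(outage_mc, outage_clt)
```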
### _Flip-Reflect scheme_
We now consider a partition-based scheme, where the reconfiguration of each sub-surface \(\mathcal{S}_{k}\) occurs on the phase shift of the selected elements. For the FR scheme, in particular, all the elements of the RIS are always turned ON and are thus able to reflect the incident signals i.e., \(a_{i,k}=1\ \forall i,k\). At every sub-slot \(k\), the RIS controller can modify the induced phase shift of each element between \(0\) or \(\pi\). For the implementation of the FR scheme, we adopt a similar framework as for the AR scheme. Specifically, we consider the following procedure:
* Initially, the phase shift of all the RIS elements is set to zero. This setting is considered as the default configuration of each element for the FR scheme.
* Regarding the number of partitions, we consider the cases4 \(K=2\) and \(K>2\). If \(K=2\), at the first time sub-slot all the elements remain at the default configuration and simply reflect the signals i.e., \(\phi_{i,1}=0\ \forall i\). At the second sub-slot, the elements of \(\mathcal{S}_{2}\) are reconfigured by setting their phase shift to \(\pi\). On the other hand, for \(K>2\), at the \(k\)-th sub-slot, the sub-surface \(\mathcal{S}_{k}\) flips the signals by changing the phase shift of the corresponding elements to \(\pi\). As such, the phase shift of each RIS element at the \(k\)-th sub-slot is equal to \[\phi_{i,k}=\begin{cases}\pi,&(K>2\cup(K=2\cap k>1))\cap i\in\mathcal{S}_{k};\\ 0,&\text{otherwise}.\end{cases}\] (12) Footnote 4: By considering the above cases for the flipping pattern, we ensure that the RIS reflection matrices are linearly independent, so that the maximum performance gains can be achieved.
* Therefore, the (\(N,Q,L\)) channel under the FR scheme is a time-varying channel consisting of \(K\) parallel sub-channels, where for each sub-channel a different flipping matrix \(\Phi_{k}\) is used [36, 37].
Similarly to the AR scheme, we assume that the reflection amplitude and phase shift of each element can be independently modified, while the proposed design framework can be generalized to the case where the reflection amplitude and the phase shift of each element are coupled [33]. Fig. 2 depicts an example of the above procedure, by indicating the value of the phase shift for the elements of every sub-surface at each time sub-slot. The 1-bit phase-shift control presented in this framework could be implemented based on the binary programmable metasurface fabricated in [38], where each RIS element is connected to a PIN diode that can be switched between two states, resulting in a phase shift difference of \(\pi\). Therefore, similar to the AR scheme, this scheme has low complexity in terms of RIS hardware requirements, as the proposed configuration patterns can be obtained with low-cost binary-state elements.
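The reflection patterns in (7) and (12) are straightforward to generate. The sketch below builds the \(K\) diagonal matrices \(\mathbf{\Phi}_{k}\) for both schemes, assuming for illustration that the sub-surfaces are contiguous blocks of \(m\) elements; these matrices can be plugged directly into (4) to simulate either scheme over one time slot.

```python
# Sketch of the reflection matrices Phi_k: (7) for the AR scheme and
# (12) for the FR scheme. Sub-surface S_k is taken, for illustration,
# as the contiguous block of m = Q / K elements starting at (k - 1) * m.
import numpy as np

def reflection_matrices(Q, K, scheme):
    m = Q // K
    mats = []
    for k in range(K):
        members = np.zeros(Q, dtype=bool)
        members[k * m:(k + 1) * m] = True  # elements of S_{k+1}
        if scheme == "AR":
            # ON/OFF amplitudes: only the active sub-surface reflects.
            coeff = np.where(members, 1.0 + 0j, 0.0)
        else:
            # FR: all elements ON; phase flipped to pi on S_k, except that
            # for K = 2 the first sub-slot keeps all phase shifts at zero.
            flip = members if (K > 2 or k > 0) else np.zeros(Q, dtype=bool)
            coeff = np.where(flip, np.exp(1j * np.pi), 1.0 + 0j)
        mats.append(np.diag(coeff))
    return mats

Phi_AR = reflection_matrices(36, 4, "AR")
Phi_FR = reflection_matrices(36, 4, "FR")
```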
Based on the presented framework, we can now analyze the performance of the FR scheme in terms of outage probability. Note that, since all the RIS elements are always activated in this scheme, the instantaneous channel gains between different sub-slots are time-correlated. Thus, in this case, the derivation of the outage probability becomes challenging. As such, we present two approximations that can sufficiently describe the performance of the proposed scheme. In particular,
* In Theorem 2, we derive a numerical expression for the outage probability of an RIS-aided SISO channel by approximating the cascaded channels \(\mathcal{H}_{k}\) with time-correlated Rayleigh fading channels.
* For the general case of RIS-aided MIMO channels, we provide a lower bound by assuming that the channels \(\mathcal{H}_{k}\) are mutually independent.
In what follows, we analytically evaluate the correlation coefficient of the channel gains over different sub-slots for the SISO case i.e., for \(N=L=1\), by using the Pearson correlation formula [39].
**Lemma 2**.: _The correlation coefficient of the channel gains over different sub-slots \(k\neq l\) under the FR scheme is given by_
\[\zeta=\frac{(Q-2b)^{2}+2Q}{Q(Q+2)}, \tag{13}\]
_where_
\[b=\begin{cases}m,&K=2;\\ 2m,&K>2.\end{cases} \tag{14}\]
Proof.: See Appendix C.
It can be easily verified that when \(Q\rightarrow\infty\), the correlation coefficient converges to
\[\zeta\rightarrow\begin{cases}0,&K=2;\\ 1-\frac{8(K-2)}{K^{2}},&K>2.\end{cases} \tag{15}\]
From the above expression, we can deduce that, the correlation between the channel gains is relatively small for small partition sizes. In particular, we observe that when \(K=2\) or \(4\) the correlation coefficient converges to zero. On the other hand, as \(K\) increases, \(\zeta\) converges to one i.e., full correlation. These results are also depicted in Fig. 3.
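Lemma 2 is also easy to verify empirically; the sketch below estimates the Pearson correlation of the channel gains between the first two sub-slots for \(K>2\) and compares it with the closed form (13) (the sizes are arbitrary).

```python
# Empirical check of Lemma 2 for the SISO case under the FR scheme (K > 2):
# the gains W_k = |H_k|^2 of sub-slots 1 and 2 differ in the sign of the
# elements of S_1 and S_2, i.e., in b = 2m elements, cf. (14).
import numpy as np

rng = np.random.default_rng(2)
Q, K, trials = 64, 4, 50_000
m = Q // K
b = 2 * m

h = (rng.standard_normal((trials, Q)) + 1j * rng.standard_normal((trials, Q))) / np.sqrt(2)
g = (rng.standard_normal((trials, Q)) + 1j * rng.standard_normal((trials, Q))) / np.sqrt(2)
c = g * h  # cascaded per-element channels

s1 = np.ones(Q); s1[:m] = -1.0       # sub-slot 1: S_1 flipped (phase pi)
s2 = np.ones(Q); s2[m:2 * m] = -1.0  # sub-slot 2: S_2 flipped
W1 = np.abs(c @ s1) ** 2
W2 = np.abs(c @ s2) ** 2

zeta = ((Q - 2 * b) ** 2 + 2 * Q) / (Q * (Q + 2))  # closed form (13)
print(np.corrcoef(W1, W2)[0, 1], zeta)
```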
We can now focus on the derivation of the approximated expression of the outage probability for the RIS-aided SISO networks. Since the considered channels are time-correlated, we adopt a correlated Rayleigh fading channel model to approximate their distribution, and incorporate the correlation coefficients provided in the above lemma. Specifically, the approximated channel can be written as [40]
\[\tilde{\mathcal{H}}_{k}=\begin{cases}\sigma_{k}X_{1},&k=1;\\ \sigma_{k}(\sqrt{1-\zeta}X_{k}+\sqrt{\zeta}X_{1}),&2\leq k\leq K,\end{cases} \tag{16}\]
where \(X_{k}\), \(1\leq k\leq K\), are independent complex Gaussian random variables with zero mean and unit variance and \(\sigma_{k}^{2}=\mathbb{E}\left\{\left|\mathcal{H}_{k}\right|^{2}\right\}=Q\). Let \(W_{k}=\left|\tilde{\mathcal{H}}_{k}\right|^{2}\) denote the channel gain at the \(k\)-th sub-slot. Based on the definition of \(\tilde{\mathcal{H}}_{k}\), we deduce that \(W_{1}\) is exponentially distributed with parameter \(1/Q\), where the associated probability density function (PDF) is given by
\[f_{W_{1}}(x)=\frac{1}{Q}\exp\left(-\frac{x}{Q}\right). \tag{17}\]
Conditioned on \(W_{1}\), the remaining terms \(W_{k}\), \(2\leq k\leq K\), follow an independent non-central chi-squared distribution with two degrees of freedom. Therefore, the conditional PDF of \(W_{k}\), given \(W_{1}=y\), is written as [41, Theorem 1.3.4]
\[f_{W_{k}|W_{1}}(x|y)=\frac{\exp\left(-\frac{x+\zeta y}{\Omega}\right)}{\Omega }I_{0}\left(2\frac{\sqrt{\zeta xy}}{\Omega}\right), \tag{18}\]
where \(\Omega\triangleq Q\left(1-\zeta\right)\). As such, the approximated outage expression is given as follows.
**Theorem 2**.: _The outage probability of the SISO channel, under the FR scheme, is approximated by_
\[\Pi_{\mathrm{FR}}(R,K,m)\approx\left(\frac{1}{\rho}\right)^{K-1}\int_{1}^{c_{1}}\int_{1}^{c_{2}}\cdots\int_{1}^{c_{K-1}}\left[1-\mathcal{Q}_{1}\left(\sqrt{\frac{2\zeta(\tau_{1}-1)}{\rho\Omega}},\sqrt{2\Theta}\right)\right]f_{W_{1}}\left(\frac{\tau_{1}-1}{\rho}\right)\prod_{k=2}^{K-1}f_{W_{k}|W_{1}}\left(\frac{\tau_{k}-1}{\rho}\middle|\frac{\tau_{1}-1}{\rho}\right)d\tau_{K-1}\cdots d\tau_{1}, \tag{19}\]

Fig. 3: Correlation coefficient \(\zeta\) versus number of elements \(Q\); the theoretical results are depicted with lines and the simulation results with markers.
_where \(c_{i}\triangleq 2^{RK}/\prod_{k=1}^{i-1}\tau_{k}\), \(1\leq i\leq K-1\) and_
\[\Theta\triangleq\frac{1}{\rho\Omega}\left(\frac{2^{RK}}{\prod_{k=1}^{K-1}\tau_ {k}}-1\right). \tag{20}\]
Proof.: See Appendix D.
We show that the presented approach provides a very tight approximation of the outage performance under the FR scheme, and can adequately describe the system's behavior, even for small values of \(Q\).
Finally, for the general case where \(N,L\geq 1\), we point out some remarks on the performance of the considered RIS-aided networks under the FR scheme. For this case, specifically, we provide a lower bound on the outage probability by assuming that all the resulting parallel channel matrices \(\mathcal{H}_{k}\) are mutually independent. Under this assumption, the (\(N,Q,L\)) channel, by using the FR scheme, achieves the same performance as the (\(N,KQ,L\)) channel under the AR scheme with the same partition size \(K\). The outage probability of the FR scheme is therefore lower bounded by
\[\Pi_{\mathrm{FR}}(R,K,m)\geq\Pi_{\mathrm{AR}}(R,K,Q), \tag{21}\]
where \(\Pi_{\mathrm{AR}}(R,K,Q)\) is given by (9). The above performance bound becomes tight when \(m\) (and therefore \(Q\)) is sufficiently large and the number of partitions \(K\) is relatively small, since the correlation of the channel gains between different sub-slots remains low. Moreover, based on this result, we can conclude that the AR scheme requires the employment of an RIS with \(K\) times more elements to outperform the outage probability achieved by the FR scheme.
## IV Diversity-multiplexing tradeoff analysis
We now turn our attention to the DMT achieved by the proposed schemes, which gives the performance limit for uncoded transmission. In general, a scheme achieves multiplexing gain \(r\) and diversity gain \(d(r)\) if the target data rate \(R(\rho)\sim r\log\rho\) and the outage probability of the scheme \(\Pi(\rho)\) satisfy the conditions [42]
\[\lim_{\rho\rightarrow\infty}\frac{R(\rho)}{\log\rho}=r,\]
and
\[\lim_{\rho\rightarrow\infty}-\frac{\log\Pi(\rho)}{\log\rho}=d(r). \tag{22}\]
By following the definitions of [24], if the achieved DMT of the end-to-end channels for any two schemes is the same, then these channels are said to be _DMT-equivalent_. Let \((n_{0},n_{1},n_{2})\) be the ordered version of the (\(N,Q,L\)) channel, with \(n_{0}\leq n_{1}\leq n_{2}\). An ordered (\(l_{0},l_{1},l_{2}\)) channel is a _vertical reduction_ of the considered channel, if both are DMT-equivalent and satisfy the condition \(l_{i}\leq n_{i}\)\(\forall i\). Finally, according to the information theoretic cut-set bound [43], the DMT achieved by any RIS scheme is upper bounded as
\[d_{(N,Q,L)}(r)\leq\min\{d_{(N,Q)}(r),d_{(Q,L)}(r)\}, \tag{23}\]
where the maximum diversity and multiplexing gain are given by
\[d_{\max}=\min\{N,L\}\times Q, \tag{24}\]
and
\[r_{\max}=\min\{N,Q,L\}. \tag{25}\]
Before deriving the DMT achieved by the presented schemes, we need to provide the DMT expression for the conventional PR scheme, which is given below. The \((N,Q,L)\) channel under the PR scheme is DMT-equivalent to the Rayleigh product channel. Therefore, the achieved DMT is a piecewise-linear function defined by the points \((r,d_{\mathrm{PR}}(r)),r=0,...,n_{0}\) with [44, Theorem 2]
\[d_{(N,Q,L)}^{\mathrm{PR}}(r)=(n_{0}-r)(n_{1}-r)-\left\lfloor\frac{[(n_{0}+n_{ 1}-n_{2}-r)^{+}]^{2}}{4}\right\rfloor. \tag{26}\]
Based on the above expression, some important remarks can be extracted about the DMT performance of an RIS-aided network operating under the PR scheme. Specifically,
* The DMT achieved by the PR scheme depends only on the ordered version of the \((N,Q,L)\) channel.
* The DMT of the \((N,Q,L)\) channel under the PR scheme is limited by the DMT performance of the \(N\times L\) channel if the number of RIS elements satisfies the condition \(Q\geq N+L-1\). This can be easily proven since, under this condition, the last term of (26) is equal to zero.
* The \((N,Q,L)\) channel can be vertically reduced to any \((N,\tilde{Q},L)\) channel, with \(Q>\tilde{Q}\geq Q_{min}=N+L-1\), and still achieve the same DMT.
It is therefore easily observed that the PR scheme is sub-optimal in terms of diversity gain, compared to the maximum diversity gain associated with the theoretical cut-set bound. In typical RIS-assisted networks, the number of RIS elements is usually much larger than the number of transmit and receive antennas. Therefore, the DMT achieved by this scheme is limited by the "bottleneck" ordered channel. For \(Q>Q_{min}\) the performance of the \((N,Q,L)\) channel can only be improved in terms of coding gain. However, the achieved coding gain decreases as the number of elements increases. This can be easily proven for the RIS-aided SISO channel, by using the approximated expression in (11). In this case, by using the definition of [45], the coding gain is given by
\[\mathcal{G} =\lim_{\rho\rightarrow\infty}\Pi_{\mathrm{PR}}(R)\rho^{d_{(1,Q,1)} (0)}\] \[=\lim_{\rho\rightarrow\infty}\left[1-\exp\left(-\frac{2^{R}-1}{ \rho Q}\right)\right]\rho\approx\frac{2^{R}-1}{Q}, \tag{27}\]
where we used the approximation \(\exp(-x)\approx 1-x\) for \(x\to 0\). Apparently, when \(Q\rightarrow\infty\), the coding gain converges to zero. We next evaluate the DMT of the proposed AR scheme.
**Corollary 1**.: _The DMT achieved by the (\(N,Q,L\)) channel
under the AR scheme is equal to_
\[d_{(N,Q,L)}^{\rm{AR}}(r)=Kd_{(N,m,L)}^{\rm{PR}}(r), \tag{28}\]
_where \(K\) is the number of sub-surfaces with \(m\) elements._
The above result is derived by considering that in the proposed scheme, the end-to-end channel consists of \(K\) independent parallel sub-channels, and each sub-channel achieves the same DMT given in (26) with \(Q=m\). It is apparent that the AR scheme can significantly enhance the performance of the \((N,Q,L)\) channel in terms of diversity gain compared to the PR scheme, especially for a large number of RIS elements. In particular, this scheme can achieve both the maximum diversity and multiplexing gain associated with the cut-set bound, if the RIS is divided into \(K\) sub-surfaces of \(m\) elements with
\[\min\{N,L\}\leq m\leq|N-L|+1, \tag{29}\]
and \(m\) is a divisor5 of \(Q\). If the above bounds can not be simultaneously satisfied, then the AR scheme can achieve either \(d_{\max}\) or \(r_{\max}\) given by (24) and (25), respectively.
Footnote 5: In the generalized case of an RIS divided into sub-surfaces with different number of elements, \(m_{i}\), \(1\leq i\leq K\), just needs to be an integer number within the defined region (see (29)).
It can be seen that, despite the improvement in diversity gain achieved by employing the AR scheme, the end-to-end channel suffers from rate deficiency, and the maximum multiplexing gain is no longer guaranteed. This issue is resolved by the FR scheme. In this case, the exact DMT of the FR scheme is difficult to obtain, so we provide a lower bound instead, which is given in the following proposition.
**Proposition 2**.: _The DMT achieved by the (\(N,Q,L\)) channel under the FR scheme is lower bounded by_
\[d_{(N,Q,L)}^{\rm{FR}}(r)\geq\max\Big{\{}d_{(N,Q,L)}^{\rm{AR}}(r),d_{(N,Q,L)}^{ \rm{PR}}(r)\Big{\}}. \tag{30}\]
Proof.: See Appendix E.
We can observe that the FR scheme is superior to both the AR and the PR schemes. Based on the framework of the FR scheme, all the RIS elements are always activated and assist the communication, so the maximum multiplexing gain is ensured. At the same time, through the temporal processing pattern presented in Section III-B, we can increase the achieved diversity gain. Therefore, while the AR and the PR schemes can achieve the maximum diversity and multiplexing gain respectively, the FR scheme achieves both extremes. Moreover, the (\(N,Q,L\)) channel under the FR scheme can achieve both \(d_{\max}\) and \(r_{\max}\), by considering only the upper bound condition of (29).
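To make the comparison concrete, the sketch below tabulates (26), (28) and the lower bound (30) for the \(N=L=3\), \(Q=10\) setting discussed in Section V; the partition size \(K=5\) is illustrative.

```python
def dmt_pr(N, Q, L, r):
    # eq. (26) for the ordered channel (n0, n1, n2)
    n0, n1, n2 = sorted([N, Q, L])
    return (n0 - r) * (n1 - r) - max(n0 + n1 - n2 - r, 0) ** 2 // 4

def dmt_ar(N, Q, L, K, r):
    # eq. (28): K independent parallel (N, m, L) sub-channels, m = Q / K
    m = Q // K
    return K * dmt_pr(N, m, L, r) if r <= min(N, m, L) else None

N, Q, L, K = 3, 10, 3, 5
for r in range(min(N, Q, L) + 1):
    d_pr = dmt_pr(N, Q, L, r)
    d_ar = dmt_ar(N, Q, L, K, r)          # m = 2 < min{N, L}: r = 3 lost
    d_fr = max(d for d in (d_ar, d_pr) if d is not None)  # bound (30)
    print(f"r = {r}: PR {d_pr}, AR {d_ar}, FR >= {d_fr}")
```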
## V Numerical Results
We provide numerical results to demonstrate the performance of the presented partition-based RIS schemes and validate our theoretical analysis. Throughout the simulations, the rate threshold is set to \(R=1\) bps/Hz, while the variance of each entry of the AWGN vector is normalized to \(\sigma^{2}=1\). Note that this set of parameter values is used for the sake of presentation; different values affect the performance but lead to similar observations. Moreover, in the following results, the proposed schemes are compared with the conventional PR scheme, i.e., the case of \(K=1\). Unless otherwise stated, the analytical results are illustrated with lines (solid, dashed or dotted) and the simulation results with markers.
Fig. 4 illustrates the system's outage probability with respect to the average SNR under the PR scheme for different values of RIS elements. We show the results of two different cases: the first case considers a SISO network setting i.e., \(N=L=1\) antenna, while in the second case we have a MIMO setting with \(N=L=2\) antennas. We can see that the outage performance is improved, by increasing the number of RIS elements. Specifically, for the RIS-aided SISO channel we observe that for all values of \(Q\) a diversity order equal to one is achieved and the system's performance is improved in terms of coding gain. On the other hand, the performance in the MIMO setting is enhanced in terms of diversity gain until the RIS elements reach the value of \(Q_{min}=N+L-1=3\), where \(d(0)=NL=4\). After this threshold, the diversity order remains the same as we increase \(Q\), so the outage probability can be only improved in terms of coding gain.
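A minimal Monte Carlo sketch of this experiment, assuming i.i.d. unit-variance Rayleigh fading for \(\mathbf{H}\) and \(\mathbf{G}\); the SNR and trial count are illustrative and far smaller than what the figures require.

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_pr(N, Q, L, rho, R=1.0, trials=20_000):
    """Monte Carlo outage of the (N, Q, L) channel under the PR scheme
    (K = 1): the end-to-end channel is the Rayleigh product G @ H and
    outage occurs when log2 det(I + rho/N * HH^+) < R."""
    fails = 0
    for _ in range(trials):
        H = (rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))) / np.sqrt(2)
        G = (rng.standard_normal((L, Q)) + 1j * rng.standard_normal((L, Q))) / np.sqrt(2)
        Heff = G @ H
        rate = np.log2(np.linalg.det(np.eye(L) + (rho / N) * Heff @ Heff.conj().T).real)
        fails += rate < R
    return fails / trials

# MIMO setting N = L = 2: diversity improves up to Q_min = N + L - 1 = 3,
# after which larger Q only shifts the curve (coding gain).
for Q in (1, 2, 3, 6):
    print(f"Q = {Q}: outage ~ {outage_pr(2, Q, 2, rho=10.0):.4f}")
```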
Fig. 4: Outage probability versus average SNR for the PR scheme (\(K=1\)).
Fig. 5: Outage probability versus \(Q\) under the PR scheme for \(\rho=5\) dB.
This observation is in accordance with our remark that the asymptotic performance of the PR scheme is limited by the \(N\times L\) channel. Finally, the presented results validate the accuracy of our theoretical analysis. Specifically, we observe that in both cases the simulation results (markers) perfectly match with the theoretical values (solid lines) of Proposition 1 for \(K=1\). In addition, the expression in (11) provides an exceptional approximation (dashed lines) of the achieved outage probability, even for small values of \(Q\), while as \(Q\) increases the approximation becomes tighter.
As noted previously in Fig. 4, for \(Q>Q_{min}\) the outage performance is improved only in terms of coding gain. However, this gain gradually diminishes as the number of elements increases. This observation is more evident in Fig. 5, where the outage probability is shown against the number of elements for the same channel settings at \(\rho=5\) dB, and is consistent with the conclusions derived in [18]. We can therefore deduce that under the PR scheme, employing a larger RIS does not necessarily provide significant gains to the outage probability of the RIS-aided network.
In Figs. 6 and 7, we present the achieved outage probability of the considered RIS-aided system under the proposed partition-based schemes (AR and FR schemes), for a network topology with \(N=L=1\) and \(2\) antennas, respectively, \(Q=60\) elements and different partition sizes. As expected, by increasing the number of transmit and receive antennas, the outage probability as well as the achieved diversity gain are improved. Again, the theoretical results regarding the AR scheme (solid lines) are in agreement with the simulations (markers) in both figures, which validates our analysis. In Fig. 6, we also show the approximations of Theorem 1 and Theorem 2 for the two schemes. It is observed that, in both cases, the derived expressions follow the behavior of the actual performance of the system and provide a very tight approximation. Furthermore, in Fig. 7, we include the lower bound of the FR scheme given in (21), assuming mutually independent channel matrices \(\mathcal{H}_{k}\). We can see that, for \(Q\gg K\), the lower bound is close to the performance achieved by the FR scheme, which justifies its use.
In both figures, the main observation is that for high SNR values, both the AR scheme and the FR scheme outperform the conventional PR scheme. Specifically, by increasing the number of partitions, the asymptotic performance of the presented schemes is improved in terms of diversity gain. We can also clearly see that the proposed schemes achieve the same diversity order, which is in accordance with our results in Section IV. However, the FR scheme outperforms the AR scheme in terms of coding gain. This is due to the fact that in the FR scheme, all the RIS elements are activated at each sub-slot, while in the AR scheme only the elements of the selected sub-surface are turned on. Moreover, we observe that, as the number of partitions increases, the gain achieved by the FR scheme over the AR scheme increases as well. In particular, in both figures, it can be seen that partitioning the RIS into \(4\) sub-surfaces almost doubles the coding gain between the two schemes, compared to the case of \(K=2\). In contrast, in the low SNR regime only the FR scheme outperforms the conventional case. This is expected since, according to the framework of the AR scheme, at each time sub-slot the spectral efficiency of the considered model deteriorates due to the reduced duration of each time sub-slot and the decreased number of RIS elements selected at each sub-surface.
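The following vectorized sketch reproduces this qualitative comparison, assuming i.i.d. Rayleigh fading, absorbing the random AR phase rotations into the channel coefficients, and using the FR flip pattern implied by (45) for \(K>2\); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
Q, K, rho, R, n = 60, 4, 0.1, 1.0, 50_000
m = Q // K

h = (rng.standard_normal((n, Q)) + 1j * rng.standard_normal((n, Q))) / np.sqrt(2)
g = (rng.standard_normal((n, Q)) + 1j * rng.standard_normal((n, Q))) / np.sqrt(2)
c = (h * g).reshape(n, K, m).sum(axis=2)       # per-sub-surface channels H_k

H_ar = c                                        # AR: only sub-surface k is ON
H_fr = c.sum(axis=1, keepdims=True) - 2 * c     # FR: sub-surface k flipped (K > 2)

for name, H in (("AR", H_ar), ("FR", H_fr)):
    rate = np.log2(1 + rho * np.abs(H) ** 2).mean(axis=1)  # K-sub-slot average
    print(name, "outage:", np.mean(rate < R))
```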
Fig. 6: Outage probability versus average SNR for the partition-based schemes; \(N=L=1\) antenna and \(Q=60\).

Fig. 7: Outage probability versus average SNR for the partition-based schemes; \(N=L=2\) antennas and \(Q=60\).

In Fig. 8, we compare the outage performance of the proposed schemes with the case of passive beamforming (PB) at the RIS, by considering an RIS-aided SISO channel. In the PB case, the Tx estimates the cascaded channel through a channel estimation method, e.g. [31], and aligns the phase shifts of the RIS elements, i.e., \(\phi_{i}=-\angle{h_{i}g_{i}}\), with \(1\leq i\leq Q\). We consider an ideal scenario where the channel estimation introduces no errors, so we have perfect CSI knowledge. The outage probability achieved by the PB is obtained in [8, Theorem 3]. Fig. 8 depicts the outage probability of the PB for \(Q=4\) and \(16\) elements, and the outage probability of the AR and FR schemes for \(Q=16\) and different partition sizes. As expected, the PB case outperforms the proposed schemes. On the other hand, it can be seen that, by increasing the number of partitions at the RIS, the difference between the outage performance of the proposed schemes and the PB decreases. Moreover, we consider a comparison that is fair in terms of power consumption at the RIS, i.e., with the same number of activated elements: the AR scheme with \(Q=16\) elements and \(K=4\) partitions against the PB case with \(Q=4\). In this case, we observe that the performance of the two scenarios is comparable. This shows that the proposed schemes have low implementation complexity and conserve the available resources at the RIS, but can still achieve significant performance gains.
Finally, a comparison of the achieved DMT between the PR scheme and the partition-based RIS schemes is depicted in Fig. 9, for a network setting with \(N=L=3\) antennas. We can see that for the PR scheme (\(K=1\)) the optimum DMT is achieved when \(Q=N+L-1=5\), which is denoted by the red solid line. Any other topology with \(Q>5\) can be vertically reduced to the \((3,5,3)\) channel, since it will be DMT-equivalent. By employing the AR scheme, we observe that the DMT can be significantly improved in terms of diversity gain. Moreover, the proposed scheme achieves the maximum multiplexing gain, if the number of partitions satisfies the lower bound of (29), i.e., \(m\geq\min\{N,L\}\). However, under the AR scheme, the considered channel setting cannot achieve both \(d_{\max}\) and \(r_{\max}\) provided by the cut-set bound with a single value of \(K\), since (29) cannot be fully satisfied with any partition size. Therefore, if a larger number of partitions is considered so that \(m<\min\{N,L\}\), the presented scheme can still increase the achieved diversity gain, but it becomes suboptimal in terms of multiplexing gain. This remark is illustrated in Fig. 9, by considering an RIS with \(Q=10\) and a partition size of \(5\) or \(10\) sub-surfaces. Fig. 9 also shows the provided lower bound on the DMT achieved by the FR scheme. It can be seen that the FR scheme outperforms both the PR and the AR scheme. In particular, with the FR scheme, the RIS-aided channel achieves the same diversity order obtained by the AR scheme, and remains optimal in terms of multiplexing gain, for all partition sizes. Finally, for the considered network topology, only the FR scheme achieves both \(d_{\max}\) and \(r_{\max}\) with a single value of \(K\), which is depicted with the black dotted line for the case with \(Q=K=10\).

Fig. 8: Outage probability comparison of the partition-based schemes and the PB case for a network topology with \(N=L=1\).

Fig. 9: DMT comparison of the PR scheme and the partition-based schemes for a network topology with \(N=L=3\).
## VI Conclusions
In this paper, we studied the performance of RIS-assisted MIMO communications by employing two low-complexity partition-based schemes that do not require any CSI knowledge at the transmitter side for their implementation. Specifically, the RIS elements are partitioned into sub-surfaces, which are reconfigured periodically in an efficient manner, creating a parallel channel in the time domain. We first proposed the AR scheme, which activates each sub-surface in a consecutive order. Next, we presented the FR scheme, where each sub-surface sequentially flips the incident signals through a phase shift adjustment. Theoretical expressions of the outage probability and the DMT were provided, and the considered schemes were compared to the conventional PR scheme. We demonstrated that the proposed schemes increase the diversity order achieved by the RIS-aided system and improve the performance beyond the limit of the MIMO channel, while keeping the implementation complexity low.
## Appendix

### _A. Proof of Lemma 1_
Since \(K=1\), all the RIS elements belong to the same (single) sub-surface and are reconfigured only at the beginning of each time slot. Due to the random rotations at the elements, the induced phase shifts do not have any effect on the channel gain [8]. Therefore, without loss of generality, we can equivalently consider the case where \(\phi_{i,1}=0\), with \(1\leq i\leq Q\). The reflection matrix of the RIS is then equal to the \(Q\times Q\) identity matrix, \(\Phi=\mathbf{I}_{Q}\), and the resulting (\(N,Q,L\)) channel can be interpreted as a Rayleigh product channel, i.e., \(\mathcal{H}_{1}=\mathbf{GH}\). The mutual information of the channel is thus equal to
\[\mathcal{I}=\log_{2}\left[\det\left(\mathbf{I}_{L}+\frac{\rho}{N}\mathcal{H}_{1} \mathcal{H}_{1}^{\dagger}\right)\right]=\sum_{i=1}^{n_{0}}\log_{2}\left(1+\frac {\rho}{N}\lambda_{i}\right), \tag{31}\]
where \(\lambda_{i}\), \(1\leq i\leq n_{0}\), are the eigenvalues of \(\mathcal{H}_{1}\mathcal{H}_{1}^{\dagger}\) and their joint PDF is given by [46]. The characteristic function of the mutual information is defined as
\[\varphi(Q,t)=\mathbb{E}\left\{\exp(\jmath t\mathcal{I})\right\}=\mathbb{E} \left\{\prod_{i=1}^{n_{0}}\left(1+\frac{\rho}{N}\lambda_{i}\right)^{\jmath t/ \ln 2}\right\}. \tag{32}\]
By using the analytical results of [46, Section II.B] for the evaluation of the moment generating function of the mutual information of a multi-hop MIMO channel, the final expression of \(\varphi(Q,t)\) is obtained.
### _B. Proof of Theorem 1_
By considering the SISO channel, the end-to-end channel matrix given in (2) is simplified to a scalar value which, under the AR scheme, is given by
\[\mathcal{H}_{k}=\sum_{i=1}^{Q}h_{i}g_{i}a_{i,k}, \tag{33}\]
where \(a_{i,k}\) is provided according to the framework of the AR scheme in (7). It has been proved in [8] that by applying the CLT, the distribution of the end-to-end channel \(\mathcal{H}_{k}\) converges to a complex Gaussian distribution with zero mean and variance \(m\), since at sub-slot \(k\) only the \(m\) elements that correspond to the sub-surface \(\mathcal{S}_{k}\) are turned ON. Therefore, the channel gain \(W_{k}=|\mathcal{H}_{k}|^{2}\) is exponentially distributed with parameter \(1/m\). Recall that the sub-surfaces \(\mathcal{S}_{k}\) do not overlap, so the random variables \(W_{k}\) are mutually independent. The outage probability is then calculated as follows
\[\Pi_{\mathrm{AR}}^{\mathrm{CLT}}(R,K,m) =\mathbb{P}\left\{\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left(1+\rho W _{k}\right)<R\right\}\] \[=\mathbb{P}\left\{\prod_{k=1}^{K}\left(1+\rho W_{k}\right)<2^{RK} \right\}, \tag{34}\]
which follows from the logarithmic identity \(\log_{2}(x)+\log_{2}(y)=\log_{2}(xy)\). The final expression of the outage probability in (10) is eventually derived by obtaining the cumulative distribution function (CDF) of the product of \(K\) independent shifted exponential random variables, which is given in [26, Corollary 2].
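As a quick numerical check of this step, the sketch below verifies that the product-form event in (34) and the original rate-threshold event coincide for \(W_{k}\sim\mathrm{Exp}(1/m)\); the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m, rho, R = 3, 5, 1.0, 2.0                       # illustrative values
W = rng.exponential(scale=m, size=(1_000_000, K))   # W_k ~ Exp(1/m), mean m

out_prod = np.mean(np.prod(1 + rho * W, axis=1) < 2 ** (R * K))
out_rate = np.mean(np.mean(np.log2(1 + rho * W), axis=1) < R)
print(out_prod, out_rate)   # identical events, hence identical estimates
```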
### _C. Proof of Lemma 2_
Since \(N=L=1\), the channel \(\mathcal{H}_{k}\) is a scalar value given by
\[\mathcal{H}_{k}=\sum_{i=1}^{Q}h_{i}g_{i}\exp\left(\jmath\phi_{i,k}\right), \tag{35}\]
where \(\phi_{i,k}\) is provided in (12) by employing the FR scheme. The random variables \(W_{k}=\left|\mathcal{H}_{k}\right|^{2}\) are correlated since the channel coefficients \(h_{i}\) and \(g_{i}\) remain constant during one time slot i.e., over \(K\) sub-slots. In order to calculate the correlation coefficient between \(W_{k}\) and \(W_{l}\), \(k\neq l\), we consider the Pearson correlation formula which is given by
\[\zeta=\frac{\mathbb{E}\left\{W_{k}W_{l}\right\}-\mathbb{E}\left\{W_{k}\right\} \mathbb{E}\left\{W_{l}\right\}}{\sigma_{W_{k}}\sigma_{W_{l}}}, \tag{36}\]
where \(\sigma_{W_{i}}=\sqrt{\mathbb{E}\left\{W_{i}^{2}\right\}-\mathbb{E}\left\{W_{ i}\right\}^{2}}\). We find that \(\mathbb{E}\left\{W_{i}\right\}=Q\), \(\mathbb{E}\left\{W_{i}^{2}\right\}=2Q(Q+1)\) and \(\mathbb{E}\left\{W_{k}W_{l}\right\}=Q(Q+3)+(Q-b)(Q-3b-1)+b(b-1)\), where \(b\) is given by (14). By substituting the previous results in (36), and after some trivial algebraic manipulations, we get the final expression of \(\zeta\) as in (13).
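A small simulation can corroborate the unconditional moments quoted above; the \(K=2\) flip pattern used for the second sub-slot is an assumption made purely for illustration, so the printed correlation is an empirical value for that pattern only.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, n = 16, 100_000
h = (rng.standard_normal((n, Q)) + 1j * rng.standard_normal((n, Q))) / np.sqrt(2)
g = (rng.standard_normal((n, Q)) + 1j * rng.standard_normal((n, Q))) / np.sqrt(2)
c = h * g

W1 = np.abs(c.sum(axis=1)) ** 2                  # sub-slot with all phases aligned
flip = np.r_[np.ones(Q // 2), -np.ones(Q // 2)]  # assumed K = 2 flip pattern
W2 = np.abs((c * flip).sum(axis=1)) ** 2

print("E{W}   :", W1.mean(), " target", Q)            # noisy estimates
print("E{W^2} :", (W1 ** 2).mean(), " target", 2 * Q * (Q + 1))
print("Pearson:", np.corrcoef(W1, W2)[0, 1])          # empirical zeta
```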
### _D. Proof of Theorem 2_
By using the approximated channel \(\tilde{\mathcal{H}}_{k}\) defined in (16) and by setting \(W_{k}=|\tilde{\mathcal{H}}_{k}|^{2}\), the outage probability under the FR scheme is evaluated as
\[\Pi_{\mathrm{FR}}(R,K,m)\] \[=\mathbb{P}\left\{\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left(1+\rho W _{k}\right)<R\right\}\] \[=\mathbb{P}\left\{\prod_{k=1}^{K}\left(1+\rho W_{k}\right)<2^{RK}\right\} \tag{37}\] \[=\mathbb{E}_{W_{k}}\left\{F_{W_{K}}\left[\frac{1}{\rho}\left( \frac{2^{RK}}{\prod_{k=1}^{K-1}\left(1+\rho W_{k}\right)}-1\right)\right] \right\}, \tag{38}\]
which follows by solving the inequality in (37) for \(W_{K}\). The conditional CDF of \(W_{K}\), given \(W_{1}\), is derived by taking the integral of (18), which is equal to
\[F_{W_{K}|W_{1}}(x|y)=1-\mathcal{Q}_{1}\left(\sqrt{\frac{2\zeta y}{\Omega}}, \sqrt{\frac{2x}{\Omega}}\right). \tag{39}\]
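The Marcum \(Q\)-function in (39) is conveniently evaluated through the survival function of a noncentral chi-square distribution; a minimal sketch, with illustrative values for \(x\), \(y\) and for \(\zeta\), \(\Omega\) as introduced in (13) and (17)-(18).

```python
import numpy as np
from scipy.stats import ncx2

def marcum_q1(a, b):
    # Q_M(a, b) equals the survival function of a noncentral chi-square
    # variable with 2M degrees of freedom and noncentrality a^2 at b^2.
    return ncx2.sf(b ** 2, df=2, nc=a ** 2)

def cond_cdf(x, y, zeta, omega):
    """Conditional CDF F_{W_K | W_1}(x | y) of eq. (39)."""
    return 1.0 - marcum_q1(np.sqrt(2 * zeta * y / omega), np.sqrt(2 * x / omega))

print(cond_cdf(x=5.0, y=4.0, zeta=0.3, omega=10.0))
```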
Since the random variables \(W_{k}\), conditioned on \(W_{1}\), are mutually independent, we have
\[\Pi_{\mathrm{FR}}(R,K,m)=\int_{w_{1}}\cdots\int_{w_{K-1}}f_{W_{1} }(w_{1})\] \[\quad\times\prod_{k=2}^{K-1}f_{W_{k}|W_{1}}(w_{k}|w_{1})F_{W_{K}|W _{1}}\left(\vartheta|w_{1}\right)dw_{K-1}\cdots dw_{1}, \tag{40}\]
where
\[\vartheta\triangleq\frac{1}{\rho}\left(\frac{2^{RK}}{\prod_{k=1}^{K-1}\left(1+ \rho w_{k}\right)}-1\right), \tag{41}\]
while \(f_{W_{1}}(w_{1})\) and \(f_{W_{k}|W_{1}}(w_{k}|w_{1})\) are given by (17) and (18), respectively. Regarding the integration limits, we need to ensure that the inequality
\[\frac{2^{RK}}{\prod_{k=1}^{K-1}\left(1+\rho w_{k}\right)}-1>0, \tag{42}\]
is satisfied sequentially for each \(w_{k}\) from \(K-1\) to \(1\). The final expression is derived by using the integral transformation \(\tau_{k}\to 1+\rho w_{k}\).
### _E. Proof of Proposition 2_
We first prove that the FR scheme achieves the same diversity order as the AR scheme i.e.,
\[d_{(N,Q,L)}^{\mathrm{FR}}(0)=d_{(N,Q,L)}^{\mathrm{AR}}(0)=Kd_{(N,m,L)}^{\mathrm{ PR}}(0). \tag{43}\]
Let us denote by \(\mathcal{H}_{k}^{\prime}\) the \(k\)-th sub-channel created under the FR scheme, and by \(\mathcal{H}_{k}\) the respective sub-channel resulting from the AR scheme. Based on the description of each scheme, the set of matrices \(\left\{\mathcal{H}_{k}^{\prime}\right\}_{k=1}^{K}\) is derived from \(\left\{\mathcal{H}_{k}\right\}_{k=1}^{K}\) by using the transformation
\[\left[\mathcal{H}_{1}^{\prime}\;\mathcal{H}_{2}^{\prime}\;\ldots\mathcal{H}_{ K}^{\prime}\right]=\left[\mathcal{H}_{1}\;\mathcal{H}_{2}\;\ldots\mathcal{H}_{K} \right]\mathbf{T}, \tag{44}\]
where \(\mathbf{T}\) is a \(KN\times KN\) matrix composed of \(K\times K\) sub-matrices and each sub-matrix \(\mathbf{T}_{i,j}\) is equal to
\[\mathbf{T}_{i,j}=\begin{cases}-\mathbf{I}_{N},&(i=j,K>2)\;\text{or}\;(i=j>1,K =2);\\ \mathbf{I}_{N},&\text{otherwise}.\end{cases} \tag{45}\]
We can easily verify that \(\mathbf{T}\) is invertible and constant, so (44) is a fixed linear transformation of the sub-channels. Therefore, the two schemes achieve the same diversity order since
\[\sum_{k=1}^{K}\left\|\mathcal{H}_{k}^{\prime}\right\|_{F}^{2} \geq\lambda_{\min}\left(\mathbf{T}\mathbf{T}^{\dagger}\right) \sum_{k=1}^{K}\left\|\mathcal{H}_{k}\right\|_{F}^{2}=K\sum_{k=1}^{K}\left\| \mathcal{H}_{k}\right\|_{F}^{2}\] \[\doteq\sum_{k=1}^{K}\left\|\mathcal{H}_{k}\right\|_{F}^{2}, \tag{46}\]
where \(\lambda_{\min}(\cdot)\) denotes the minimum eigenvalue of a matrix and the relation \(a\doteq b^{c}\) means \(\lim_{b\rightarrow\infty}\frac{\log a}{\log b}=c\).
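The only properties of \(\mathbf{T}\) used in (46) are its invertibility and the strict positivity of \(\lambda_{\min}(\mathbf{T}\mathbf{T}^{\dagger})\); any positive constant is immaterial under exponential equality. Both are easy to check numerically, as in the sketch below.

```python
import numpy as np

def build_T(K, N):
    """Block matrix T of eq. (45), built from K x K blocks of +/- I_N."""
    T = np.zeros((K * N, K * N))
    I = np.eye(N)
    for i in range(K):
        for j in range(K):
            minus = (i == j and K > 2) or (i == j and i > 0 and K == 2)
            T[i*N:(i+1)*N, j*N:(j+1)*N] = -I if minus else I
    return T

for K in (2, 3, 4):
    T = build_T(K, N=2)
    lam_min = np.linalg.eigvalsh(T @ T.T).min()
    print(f"K = {K}: det(T) = {np.linalg.det(T):+.1f}, "
          f"lambda_min(T T^T) = {lam_min:.3f} > 0")
```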
The next step is to prove that the FR scheme can achieve at least the DMT of the AR scheme for \(r=0,...,\min\{N,m,L\}\). In the considered channel, the multiplexing gain \(r\) can be achieved if an (\(r,r,r\)) sub-channel is reserved for spatial multiplexing at each time sub-slot. According to Corollary 1 and by using the DMT characterization of a Rayleigh product channel given in [24, Theorem 3.4], we have
\[d_{(N,Q,L)}^{\mathrm{AR}}(r)=Kd_{(N-r,m-r,L-r)}^{\mathrm{PR}}(0)=d_{(N-r,Q-Kr, L-r)}^{\mathrm{AR}}(0). \tag{47}\]
Similar to (43), the (\(N-r,Q-Kr,L-r\)) channel achieves the same diversity order under both the FR and the AR scheme. Besides, one can show that \(d_{(N,Q,L)}^{\mathrm{FR}}(r)\geq d_{(N-r,Q-Kr,L-r)}^{\mathrm{FR}}(0)\), since all the elements are activated in the FR scheme and the multiplexing gain \(r\) can be achieved with less than \(Kr\) elements. It follows that
\[d_{(N,Q,L)}^{\mathrm{FR}}(r)\geq d_{(N,Q,L)}^{\mathrm{AR}}(r). \tag{48}\]
Finally, the parallel channel under the FR scheme is in outage if \(\sum_{k=1}^{K}\mathcal{I}_{k}<Kr\log\rho\), which implies that at least one of the sub-channels is in outage with the corresponding random variable \(\mathcal{I}_{k}\) below the target rate \(r\log\rho\). Therefore, the FR scheme can achieve at least the DMT of the conventional case i.e.,
\[d_{(N,Q,L)}^{\mathrm{FR}}(r)\geq d_{(N,Q,L)}^{\mathrm{PR}}(r). \tag{49}\]
By combining (43), (48) and (49) we get the lower bound for the DMT of the FR scheme as (30).
|
2301.10849 | Two-loop helicity amplitudes for $H+$jet production to higher orders in
the dimensional regulator | In view of the forthcoming High-Luminosity phase of the LHC,
next-to-next-to-next-to-leading (N$^3$LO) calculations for the most
phenomenologically relevant processes become necessary. In this work, we take
the first step towards this goal for H$+$jet production by computing the one-
and two-loop helicity amplitudes for the two contributing processes, $H\to
ggg$, $H\to q\bar{q}g$, in an effective theory with infinite top quark mass, to
higher orders in the dimensional regulator. We decompose the amplitude in
scalar form factors related to the helicity amplitudes and in a new basis of
tensorial structures. The form factors receive contributions from Feynman
integrals which were reduced to a novel canonical basis of master integrals. We
derive and solve a set of differential equations for these integrals in terms
of Multiple Polylogarithms (MPLs) of two variables up to transcendental weight
six. | Thomas Gehrmann, Petr Jakubčík, Cesare Carlo Mella, Nikolaos Syrrakos, Lorenzo Tancredi | 2023-01-25T22:17:14Z | http://arxiv.org/abs/2301.10849v3 | # Two-loop helicity amplitudes for \(H\)+jet production to higher orders in the dimensional regulator
###### Abstract
In view of the forthcoming High-Luminosity phase of the LHC, next-to-next-to-next-to-leading (N\({}^{3}\)LO) calculations for the most phenomenologically relevant processes become necessary. In this work, we take the first step towards this goal for H+jet production by computing the one- and two-loop helicity amplitudes for the two contributing processes, \(H\to ggg\), \(H\to q\bar{q}g\), in an effective theory with infinite top quark mass, to higher orders in the dimensional regulator. We decompose the amplitude in scalar form factors related to the helicity amplitudes and in a new basis of tensorial structures. The form factors receive contributions from Feynman integrals which were reduced to a novel canonical basis of master integrals. We derive and solve a set of differential equations for these integrals in terms of Multiple Polylogarithms (MPLs) of two variables up to transcendental weight six.
## 1 Introduction
A little over a decade ago, the Higgs boson was discovered after analysing the data collected during Run I at the Large Hadron Collider (LHC) at CERN [1; 2]. The discovery was achieved while the collider was running at reduced centre-of-mass energies of 7 and 8 TeV and with only a small fraction of the total dataset which will be accumulated during its entire runtime. Indeed, it is expected that the forthcoming High-Luminosity phase of the
LHC (HL-LHC) will yield a dataset corresponding to 3 \(ab^{-1}\) of integrated luminosity for \(pp\) collisions at 14 TeV [3].
Since its discovery, the Higgs boson has been at the centre of the experimental effort at the LHC [4]. Studying its properties improves our understanding of electroweak symmetry breaking (EWSB), the mechanism which is believed to be responsible for the generation of the masses of fermions and weak gauge bosons. The dominant production channel for the Higgs boson at the LHC is gluon fusion. In the Standard Model, the Higgs coupling to two gluons is mainly mediated through a loop of top quarks, making it a loop-induced process already at Leading Order (LO). For this reason, computing higher order perturbative corrections to Higgs production in the full theory quickly becomes prohibitive.
It was realized long ago that radiative corrections can increase the LO Higgs cross-section in gluon fusion by as much as \(\mathcal{O}(100\%)\)[5; 6] and describing its production in hadron collisions thus requires higher-order calculations. Indeed, quite recently the computation of the fully inclusive Higgs cross-section with full dependence on the top quark mass has been pushed to next-to-next-to-leading-order (NNLO) [7] using numerical techniques to handle the required two- and three-loop scattering amplitudes.
A useful alternative to performing calculations with full dependence on the top mass is to work in the heavy top quark mass limit \(M_{t}\to\infty\). Under the assumption that the top quark is the largest scale involved in the calculation, one can integrate out the top mass and formulate an effective Lagrangian for the \(Hgg\) coupling [8; 9; 10]. In this description, the top-quark loop mediating the \(Hgg\) interaction shrinks to a point and calculations start at tree level, involving only massless partons. In this limit, inclusive [11; 12; 13] as well as fully differential [14] predictions for Higgs boson production via gluon fusion are known up to next-to-next-to-next-to-leading-order (N\({}^{3}\)LO).
One of the most promising observables to study the EWSB mechanism is the Higgs transverse momentum, see for example [15]. To remain differential in external radiation, one must study the production of a Higgs boson in association with (at least) one resolved jet. For this process, results in the heavy top quark limit are currently known up to NNLO [16; 17; 18; 19; 20], and the residual theoretical uncertainty can be estimated at around \(\mathcal{O}(5\%)\). The heavy top quark limit approximation is valid for transverse momentum of the Higgs which is lower than two times the top quark mass, \(p_{T}<2\,m_{t}\). For higher \(p_{T}\), the heavy quark loop is resolved and finite mass corrections are needed. Results with finite top mass at NLO were first obtained for the very high transverse momentum kinematic region, \(p_{T}\gg 2m_{t}\), [21], and numerically for general kinematics [22; 23]. More recently, following the calculation of the relevant two-loop master integrals [24; 25], the full NLO analytical calculation has been completed [26].
Taking into account the wealth of currently available data, as well as the expected high-luminosity phase, N\({}^{3}\)LO calculations will become essential [27] in order to perform phenomenological studies at the 1% level at the LHC. A key ingredient to extend the current calculations to N\({}^{3}\)LO are the four-point amplitudes for the production of a Higgs boson and a parton in parton-parton collisions. Working in the effective theory described above, one needs to compute tree level, one-loop, two-loop and three-loop amplitudes for the scattering of three massless partons and one massive scalar. On top of their phenomenological
importance, the structure of these amplitudes to higher loops is also of formal interest and has been the subject of thorough investigation. In this context it is worth noticing that contrary to the naive expectation, up to two loops, the finite remainder amplitudes for the decay of a Higgs boson to three gluons have been shown to be expressible in terms of just classical polylogarithms [28].
Starting at one loop, amplitudes exhibit singularities which can be regulated in the framework of dimensional regularization. In \(D=4-2\epsilon\) dimensions, the loop amplitudes are computed as a Laurent expansion in \(\epsilon\). An N\({}^{3}\)LO computation requires one-loop amplitudes up to \(\mathcal{O}(\epsilon^{4})\), two-loop amplitudes up to \(\mathcal{O}(\epsilon^{2})\) and three-loop amplitudes up to \(\mathcal{O}(\epsilon^{0})\). Our goal in this paper is to provide the first ingredient for such a calculation, namely the two-loop amplitudes up to order \(\epsilon^{2}\). These amplitudes were previously obtained in [29] to order \(\epsilon^{0}\), providing a key ingredient to the calculation of NNLO QCD corrections to Higgs+jet production [16; 17; 18; 19; 20] and the Higgs boson transverse momentum distribution [19; 30].
While our overall approach is relatively standard, we include multiple new elements which help us organize the calculation more efficiently in view of a subsequent extension to three loops. First of all, we cast the relevant amplitudes into a compact tensorial basis and construct new helicity projectors to extract the corresponding helicity amplitudes directly from the Feynman diagrams [31; 32]. We then employ standard integration-by-parts identities [33; 34] to express these amplitudes in terms of so-called master integrals, and evaluate them using the differential equation method [35; 36; 37; 38]. At two loops, the master integrals were computed up to transcendental weight four more than two decades ago [39; 40]. Since then, substantial advances were made in the understanding of mathematical structures underlying Feynman integrals and their associated differential equations.
Indeed, about a decade ago it was realized that a special class of Feynman integrals, dubbed _local integrals_[41; 42], plays a crucial role in representing scattering amplitudes, in particular in the case of (planar) \(N=4\) Super Yang-Mills theory (SYM). These integrals only feature singularities of the logarithmic type and exhibit uniform maximum transcendentality [42; 43], which has for a long time been conjectured to characterize scattering amplitudes in \(N=4\) SYM [44]. While these properties do not translate in an obvious way to non-supersymmetric theories like QCD, it has been shown that such integrals can still substantially simplify the calculations of scattering amplitudes within the Standard Model. Namely, integrals of this type fulfil particularly simple systems of differential equations, in the so-called canonical form [45]. Canonical sets of equations are more easily solved and provide a direct handle on the analytic properties of the corresponding integrals in various singular regions. Importantly, the solution of a canonical system with rational coefficients is straightforwardly expressed in terms of a well-understood class of functions, the Multiple Polylogarithms (MPLs) [46; 47; 48].
Among the initial applications of this formalism were planar ladder-type integrals in \(H\)+jet production up to three loops [49], which include the planar two-loop integrals up to \(\mathcal{O}(\epsilon^{2})\) as a subset. Here we consider the full set of two-loop integrals: planar and non-planar. We construct a pure basis of uniform transcendental weight and demonstrate that it can be solved in terms of MPLs to any order in \(\epsilon\). At variance with [39; 40], we show that using a generalized set of regularity conditions on the canonical master integrals, all
boundary conditions required to fix the solution of the differential equations can be inferred in terms of a small number of one-scale two- and three-point functions which are known in closed form in the literature. In view of applications at N\({}^{3}\)LO, we limit ourselves to perform the calculation explicitly to order \(\epsilon^{2}\), which corresponds to transcendental weight six.
The structure of the paper is as follows. The effective coupling of the Higgs boson to light partons and the definition of kinematics is given in Section 2. In Section 3, we formulate the general structure of the amplitude and use projectors to obtain tensor coefficients and construct the helicity amplitudes. In Section 4, we describe in detail the construction of pure bases for the two-loop integral families and solve their canonical differential equations analytically. The ultraviolet renormalization and the subtraction of infrared singularities of our amplitudes are discussed in Section 5. Finally, the crossing of the helicity amplitudes to all appropriate kinematic configurations and the necessary analytic continuation of the relevant multiple polylogarithms is discussed in Section 6. We conclude with a brief summary in Section 7.
## 2 Notation and kinematics
### The effective Lagrangian
The Higgs boson interacts with Standard Model particles with a coupling strength proportional to their mass, and therefore cannot couple directly to gluons or massless quarks. Nevertheless, starting at one loop, the Higgs can interact with gluons through virtual loops of massive quarks. In the limit of a very heavy quark mass, \(m_{q}\to\infty\), one can show that this coupling becomes independent of \(m_{q}\) and an effective theory can be formulated by integrating out the corresponding quark from the full theory [8; 9; 10]. While most quarks have relatively small masses compared to the typical energy scales of scattering processes at the LHC and can often be considered as massless, the same is not true for the top quark. Since it is the heaviest of all known Standard Model particles, this effective field theory (EFT) works extremely well for the top quark, at least as long as all scales involved are smaller than twice its mass [50; 51].
In the following, we will work in the EFT with a matter content of \(N_{f}=5\) massless quarks and one very heavy quark, the top quark, integrated out. In this case, the effective Lagrangian becomes
\[\mathcal{L}_{int}=-\,\frac{\lambda}{4}HG_{a}^{\mu\nu}G_{a,\mu\nu}, \tag{1}\]
where \(G_{a}^{\mu\nu}\) is the field strength tensor of the gluons and \(H\) is the Higgs field. From dimensional analysis, the effective coupling \(\lambda\) has inverse mass dimension. It was shown long ago [52; 53] how to perform the matching of this effective theory to the Standard Model Lagrangian [54; 55; 6].
### Kinematics
We are ultimately interested in computing the amplitude for the production of a Higgs boson and a hadronic jet in parton-parton annihilation at the LHC. For simplicity, we start
by considering the problem in the crossed kinematics, which corresponds to the decay of a Higgs boson into three partons. There are two relevant partonic channels, namely the decay into three gluons
\[H(p_{4})\to g_{1}(p_{1})+g_{2}(p_{2})+g_{3}(p_{3}), \tag{2}\]
and into a quark, anti-quark and a gluon
\[H(p_{4})\to q(p_{1})+\bar{q}(p_{2})+g(p_{3}). \tag{3}\]
The amplitudes for the production processes can then be obtained through an analytic continuation of the decay kinematics [56], see Section 6 for more details.
The Mandelstam invariants are defined as
\[s_{12}=(p_{1}+p_{2})^{2}\,,\qquad s_{13}=(p_{1}+p_{3})^{2}\,,\qquad s_{23}=(p_{ 2}+p_{3})^{2}, \tag{4}\]
and satisfy the conservation equation
\[s_{12}+s_{13}+s_{23}=M_{H}^{2}, \tag{5}\]
where \(M_{H}\) is the mass of the Higgs particle. It is more convenient to work with dimensionless ratios
\[x=\frac{s_{12}}{M_{H}^{2}},\qquad y=\frac{s_{13}}{M_{H}^{2}},\qquad z=\frac{s_ {23}}{M_{H}^{2}}, \tag{6}\]
such that (5) implies the relation,
\[x+y+z=1. \tag{7}\]
In the decay kinematic region, all these invariants are non-negative. This, together with (6), defines the corresponding kinematic region
\[z\geq 0,\qquad 0\leq y\leq 1-z,\qquad x=1-y-z. \tag{8}\]
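A trivial numerical sketch of this parametrization, sampling points in the decay region (8) and checking the constraint (7); the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.uniform(0.0, 1.0, size=5)
y = rng.uniform(0.0, 1.0 - z)              # 0 <= y <= 1 - z, eq. (8)
x = 1.0 - y - z                            # eq. (7)
assert np.all(x >= 0) and np.allclose(x + y + z, 1.0)
s12, s13, s23 = x, y, z                    # invariants in units of M_H^2, eq. (6)
print(np.c_[x, y, z])
```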
## 3 Tensor Decomposition
Following earlier work on this process [29], we decompose the amplitudes \(\mathcal{M}_{ggg}\) and \(\mathcal{M}_{q\overline{q}g}\) as
\[\mathcal{M}_{ggg} =\mathcal{S}_{\mu\nu\rho}(g_{1},g_{2},g_{3})\,\epsilon_{1}^{\mu} \epsilon_{2}^{\nu}\epsilon_{3}^{\rho}, \tag{9}\] \[\mathcal{M}_{q\overline{q}g} =\mathcal{T}_{\mu}(q,\bar{q},g)\,\epsilon^{\mu},\]
where we used \(\epsilon_{i}\) to denote the polarization vectors of external gluons. The above tensors can be expanded perturbatively in the QCD coupling constant \(\alpha_{s}\) as
\[\mathcal{S}_{\mu\nu\rho}(g_{1},g_{2},g_{3})= \,\lambda\sqrt{4\pi\alpha_{s}}f^{a_{1}a_{2}a_{3}}\Big{[}\mathcal{ S}^{(0)}_{\mu\nu\rho}(g_{1},g_{2},g_{3})+\left(\frac{\alpha_{s}}{2\pi}\right) \mathcal{S}^{(1)}_{\mu\nu\rho}(g_{1},g_{2},g_{3})\] \[\qquad\qquad\qquad\qquad+\left(\frac{\alpha_{s}}{2\pi}\right)^{ 2}\mathcal{S}^{(2)}_{\mu\nu\rho}(g_{1},g_{2},g_{3})+\mathcal{O}(\alpha_{s}^{ 3})\Big{]}\,, \tag{10}\] \[\mathcal{T}_{\mu}(q,\overline{q},g)= \,\lambda\sqrt{4\pi\alpha_{s}}T^{a}_{ij}\Big{[}\mathcal{T}^{(0)}_ {\mu}(q,\overline{q},g)+\left(\frac{\alpha_{s}}{2\pi}\right)\mathcal{T}^{(1)} _{\mu}(q,\overline{q},g)\] \[\qquad\qquad\qquad+\left(\frac{\alpha_{s}}{2\pi}\right)^{2} \mathcal{T}^{(2)}_{\mu}(q,\overline{q},g)+\mathcal{O}(\alpha_{s}^{3})\Big{]}\,, \tag{11}\]
where the coefficients \(\mathcal{S}^{(i)}_{\mu\nu\rho}\) and \(\mathcal{T}^{(i)}_{\mu}\) are the \(i\)-loop contributions to the amplitude. The \(SU(3)\) group generators are normalised as \(\mathrm{Tr}(T^{a}T^{b})=\delta^{ab}/2\).
Given the external states and their possible helicity and spin configurations, the amplitudes \(\mathcal{S}_{\mu\nu\rho}\) and \(\mathcal{T}_{\mu}\) can only depend on a limited number of tensor structures. These structures can be further constrained by exploiting symmetries and choosing a gauge or, following [29], by enforcing gauge invariance through the Ward identities. In this Section, we aim to find such a tensor basis in order to be able to work with the scalar coefficients of the amplitudes with respect to this basis, known as the form factors. The form factors, in turn, are obtained by applying projector operators on the full amplitude expanded in Feynman diagrams. We derive a basis of 4 tensor structures for \(H\to ggg\) and a basis of 2 tensor structures for \(H\to q\bar{q}g\), and the corresponding projectors in Subsections 3.1 and 3.2, respectively.
Since we are ultimately interested in fixing the helicities of external states and computing the associated helicity amplitudes, we introduce the spinor-helicity formalism in Subsection 3.3 and derive a set of genuinely independent helicity amplitudes. These are written as a unique spinor factor times a scalar coefficient which is a linear combination of the form factors. The same linear combination of form factor projectors also defines a helicity amplitude projector.
### Tensor decomposition for \(H\to ggg\)
Let us start considering the decay of a scalar boson into three massless spin-one particles. The most general tensor structure one can build using the four-vectors associated with the external particles is
\[\mathcal{S}_{\mu\nu\rho}(g_{1},g_{2},g_{3})\epsilon_{1}^{\mu} \epsilon_{2}^{\nu}\epsilon_{3}^{\rho} =\sum_{i,j,k=1}^{3}A_{ijk}\,p_{i}\cdot\epsilon_{1}\,p_{j}\cdot \epsilon_{2}\,p_{k}\cdot\epsilon_{3}+\sum_{i=1}^{3}B_{i}\,p_{i}\cdot\epsilon_{ 1}\epsilon_{2}\cdot\epsilon_{3}\] \[\quad+\sum_{i=1}^{3}C_{i}\,p_{i}\cdot\epsilon_{2}\epsilon_{3} \cdot\epsilon_{1}+\sum_{i=1}^{3}D_{i}\,p_{i}\cdot\epsilon_{3}\epsilon_{1} \cdot\epsilon_{2}\,.\]
There are four helicity configurations for this amplitude, so we expect four independent tensor structures in \(D=4\). Indeed, the transversality conditions \(p_{i}\cdot\epsilon_{i}=0\), \(i=1,2,3\) and the cyclic gauge choice \(\epsilon_{1}\cdot p_{2}=0\), \(\epsilon_{2}\cdot p_{3}=0\), \(\epsilon_{3}\cdot p_{1}=0\) restrict the tensor structures considerably:
\[\mathcal{S}_{\mu\nu\rho}(g_{1},g_{2},g_{3})\epsilon_{1}^{\mu} \epsilon_{2}^{\nu}\epsilon_{3}^{\rho} =A_{312}\,p_{3}\cdot\epsilon_{1}\,p_{1}\cdot\epsilon_{2}\,p_{2} \cdot\epsilon_{3}+B_{3}\,\epsilon_{2}\cdot\epsilon_{3}\,p_{3}\cdot\epsilon_{1}\] \[\quad+C_{1}\,p_{1}\cdot\epsilon_{2}\,\epsilon_{3}\cdot\epsilon_{ 1}+D_{2}\,p_{2}\cdot\epsilon_{3}\,\epsilon_{1}\cdot\epsilon_{2}\,, \tag{10}\] \[=\mathcal{G}_{1}T_{1}+\mathcal{G}_{2}T_{2}+\mathcal{G}_{3}T_{3}+ \mathcal{G}_{4}T_{4}\,, \tag{11}\]
where we relabelled the coefficients \(\{A_{i},...,D_{i}\}\) as the form factors \(\mathcal{G}_{i}\) and defined the basis
\[T_{1} =p_{1}\cdot\epsilon_{2}\ \epsilon_{3}\cdot\epsilon_{1},\] \[T_{2} =p_{2}\cdot\epsilon_{3}\ \epsilon_{1}\cdot\epsilon_{2},\] \[T_{3} =\epsilon_{2}\cdot\epsilon_{3}\ p_{3}\cdot\epsilon_{1},\] \[T_{4} =p_{3}\cdot\epsilon_{1}\,p_{1}\cdot\epsilon_{2}\,p_{2}\cdot \epsilon_{3}\,. \tag{12}\]
The form factors can be obtained from a Feynman diagram decomposition of the amplitude \(\mathcal{S}^{(i)}_{\mu\nu\rho}\) at any loop order by applying suitable projectors \(\mathcal{P}_{i}\) defined as
\[\sum_{pol}\mathcal{P}_{i}\,\mathcal{S}_{\mu\nu\rho}(g_{1},g_{2},g_{3})\epsilon_{ 1}^{\mu}\epsilon_{2}^{\nu}\epsilon_{3}^{\rho}=\mathcal{G}_{i}\,, \tag{10}\]
where polarization vectors satisfy the gauge-fixed polarization sum, with reference vectors defined above. The projectors can in turn be decomposed in terms of the dual of the tensor basis in (11):
\[\mathcal{P}_{i}=\sum_{j=1}^{4}c_{i}^{(j)}T_{j}^{\dagger}\,. \tag{11}\]
To work out the projectors explicitly, we insert the decomposition of the amplitude in terms of the tensor basis together with the projector ansatz (11) into the definition (10) and obtain the requirement
\[\sum_{j=1}^{4}c_{i}^{(j)}T_{j}^{\dagger}\sum_{k=1}^{4}\mathcal{G}_{k}T_{k} \overset{!}{=}\mathcal{G}_{i}\,, \tag{12}\]
which is satisfied when the matrix of coefficients \(c_{i}^{(j)}\) is the inverse of the Gram matrix \(M_{ij}=T_{i}^{\dagger}T_{j}\). In particular for the amplitude \(H\to ggg\) with external states in \(D\) dimensions [32], we get
\[\mathcal{P}_{1} =\frac{1}{s_{12}s_{13}(D-3)}(s_{23}\,T_{1}^{\dagger}-T_{4}^{ \dagger}),\] \[\mathcal{P}_{2} =\frac{1}{s_{12}s_{23}(D-3)}(s_{13}\,T_{2}^{\dagger}-T_{4}^{ \dagger}),\] \[\mathcal{P}_{3} =\frac{1}{s_{13}s_{23}(D-3)}(s_{12}\,T_{3}^{\dagger}-T_{4}^{ \dagger}),\] \[\mathcal{P}_{4} =\frac{1}{s_{12}s_{13}s_{23}(D-3)}(D\,T_{4}^{\dagger}-s_{12}T_{3} ^{\dagger}-s_{23}T_{1}^{\dagger}-s_{13}T_{2}^{\dagger})\,. \tag{13}\]
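As a consistency check, the coefficient matrix read off from the projectors above must be the inverse of the Gram matrix \(T_{j}^{\dagger}T_{k}\); the following sympy sketch reconstructs that Gram matrix by inversion and verifies its symmetry.

```python
import sympy as sp

s12, s13, s23, D = sp.symbols('s12 s13 s23 D', positive=True)

# Rows: coefficients of P_1, ..., P_4 in the dual basis {T_1^+, ..., T_4^+},
# read off from the projectors above.
C = sp.Matrix([
    [s23/(s12*s13), 0,             0,             -1/(s12*s13)],
    [0,             s13/(s12*s23), 0,             -1/(s12*s23)],
    [0,             0,             s12/(s13*s23), -1/(s13*s23)],
    [-1/(s12*s13),  -1/(s12*s23),  -1/(s13*s23),  D/(s12*s13*s23)],
]) / (D - 3)

# The projector requirement forces C times the Gram matrix of the tensor
# structures to be the identity, so the Gram matrix is simply C^{-1}.
gram = sp.simplify(C.inv())
sp.pprint(gram)
print("symmetric:", sp.simplify(gram - gram.T) == sp.zeros(4, 4))
```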
### Tensor decomposition for \(H\to q\bar{q}g\)
Similarly, it is easy to see that the most general tensor decomposition for two external spinors and a four-vector is
\[\mathcal{T}_{\mu}(q,\overline{q},g)\epsilon_{3}^{\mu} =\sum_{\Gamma}A_{\Gamma}\,\bar{u}(p_{1})\,\Gamma_{\mu}\,v(p_{2}) \epsilon_{3}^{\mu}\] \[+\sum_{j=1,2,3}\,\sum_{\Gamma^{\prime}}B_{\Gamma^{\prime}}\, \bar{u}(p_{1})\,\Gamma^{\prime}\,v(p_{2})\epsilon_{3}\cdot p_{j}\,, \tag{14}\]
where we sum over odd products of gamma matrices. In the first sum, all indices but one are contracted amongst each other or with external momenta while in the second, there are no indices left, and the polarization vector contracts with an external momentum vector.
In general, one might expect the length of the spinor chains to be bounded by the loop order. By simple enumeration, one can see that at tree level there can only be one gamma matrix, at one loop up to three gamma matrices and at two loops up to five. Nonetheless, it is easy
to see that with the momenta at hand, one cannot build any spinor chain with more than one Dirac \(\gamma\) matrix. Enforcing the transversality condition \(p_{3}\cdot\epsilon_{3}=0\) and gauge choice \(\epsilon_{3}\cdot p_{1}=0\), one is left with the decomposition
\[\mathcal{T}_{\mu}(q,\bar{q},g)\epsilon_{3}^{\mu}=\,\mathcal{F}_{1}\,\bar{u}(p_{1})\slashed{\epsilon}\,v(p_{2})+\mathcal{F}_{2}\,\bar{u}(p_{1})\slashed{p}_{3}\,v(p_{2})\,\epsilon_{3}\cdot p_{2}. \tag{28}\]
Hence we define the two tensor structures
\[T_{1} =\bar{u}(p_{1})\slashed{\epsilon}\,v(p_{2}),\] \[T_{2} =\bar{u}(p_{1})\slashed{p}_{3}\,v(p_{2})\,\epsilon_{3}\cdot p_{2}. \tag{29}\]
The form factors are extracted from the amplitude using the projectors \(\mathcal{P}_{i}\) with
\[\sum_{pol}\mathcal{P}_{i}\,\mathcal{T}_{\mu}(q,\bar{q},g)\epsilon_{3}^{\mu}= \mathcal{F}_{i}, \tag{30}\]
where polarization vector of the gluon satisfies the gauge-fixed polarization sum, with reference vector defined above. Following the strategy outlined in Subsection 3.1, we get for the two projectors
\[\mathcal{P}_{1} =\frac{1}{2s_{12}(D-3)}\Big{(}T_{1}^{\dagger}-\frac{1}{s_{23}}T_{ 2}^{\dagger}\Big{)},\] \[\mathcal{P}_{2} =\frac{1}{2s_{12}s_{23}(D-3)}\Big{(}\frac{D-2}{s_{23}}T_{2}^{ \dagger}-T_{1}^{\dagger}\Big{)}. \tag{31}\]
### Helicity amplitudes
From the tensors found above, we can easily obtain compact expressions for the relevant helicity amplitudes. We work in the 't Hooft-Veltman scheme, treating the external states as four-dimensional with fixed helicities. In massless QCD, both gluons and quarks have two helicity configurations, \(\lambda_{i}=\pm\), and the amplitudes can be written as
\[\begin{split}\mathcal{M}_{ggg}^{\lambda_{1}\lambda_{2}\lambda_{3}}&=\mathcal{S}_{\mu\nu\rho}(g_{1},g_{2},g_{3})\,\epsilon_{1,\lambda_{1}}^{\mu}(p_{1})\epsilon_{2,\lambda_{2}}^{\nu}(p_{2})\epsilon_{3,\lambda_{3}}^{\rho}(p_{3}),\\ \mathcal{M}_{q\bar{q}g}^{\lambda_{1}\lambda_{2}\lambda_{3}}&=\mathcal{T}_{\mu}(q_{\lambda_{1}},\bar{q}_{\lambda_{2}},g)\,\epsilon_{3,\lambda_{3}}^{\mu}(p_{3})\,.\end{split} \tag{32}\]
The two helicity states of a four-component massless spinor are projected out through
\[\psi_{L}=\frac{1}{2}\big{(}1-\gamma_{5}\big{)}\,\psi,\quad\psi_{R}=\frac{1}{2} \big{(}1+\gamma_{5}\big{)}\,\psi, \tag{33}\]
and we fix the spinor-helicity bracket representation of incoming fermions as
\[|p]=\psi_{L}(p)\,,\quad|p\rangle=\psi_{R}(p)\,, \tag{34}\]
and of incoming anti-fermions as
\[\langle p|=\bar{\psi}_{L}(p),\quad[p]=\bar{\psi}_{R}(p). \tag{35}\]
For massless vector bosons, incoming states of positive and negative helicity are given by
\[\epsilon_{+}^{\mu}(p;r)=-\frac{\left[r\,\gamma^{\mu}\,p\right\rangle}{\sqrt{2}\left[rp\right]}\,,\qquad\epsilon_{-}^{\mu}(p;r)=\frac{\left\langle r\,\gamma^{\mu}\,p\right]}{\sqrt{2}\left\langle rp\right\rangle}, \tag{36}\]
where the reference momentum \(r\) is an arbitrary light-like vector such that \(r\cdot p\neq 0\). Outgoing states have the same representation with switched helicities.
Let us start by considering the decay \(H\to ggg\). There are two independent helicity configurations, which we choose to be \(\mathcal{M}^{+++}_{ggg}\) and \(\mathcal{M}^{++-}_{ggg}\), while the other helicity amplitudes are obtained through parity conjugation and by relabelling of the gluon momenta. Starting from the decomposition (10) with the basis (11) and applying the definitions (23)-(24), we can cast the independent helicity amplitudes in terms of spinor products,
\[\mathcal{M}^{+++}_{ggg} =\alpha\frac{1}{\sqrt{2}}\frac{M_{H}^{4}}{\langle 1\,2\rangle \langle 2\,3\rangle\langle 3\,1\rangle}, \tag{25}\] \[\mathcal{M}^{++-}_{ggg} =\beta\frac{1}{\sqrt{2}}\frac{[1\,2]^{3}}{[2\,3][1\,3]}\,, \tag{26}\]
where the coefficients \(\alpha\) and \(\beta\) are simple linear combinations of the original form factors:
\[\alpha =-\frac{2s_{12}s_{13}\,\mathcal{G}_{1}+2s_{12}s_{23}\,\mathcal{G }_{2}+2s_{13}s_{23}\,\mathcal{G}_{3}+s_{12}s_{13}s_{23}\,\mathcal{G}_{4}}{2M _{H}^{4}}, \tag{27}\] \[\beta =-\frac{2s_{23}\,\mathcal{G}_{2}+s_{23}s_{13}\mathcal{G}_{4}}{2s _{12}}. \tag{28}\]
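These spinor structures are easy to validate numerically. The sketch below uses one common explicit construction of the bracket spinors (overall phases are convention-dependent, but the invariant \(\langle ij\rangle[ji]=s_{ij}\) is not) on an illustrative set of massless decay momenta.

```python
import numpy as np

def angle_spinor(p):
    # Holomorphic spinor of a massless p = (E, px, py, pz) with E > 0.
    pp = p[0] + p[3]
    return np.array([np.sqrt(pp + 0j), (p[1] + 1j * p[2]) / np.sqrt(pp + 0j)])

def ang(p, q):   # <pq>
    lp, lq = angle_spinor(p), angle_spinor(q)
    return lp[0] * lq[1] - lp[1] * lq[0]

def sqr(p, q):   # [pq], fixed so that <pq>[qp] = 2 p.q for real momenta
    return -np.conj(ang(p, q))

def dot(p, q):
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

# Massless three-particle configuration with vanishing total 3-momentum,
# i.e. the decaying Higgs at rest; the numerical values are arbitrary.
p1 = np.array([1.0, 0.3, 0.4, np.sqrt(0.75)])
p2 = np.array([1.2, -0.3, 0.5, -np.sqrt(1.2**2 - 0.34)])
p3 = -(p1 + p2)
p3[0] = np.sqrt(p3[1]**2 + p3[2]**2 + p3[3]**2)

for a, b in [(p1, p2), (p1, p3), (p2, p3)]:
    assert np.isclose(ang(a, b) * sqr(b, a), 2 * dot(a, b))

# |<12><23><31>|^2 = s12 s23 s13, the denominator structure of M^{+++}.
mhv = abs(ang(p1, p2) * ang(p2, p3) * ang(p3, p1)) ** 2
print(np.isclose(mhv, 8 * dot(p1, p2) * dot(p2, p3) * dot(p1, p3)))
```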
The helicity projectors for \(\alpha\) and \(\beta\) are built by replacing the \(\mathcal{G}_{i}\) with the corresponding form factor projectors from (22):
\[\mathcal{P}_{\alpha} =-\frac{1}{2M_{H}^{4}}\Big{(}2s_{12}s_{13}\mathcal{P}_{1}+2s_{12} s_{23}\,\mathcal{P}_{2}+2s_{13}s_{23}\,\mathcal{P}_{3}+s_{12}s_{13}s_{23}\, \mathcal{P}_{4}\Big{)}\] \[=\frac{1}{2(D-3)M_{H}^{4}}\Big{(}(6-D)T_{4}^{\dagger}-s_{23}T_{1 }^{\dagger}-s_{12}T_{3}^{\dagger}-s_{13}T_{2}^{\dagger}\Big{)}, \tag{29}\] \[\mathcal{P}_{\beta} =-\frac{2s_{23}\,\mathcal{P}_{2}+s_{23}s_{13}\mathcal{P}_{4}}{2s _{12}}\] \[=\frac{1}{2(D-3)s_{12}^{2}}\Big{(}s_{23}T_{1}^{\dagger}-s_{13}T_{ 2}^{\dagger}+s_{12}T_{3}^{\dagger}+(2-D)T_{4}^{\dagger}\Big{)}. \tag{30}\]
Let us now consider the other partonic process \(H\to q\bar{q}g\). In this case, there is only one independent helicity amplitude, which we choose to be \(\mathcal{M}^{LR+}_{q\bar{q}g}\). In this helicity configuration, the first tensor in (23) vanishes, while the second tensor yields
\[\mathcal{M}^{LR+}_{q\bar{q}g}=\gamma\frac{1}{\sqrt{2}}\frac{[2\,3]^{2}}{[1\,2 ]}, \tag{31}\]
where
\[\gamma=s_{12}\,\mathcal{F}_{2}. \tag{32}\]
Again, the helicity projector is easily obtained by replacing \(\mathcal{F}_{2}\) with the associated projector from (20),
\[\mathcal{P}_{\gamma}=s_{12}\,\mathcal{P}_{2}=\frac{1}{2s_{23}(D-3)}\Big{(} \frac{D-2}{s_{23}}T_{2}^{\dagger}-T_{1}^{\dagger}\Big{)}. \tag{33}\]
Just like full amplitudes in (14) and (15), the helicity amplitude coefficients \(\alpha\), \(\beta\), \(\gamma\) also have a perturbative expansion,
\[\Omega=\lambda\sqrt{4\pi\alpha_{s}}\,T_{\Omega}\,\Big{[}\Omega^{(0)}+\Big{(} \frac{\alpha_{s}}{2\pi}\Big{)}\,\Omega^{(1)}+\Big{(}\frac{\alpha_{s}}{2\pi} \Big{)}^{2}\,\Omega^{(2)}+\mathcal{O}(\alpha_{s}^{3})\Big{]} \tag{34}\]
for \(\Omega=\alpha,\beta,\gamma\). The overall colour factors for the two processes are \(T_{\alpha}=T_{\beta}=f^{a_{1}a_{2}a_{3}}\) and \(T_{\gamma}=T^{a_{3}}_{\ i,j}\). The tree-level helicity amplitudes are well-known and the coefficients evaluate to
\[\alpha^{(0)} =\beta^{(0)}=-1 \tag{3.31}\] \[\gamma^{(0)} =1. \tag{3.32}\]
## 4 Master integrals
The form factors \(\mathcal{F}_{i}\) and \(\mathcal{G}_{i}\), which relate to the helicity amplitudes via (3.24), (3.28), receive contributions from all relevant Feynman diagrams at a given perturbative order. All tree-level, one- and two-loop diagrams with \(H\) and \(ggg\) or \(q\bar{q}g\) external states are generated with standard QCD vertices and the effective Higgs interactions using QGRAF [57]. We shift each diagram to a kinematic crossing of one of the auxiliary topologies presented in Table 1 using Reduze2[58; 59]. After inserting Feynman rules and evaluating the Dirac and Lorentz algebra in FORM [60], the contribution of each diagram to the form factors in (3.5) and (3.12) can be written as a combination of scalar integrals of the form
\[I_{a_{1},\dots,a_{9}}=\int\bigg{(}\prod_{l=1}^{2}(-M_{H}^{2})^{-\epsilon}e^{\gamma_{E}\epsilon}\frac{\mathrm{d}^{D}k_{l}}{i\pi^{D/2}}\bigg{)}\prod_{i=1}^{9}D_{i}^{-a_{i}}\,, \tag{4.1}\]
with \(k_{l}\) the loop momenta, \(D_{i}\in\{P_{i}\}\cup\{N_{i}\}\) the internal propagators, and \(\gamma_{E}=0.577\dots\) the Euler-Mascheroni constant. The topology of each scalar integral is uniquely identified by the propagators which appear in the denominator (\(a_{i}\geq 1\)). This information can be used to compute the diagram's sector ID within one of the auxiliary topologies in Table 1 as a binary nine-bit number. The scalar integrals defined in (4.1) satisfy integration-by-parts (IBP) identities [33; 34], allowing them to be expressed in terms of a minimal set of so-called master integrals. Note that the physical measure in the bare amplitude is \(\mathrm{d}^{D}k/(2\pi)^{D}\) per
loop compared to the integration measure in (4.1), which then requires a simple conversion factor when inserting the solutions of the master integrals into the amplitude.

| Family PL: \(\{P_{i}\}\) | Family NPL: \(\{N_{i}\}\) |
| --- | --- |
| \(k_{1}\) | \(k_{1}\) |
| \(k_{2}\) | \(k_{2}\) |
| \(k_{1}-k_{2}\) | \(k_{1}-p_{1}\) |
| \(k_{1}-p_{1}\) | \(k_{2}-p_{1}\) |
| \(k_{2}-p_{1}\) | \(k_{1}+p_{3}\) |
| \(k_{1}-p_{1}-p_{2}\) | \(k_{2}+p_{3}\) |
| \(k_{2}-p_{1}-p_{2}\) | \(k_{1}+p_{2}+p_{3}\) |
| \(k_{1}-p_{1}-p_{2}-p_{3}\) | \(k_{1}-k_{2}\) |
| \(k_{2}-p_{1}-p_{2}-p_{3}\) | \(k_{1}-k_{2}+p_{2}\) |

Table 1: The two auxiliary topologies of momenta which label the propagators of every diagram appearing in the amplitudes. PL labels exclusively planar diagrams, whereas NPL accommodates also all non-planar sectors.
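A minimal sketch of this sector-ID bookkeeping, assuming the common convention in which bit \(i\) of the ID is set when the \(i\)-th propagator carries a positive power.

```python
def sector_id(exponents):
    """Sector ID of a scalar integral: bit i of the nine-bit number is
    set when the i-th propagator has a_i >= 1 (i.e. sits in the
    denominator); numerator powers (a_i <= 0) do not contribute."""
    return sum(1 << i for i, a in enumerate(exponents) if a >= 1)

# A t = 7 integral with propagators 1-7 in the denominator, propagator 8
# in the numerator and propagator 9 absent:
print(sector_id([1, 1, 1, 1, 1, 1, 1, -1, 0]))   # -> 127
# Squared propagators (dots) change the integral but not its sector:
print(sector_id([2, 1, 1, 1, 1, 1, 1, -1, 0]))   # -> 127 again
```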
To evaluate the master integrals, we use the method of differential equations [35; 36; 37; 38], augmented with a canonical basis [45] for the master integrals. In particular, we reduce all scalar integrals appearing in the amplitudes using Reduze2[59] and Kira[61; 62] directly to a canonical basis which involves 89 master integrals. Our basis is different from the one considered in [39; 40], but relations amongst the two sets of integrals can easily be established. There are 4 planar and 2 non-planar sectors with the maximum number of propagators, \(t=7\), depicted in Figure 1. 85 of the 89 IBP master integrals are subsectors of the non-trivial top sectors (d)-(f). They are kinematic crossings of the 16 planar and 8 non-planar topologies depicted in Figures 2 and 3, which will be explained in more detail after introducing the canonical basis.
Figure 1: The 4 planar and 2 non-planar sectors with \(t=7\), labelled with the propagators \(P_{i}\) and \(N_{i}\), respectively, as listed in Table 1. Their names indicate the integral family and sector ID. The sectors (d)-(f) contain new master integrals and are the focus of this Section. Sectors (a) and (b) are entirely reducible to master integrals found in the trees of the former three top sectors. Sector (c) can be reduced to masters from (d)-(f) and four additional integrals which are easily computed from one-loop results.

### Canonical basis

Multiple public packages exist for the derivation of a candidate canonical basis like CANONICA [63], Fuchsia [64], or DlogBasis [65]. We use DLogBasis to provide a list of candidate UV-finite integrals with unit leading singularities for the planar family. For the non-planar family, we constructed canonical candidates by studying the leading singularities of the relevant master integrals, following a loop-by-loop approach. Note that for all planar and some non-planar integrals, canonical candidates can be obtained just by rescaling a single integral in the sector by its maximal cut, i.e. a solution to the homogeneous part of its differential equation [66]. This can be confirmed by studying the maximal cut of the specific canonical integral in the Baikov representation [67]. The canonical bases sufficient for the computation of all integrals in the families PL and NPL are presented in the supplementary material. The amplitude, however, contains diagrams which produce integrals in kinematic crossings of the families in Table 1. To obtain a basis that is sufficient to represent the physical amplitude, we proceed from the lowest sectors upwards, applying crossings to the canonical integrals in PL and NPL and appending to our basis those canonical integrals whose reduction contains new masters. This yields a minimal set of crossed and uncrossed canonical master integrals, sufficient for physical applications, whose generic topologies are depicted in Figures 2 and 3. Note that we obtain results first in the Euclidean region, where all invariants are negative, \(s_{ij}<0\). For simplicity, we also set \(M_{H}^{2}=-1\) in explicit formulas below.
Up to now, we only considered the non-trivial top sectors (d)-(f) in Figure 1, but the amplitude requires the reduction of integrals from all six topologies. While integrals in (a) and (b) can be reduced to integrals considered earlier, (c) contains four new master integrals, \(I_{86},\ldots,I_{89}\) which need to be added to the basis. They are the integrals (the numbering refers to the canonical basis in the supplementary material):
\[I_{86}=\epsilon^{2}\,\big[\text{diagram with external legs }p_{1},p_{2},p_{3}\big], \tag{4.2}\]
\[I_{87}=-\epsilon^{3}(1-y-z)z\,\big[\text{diagram with external legs }p_{2},p_{3}\big] \tag{4.3}\]
and
\[I_{88}=I_{87}(p_{1}\leftrightarrow p_{2})\,,\qquad I_{89}=I_{87}(p_{2} \leftrightarrow p_{3})\,.\]
The full expressions up to \(\mathcal{O}(\epsilon^{6})\) for \(I_{86}\) as well as \(I_{87}\) and its two crossings are given in the supplementary material. In this way, one can complete the set of 89 master integrals sufficient to reduce any integral in this process. The full canonical basis is given in the supplementary material accompanying this paper.
### Solution of differential equations
For the purpose of computing the master integrals, we consider first the 16 integrals in the tree of top sector (d) in PL and the 36 integrals in the tree of the non-planar top sectors
(e), (f) in NPL. Separately for the two auxiliary topologies, we compute the derivatives of the candidate canonical combinations, and insert the IBP reduction to obtain differential equations in the following form
\[\frac{\partial}{\partial y}\vec{I}(y,z;\epsilon) =\epsilon\left(\frac{1}{y}A_{0}+\frac{1}{y-1}A_{1}+\frac{1}{y+z}A _{z}+\frac{1}{y-(1-z)}A_{1-z}\right)\vec{I}(y,z;\epsilon)\,, \tag{4.4}\] \[\frac{\partial}{\partial z}\vec{I}(y,z;\epsilon) =\epsilon\left(\frac{1}{z}B_{0}+\frac{1}{z-1}B_{1}+\frac{1}{z+y} B_{y}+\frac{1}{z-(1-y)}B_{1-y}\right)\vec{I}(y,z;\epsilon)\,, \tag{4.5}\]
where \(A_{i},B_{i}\) are sparse matrices of rational numbers. It is obvious from this form that the solutions for the canonical combinations can be expressed in terms of MPLs [46] with the alphabet \(\{y,z,y-1,z-1,y+z,1-y-z\}\), which are usually written out in terms of a fibration in either \(y\) or \(z\). We recall here that MPLs are defined as iterated integrals over rational functions
\[G(l_{1},...,l_{n};x)=\int_{0}^{x}\frac{dt}{t-l_{1}}G(l_{2},...,l_{n};t)\,,\quad G (\underbrace{0,...,0}_{n};x)=\frac{1}{n!}\log^{n}(x)\,, \tag{4.6}\]
Figure 2: Planar topologies which appear in the canonical basis. The dashed line is the massive leg, dots represent squared propagators. A propagator in brackets denotes an integral with this propagator in the numerator.
with \(G(x)=1\). In this context, \(n\) is referred to as the _transcendental weight_ of the polylogarithm.
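For orientation, the iterated-integral definition (4.6) can be checked numerically at low weight. The recursive integrator below is our own illustrative sketch (unrelated to the PolyLogTools setup used for the actual computation) and is only practical for small \(n\):

```python
import mpmath as mp

def G(letters, x):
    # Evaluate G(l1,...,ln; x) directly from the recursion (4.6);
    # the all-zero case is the closed form log^n(x)/n!.
    if not letters:
        return mp.mpf(1)
    if all(l == 0 for l in letters):
        n = len(letters)
        return mp.log(x)**n / mp.factorial(n)
    head, rest = letters[0], letters[1:]
    return mp.quad(lambda t: G(rest, t) / (t - head), [0, x])

x = mp.mpf('0.3')
print(G([1], x), mp.log(1 - x))          # G(1; x) = log(1 - x)
print(G([0, 1], x), -mp.polylog(2, x))   # G(0,1; x) = -Li_2(x)
```

The two printed pairs agree to quadrature precision, confirming the weight-one and weight-two cases of the definition.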
By construction, the method of differential equations cannot be used to compute purely one-scale integrals directly. These must instead be obtained from alternative methods and added to the system of differential equations. At two loops, there are five such two- and three-point functions:
\[I_{1} =\epsilon^{2}(1-y-z)\,\big{[}\text{diagram with external momentum }p_{1}\big{]}\,, \tag{4.7}\] \[I_{2} =-\epsilon^{2}\,\big{[}\text{diagram with external momenta }p_{1},p_{2}\big{]}\,, \tag{4.8}\] \[I_{6} =\epsilon^{3}(1-y-z)\,\big{[}\text{diagram with external momenta }p_{1},p_{2}\big{]}\,, \tag{4.9}\] \[I_{9} =\epsilon^{2}(1-y-z)^{2}\,\big{[}\text{diagram with external momenta }p_{1},p_{2}\big{]}\,, \tag{4.10}\] \[I_{68} =\epsilon^{4}z^{2}\,\big{[}\text{diagram with external momenta }p_{2},p_{3}\big{]}\,, \tag{4.11}\]
and their kinematic crossings. Analytical expressions for all these one-scale integrals are known in closed form in the dimensional regulator \(\epsilon\)[38, 68].
We approach the solution of the differential equations for the remaining integrals as follows. At each order \(n\) in \(\epsilon\), we consider the vector of master integrals \(\vec{I}^{(n)}(y,z)\) and choose to integrate first the equations in the variable \(y\). If the set of differential equations in the two variables is consistent, we obtain a partial solution which differs from the full solution \(\vec{I}^{(n)}(y,z)\) only by a function of the other variable, \(\vec{f}(z)\),
\[\vec{I}^{(n)}(y,z)=\vec{I}^{(n)}_{y}(y,z)+\vec{f}(z)\,. \tag{4.12}\]
The intermediate result is then substituted into the set of equations in \(z\)
\[\frac{\partial}{\partial z}\vec{I}^{(n)}(y,z)=\frac{\partial}{\partial z}\vec{I}^{(n)}_{y}(y,z)+\frac{\partial}{\partial z}\vec{f}(z)\stackrel{!}{=}B\vec{I}^{(n-1)}(y,z) \tag{4.13}\]
and solved for \(\vec{f}(z)\), which fixes the final solution up to a numerical constant
\[\vec{I}^{(n)}(y,z)=\vec{I}^{(n)}_{y}(y,z)+\vec{I}^{(n)}_{z}(z)+\vec{c}\,, \tag{4.14}\]
with,
\[\vec{I}^{(n)}_{z}(z)=\int^{z}dz^{\prime}\big{[}B\vec{I}^{(n-1)}(y,z^{\prime}) -\frac{\partial}{\partial z^{\prime}}\vec{I}^{(n)}_{y}(y,z^{\prime})\big{]}. \tag{4.15}\]
As stated above, (4.15) cannot depend on \(y\) and the spurious dependence must cancel from the right-hand side of the equation.
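The strategy (4.12)-(4.15) is easy to exercise on a toy system whose exact solution is known. The sketch below (our own illustration, not the setup used for the actual masters) integrates \(\partial_y I=\epsilon I/y\), \(\partial_z I=\epsilon I/z\) order by order in \(\epsilon\) and recovers \(I=(yz)^{\epsilon}\):

```python
import sympy as sp

y, z = sp.symbols('y z', positive=True)
eps_orders = [sp.Integer(1)]     # I^(0) = 1, boundary value at y = z = 1
for n in range(1, 4):
    Iy = sp.integrate(eps_orders[n-1]/y, y)                   # integrate the y-equation first, cf. (4.12)
    fz = sp.integrate(eps_orders[n-1]/z - sp.diff(Iy, z), z)  # fix f(z) from the z-equation, cf. (4.15)
    eps_orders.append(Iy + fz)   # the integration constants vanish at y = z = 1 here

for n, I in enumerate(eps_orders):
    target = (sp.log(y) + sp.log(z))**n / sp.factorial(n)     # = log^n(yz)/n!
    print(n, sp.simplify(sp.expand(I - target)))              # -> 0 at every order
```

Every order reproduces \(\log^{n}(yz)/n!\), and the spurious \(y\)-dependence cancels from the \(z\)-integration exactly as described above.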
### Fixing boundary conditions
The remaining numerical constants \(\vec{c}\) can in principle be fixed by evaluating the integrals at special kinematical points. We opted to obtain all boundary conditions without any additional computations simply by imposing a set of regularity conditions on the general solution (4.14). As suggested in [39; 40], a large set of boundary conditions can be obtained exploiting regularity of the master integrals at various pseudo-thresholds. This can be achieved in practice by multiplying the differential equations (4.4) and (4.5) by those letters which correspond to pseudo-thresholds of the integrals, and taking the limit as follows:
\[\text{left-hand side}: \lim_{x\to l_{i}}\left(\frac{\partial}{\partial x}\vec{I}\right)(x-l_{i})=0\,, \tag{4.16}\] \[\text{right-hand side}: \lim_{x\to l_{i}}\left(\epsilon A\vec{I}\right)(x-l_{i})=\lim_{x\to l_{i}}\epsilon\sum_{j}A_{j}\,\frac{x-l_{i}}{x-l_{j}}\,\vec{I}=\epsilon A_{i}\lim_{x\to l_{i}}\vec{I}. \tag{4.17}\]
Requiring the right-hand side to vanish yields non-trivial relations between the integrals in the limit \(x\to l_{i}\). Namely, if the rational factor in question appears in the homogeneous term of the differential equation for a given master integral, its value at a regular kinematic point can typically be related to other integrals in the same sector and its subtopologies. This approach is sufficient for the planar topology.
In order to impose these regularity conditions, we used PolyLogTools[69] to manipulate multiple polylogarithms up to weight 5, evaluate the required limits, and perform changes in the fibration basis. Beyond weight 5, we had to carry out the required fibrations ourselves, building on the implementation of differentiation and integration of MPLs in PolyLogTools. In particular, we differentiated the integrals with respect to the variable we intend to fibrate into and obtained linear combinations of MPLs of weight 5, which can be treated with automated routines. The result can subsequently be integrated back and expressed in the required form, up to an integration constant. All constants can be fixed by comparing the original and fibrated function at a kinematic point, and reconstructing their difference as a number of the appropriate weight using the PSLQ algorithm [70]. In
Figure 3: The non-planar topologies in the canonical basis.
practice, we find that our definition of the variables \(y\) and \(z\), (2.6), allows us to consider just the three limits \(y\to 1\), \(y\to 0\), and \(y\to-z\).
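For illustration, the PSLQ reconstruction step can be reproduced with mpmath on a toy constant; the target and the basis of weight-3 transcendentals below are our own choices, not the actual integration constants of the paper:

```python
import mpmath as mp
mp.mp.dps = 60

# Toy weight-3 constant standing in for an unknown boundary value
target = 7*mp.zeta(3)/8 - mp.pi**2*mp.log(2)/12
basis = [target, mp.zeta(3), mp.pi**2*mp.log(2), mp.log(2)**3]
print(mp.pslq(basis, tol=mp.mpf(10)**-40))  # -> [24, -21, 2, 0] up to overall sign
```

The returned integer vector encodes \(24\,c-21\,\zeta_{3}+2\,\pi^{2}\log 2=0\), i.e. it recovers the rational coefficients of the constant from its numerical value alone.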
In the NPL topology, integrals possess branch points when \(y\to 0\), \(z\to 0\) and \(1-y-z\to 0\). For this reason, the strategy outlined above can only be applied to a small number of letters, and one can show that these conditions are not enough to fix all remaining constants. Taking inspiration from [65; 71], we also consider the singular limits (i.e. genuine thresholds of the master integrals). In terms of the Mandelstam variables, these correspond to the additional limits \(s\to 0\), \(t\to 0\) and \(u\to 0\). Crucially, our canonical basis consists of UV-finite integrals. Hence, we only need to regulate IR divergences and can assume that \(\epsilon<0\). Under this condition, and keeping \(\epsilon\) fixed, if a linear combination of integrals develops a singular behaviour in one of the singular limits, this must correspond to a spurious UV-type divergence. Since no new UV divergences can appear when the kinematical invariants take special values, we may impose that such spurious divergences do not occur at the kinematic points where one of the letters vanishes. In this way, we obtain yet more equations between the boundary constants. Explicitly, the solution to the DE near the point \(y\to l_{i}\) is
\[\vec{I}(y,z;\epsilon)=\exp\left\{\epsilon\log\left(y-l_{i}\right)\lim_{y\to l_{i}}\left[(y-l_{i})A_{y}\right]\right\}\vec{I}\big{|}_{y=l_{i}}+\mathcal{O}(y-l_{i})\,. \tag{4.18}\]
The matrix exponential contains elements with terms of the type \((y-l_{i})^{a\epsilon}\). If \(a>0\), such an expression diverges in the limit \(y\to l_{i}\). Since all our integrals must be finite at this kinematic point, the constants \(\vec{I}\big{|}_{y=l_{i}}\) ought to take specific values such that these terms cancel.
We also studied the asymptotic behaviour of our master integrals with asy [72], which is implemented in FIESTA [73], and verified the \(\epsilon\)-dependence in the limit \(y\to l_{i}\). Note that in [40] it was necessary to integrate one order in \(\epsilon\) higher than needed, in order to enforce the non-appearance of spurious singularities at the previous order; this is not required in our approach. We checked our solutions for all top sector master integrals numerically against pySecDec [74] for several Euclidean points up to weight six and found perfect agreement.
In Appendix B, we present the computation of the one-loop master integrals to order \(\mathcal{O}(\epsilon^{4})\), which also serves as a simple example of some of the techniques discussed in this section.
## 5 UV renormalisation and IR regularisation
The bare helicity amplitudes defined in Section 3 contain ultraviolet (UV) as well as infrared (IR) divergences that manifest as poles in the Laurent expansion in the dimensional regulator \(\epsilon\). The former are treated in the \(\overline{\text{MS}}\) scheme by expressing the amplitudes in terms of the renormalized couplings, \(\alpha_{s}\equiv\alpha_{s}(\mu^{2})\) and \(\lambda\equiv\lambda(\mu^{2})\), evaluated at the renormalization scale \(\mu^{2}\). The resulting amplitudes still contain IR singularities, which will be cancelled analytically by those occurring in radiative processes of the same order [75; 76]. Their structure is universal and was originally determined up to two loops by Catani [77; 78]. These results were later systematised and extended to general processes and up to three loops in [79; 80; 81; 82; 83; 84; 85; 86; 87].
In this Section, we present the necessary steps and formulae to perform the UV renormalization and subtraction of the IR poles. This allows us to obtain the one-loop and
two-loop finite remainders, which we decompose according to their colour structure. In contrast to [29], where the IR subtraction was performed in the Catani scheme, we followed a subtraction scheme based on Soft-Collinear Effective Theory [82; 83], which can be more naturally extended to higher loops. In subsection 5.5 we provide conversion formulas between the two different schemes.
### Ultraviolet renormalization
We start by denoting all unrenormalized quantities with a superscript \(U\) and then replace the bare coupling \(\alpha^{U}\) with the renormalized strong coupling \(\alpha_{s}\equiv\alpha_{s}(\mu^{2})\), evaluated at the renormalization scale \(\mu^{2}\),
\[\alpha^{U}\mu_{0}^{2\epsilon}S_{\epsilon}=\alpha_{s}\mu^{2\epsilon}\bigg{[}1-\frac{\beta_{0}}{\epsilon}\bigg{(}\frac{\alpha_{s}}{2\pi}\bigg{)}+\bigg{(}\frac{\beta_{0}^{2}}{\epsilon^{2}}-\frac{\beta_{1}}{2\epsilon}\bigg{)}\bigg{(}\frac{\alpha_{s}}{2\pi}\bigg{)}^{2}+\mathcal{O}(\alpha_{s}^{3})\bigg{]}, \tag{5.1}\]
where \(S_{\epsilon}=(4\pi)^{\epsilon}\mathrm{e}^{-\epsilon\gamma_{E}}\) and \(\mu_{0}^{2}\) is the mass parameter in dimensional regularization introduced to maintain a dimensionless coupling in the bare QCD Lagrangian density. The explicit form of the first two \(\beta\)-function coefficients \(\beta_{0}\), \(\beta_{1}\) reads
\[\beta_{0} =\frac{11C_{A}}{6}-\frac{2T_{R}N_{F}}{3}, \tag{5.2}\] \[\beta_{1} =\frac{17C_{A}^{2}}{6}-\frac{5C_{A}T_{R}N_{F}}{3}-C_{F}T_{R}N_{F}, \tag{5.3}\]
with the QCD colour factors,
\[C_{A}=N,\quad C_{F}=\frac{N^{2}-1}{2N},\quad T_{R}=\frac{1}{2}. \tag{5.4}\]
The effective coupling \(\lambda\) is renormalized as follows,
\[\lambda^{U}=\lambda\bigg{[}1-\frac{\beta_{0}}{\epsilon}\bigg{(}\frac{\alpha_{s}}{2\pi}\bigg{)}+\bigg{(}\frac{\beta_{0}^{2}}{\epsilon^{2}}-\frac{\beta_{1}}{\epsilon}\bigg{)}\bigg{(}\frac{\alpha_{s}}{2\pi}\bigg{)}^{2}+\mathcal{O}(\alpha_{s}^{3})\bigg{]}. \tag{5.5}\]
The renormalized coefficients of the UV-finite but IR-divergent amplitudes can be written in terms of the \(i\)-loop contribution to the unrenormalized coefficients \(\Omega^{(i),\,U}\) as
\[\Omega^{(0)} =\Omega^{(0),\,U}, \tag{5.6}\] \[\Omega^{(1)} =S_{\epsilon}^{-1}\Omega^{(1),\,U}-\frac{3\beta_{0}}{2\epsilon}\Omega^{(0),\,U}, \tag{5.7}\] \[\Omega^{(2)} =S_{\epsilon}^{-2}\Omega^{(2),\,U}-\frac{5\beta_{0}}{2\epsilon}S_{\epsilon}^{-1}\Omega^{(1),\,U}-\bigg{(}\frac{5\beta_{1}}{4\epsilon}-\frac{15\beta_{0}^{2}}{8\epsilon^{2}}\bigg{)}\Omega^{(0),\,U}. \tag{5.8}\]
For the remainder of the paper, we will set \(\mu^{2}=\mu_{0}^{2}=M_{H}^{2}\) for simplicity.
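The structure of (5.6)-(5.8) can be verified symbolically. The sketch below is our own consistency check (with the \(S_{\epsilon}\) factors dropped): assuming the amplitude carries an overall factor \(\lambda\sqrt{\alpha_{s}}\) plus one explicit power of \(\alpha_{s}\) per loop, substituting the coupling replacements (5.1) and (5.5) reproduces the quoted \(\beta_{0}\) and \(\beta_{1}\) coefficients:

```python
import sympy as sp

a, eps, b0, b1 = sp.symbols('a epsilon beta0 beta1')   # a = alpha_s/(2*pi)
W0, W1, W2 = sp.symbols('Omega0 Omega1 Omega2')        # bare coefficients Omega^(l),U

z_alpha = 1 - b0/eps*a + (b0**2/eps**2 - b1/(2*eps))*a**2   # (5.1), S_eps dropped
z_lam   = 1 - b0/eps*a + (b0**2/eps**2 - b1/eps)*a**2       # (5.5)

bare = z_lam*sp.sqrt(z_alpha)*(W0 + (a*z_alpha)*W1 + (a*z_alpha)**2*W2)
ren  = sp.expand(sp.series(bare, a, 0, 3).removeO())

print(sp.simplify(ren.coeff(a, 1)))  # Omega1 - 3*beta0/(2*eps)*Omega0, cf. (5.7)
print(sp.simplify(ren.coeff(a, 2)))  # Omega2 - 5*beta0/(2*eps)*Omega1
                                     #   + (15*beta0**2/(8*eps**2) - 5*beta1/(4*eps))*Omega0, cf. (5.8)
```

Note that the check only closes with the \(\beta_{1}/\epsilon\) term in (5.5); a \(\beta_{1}/(2\epsilon)\) term there would fail to reproduce the \(5\beta_{1}/(4\epsilon)\) coefficient in (5.8).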
### Infrared factorization
Since the IR poles of \(l\)-loop amplitudes in gauge theories factorize in colour-space in terms of lower loop amplitudes, the IR poles can be subtracted multiplicatively as
\[\mathbf{\Omega}_{\text{finite}}(\{p\})=\lim_{\epsilon\to 0}\mathbf{\mathcal{Z}}(\{p\};\epsilon)\ \mathbf{\Omega}(\{p\};\epsilon). \tag{5.9}\]
Contrary to multiplicative renormalization of UV divergences, the bold notation in (5.9) indicates that, in general, \(\mathbf{\mathcal{Z}}\) and \(\mathbf{\Omega}\) are operators and vectors in colour space, respectively. The all-order nature of (5.9) makes this approach particularly advantageous for generalizations to higher orders in perturbation theory.
Solving a renormalization group equation for \(\mathbf{\mathcal{Z}}\), one finds
\[\mathbf{\mathcal{Z}}(\epsilon,\{p\},\mu)=\mathbb{P}\exp\Bigl{[}\int_{\mu}^{\infty}\frac{d\mu^{\prime}}{\mu^{\prime}}\mathbf{\Gamma}(\{p\},\mu^{\prime})\Bigr{]}=\sum_{l=0}^{\infty}\left(\frac{\alpha_{s}}{2\pi}\right)^{l}\mathbf{\mathcal{Z}}^{(l)}, \tag{5.10}\]
where \(\mathbb{P}\) is the path-ordering symbol, meaning that the colour operators are ordered from left to right in decreasing values of \(\mu^{\prime}\). As originally proposed by Catani [78], the anomalous-dimension matrix \(\mathbf{\Gamma}\) for amplitudes with \(n\) QCD partons up to two loops is entirely governed by the dipole colour correlations operator
\[\mathbf{\Gamma}(\{p\},\mu)=\mathbf{\Gamma}_{dipole}(\{p\},\mu), \tag{5.11}\]
\[\mathbf{\Gamma}_{dipole}(\{p\},\mu)=\sum_{1\leq i<j\leq n}\mathbf{T}_{i}^{a}\mathbf{T}_{j}^{a}\,\gamma^{\text{cusp}}(\alpha_{s})\log\Big{(}-\frac{\mu^{2}}{s_{ij}+i\eta}\Big{)}\,+\,\sum_{i=1}^{n}\gamma^{i}(\alpha_{s}), \tag{5.12}\]
where \(\gamma^{\text{cusp}}\) is the cusp anomalous dimension and \(\gamma^{i}\) is the anomalous dimension of the \(i\)-th external particle, the latter depending on the nature of the particle. The cusp anomalous dimension carries the information about overlapping soft and collinear divergences, while \(\gamma^{i}\) only involves collinear divergences associated to the \(i\)-th parton. The perturbative expansions up to two loops for the anomalous dimensions are listed in Appendix A. The coupling constant is evaluated at the renormalization scale \(\alpha_{s}=\alpha_{s}(\mu)\).
The colour operators \(\mathbf{T}_{i}^{a}\) are related to the \(SU(N)\) generators and their action on the \(i\)-th coloured parton is defined following the convention in [78] as:
\[(\mathbf{T}_{i}^{a})_{b_{i}c_{i}} =i\,f^{a}_{\;b_{i}c_{i}}\quad\text{for a gluon}\,, \tag{5.13}\] \[(\mathbf{T}_{i}^{a})_{l_{i}k_{i}} =+T^{a}_{l_{i}k_{i}}\quad\text{for a final(initial) state quark(anti-quark)}\,,\] \[(\mathbf{T}_{i}^{a})_{l_{i}k_{i}} =-T^{a}_{k_{i}l_{i}}\quad\text{for an initial(final) state quark(anti-quark)}\,,\]
where \(i\) labels the particle on which the operator is acting. Importantly, colour conservation can be rephrased as
\[\sum_{i}\mathbf{T}_{i}^{a}=0. \tag{5.14}\]
It follows from these definitions that the repeated action of one operator evaluates to a Casimir,
\[(\mathbf{T}_{i}^{a})^{2}=C_{i}\,, \tag{5.15}\]
where \(C_{i}=C_{A}\) if particle \(i\) is a gluon and \(C_{i}=C_{F}\) in the case of an (anti-)quark.
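For three coloured partons, colour conservation fixes all dipole products in terms of Casimirs, \(\mathbf{T}_{i}\!\cdot\!\mathbf{T}_{j}=(C_{k}-C_{i}-C_{j})/2\), which is what makes the dipole operator effectively diagonal for the amplitudes considered here. A tiny symbolic check (our own sketch) makes this explicit:

```python
import sympy as sp

C1, C2, C3 = sp.symbols('C1 C2 C3')
# (T1+T2+T3)^2 = 0 and Ti^2 = Ci give, e.g., 2*T1.T2 = T3^2 - T1^2 - T2^2
T12, T13, T23 = (C3 - C1 - C2)/2, (C2 - C1 - C3)/2, (C1 - C2 - C3)/2

CA, CF = sp.symbols('C_A C_F')
print([d.subs({C1: CA, C2: CA, C3: CA}) for d in (T12, T13, T23)])  # ggg: all -C_A/2
print(T12.subs({C1: CF, C2: CF, C3: CA}))                           # q-qbar pair: C_A/2 - C_F
```

The first line is the origin of the \(-\frac{C_{A}}{2}(L_{12}+L_{23}+L_{13})\) structure in (5.26); the second reproduces the \(-C_{F}L_{12}\) plus \(-\frac{C_{A}}{2}(-L_{12}+L_{23}+L_{13})\) split in (5.27).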
We also define the expansions,
\[\mathbf{\Gamma}_{dipole}=\sum_{l=0}^{\infty}\mathbf{\Gamma}_{l}\,\left(\frac{\alpha_{s}}{2\pi}\right)^{l+1},\qquad\Gamma^{\prime}=\frac{\partial\mathbf{\Gamma}_{dipole}}{\partial\log(\mu)}=\sum_{l=0}^{\infty}\Gamma^{\prime}_{l}\,\left(\frac{\alpha_{s}}{2\pi}\right)^{l+1}, \tag{5.16}\]
where one can drop the bold notation in the derivative because the resulting operator is always diagonal in colour space:
\[\Gamma^{\prime}=-\gamma^{\text{cusp}}(\alpha_{s})\sum_{i}C_{i}. \tag{5.17}\]
In our specific case, expanding (5.9) up to two loops, IR divergences in the renormalized two-loop amplitudes can be expressed in terms of the renormalized tree and one-loop amplitudes multiplied by appropriate operators
\[\mathbf{\Omega}^{(0)}_{\text{finite}} =\mathbf{\Omega}^{(0)}\,, \tag{5.18}\] \[\mathbf{\Omega}^{(1)}_{\text{finite}} =\mathbf{\Omega}^{(1)}-\mathbf{I}^{(1)}_{\Omega}\,\mathbf{\Omega}^{(0)}\,, \tag{5.19}\] \[\mathbf{\Omega}^{(2)}_{\text{finite}} =\mathbf{\Omega}^{(2)}-\mathbf{I}^{(2)}_{\Omega}\,\mathbf{\Omega}^{(0)}-\mathbf{I}^{(1)}_{\Omega}\,\mathbf{\Omega}^{(1)}\,, \tag{5.20}\]

with the subtraction operators \(\mathbf{I}^{(l)}_{\Omega}\) fixed by the coefficients \(\mathbf{\mathcal{Z}}^{(l)}\) of the expansion (5.10).
Since colour conservation reduces all dipole products for three coloured partons to Casimirs, the colour space is effectively one-dimensional for the amplitudes at hand. Accordingly, we can drop the bold notation, and we find for the three helicity coefficients \(\alpha,\beta,\gamma\):
\[\Gamma^{\alpha}_{n} =\Gamma^{\beta}_{n}=-\frac{C_{A}}{2}\Big{(}L_{12}+L_{23}+L_{13} \Big{)}\gamma^{\rm cusp}_{n}+3\gamma^{g}_{n}, \tag{5.26}\] \[\Gamma^{\gamma}_{n} =-C_{F}\,L_{12}\,\gamma^{\rm cusp}_{n}-\frac{C_{A}}{2}\Big{(}-L_{ 12}+L_{23}+L_{13}\Big{)}\gamma^{\rm cusp}_{n}+2\gamma^{q}_{n}+\gamma^{g}_{n}, \tag{5.27}\]
with
\[L_{ij}=\log\Big{(}-\frac{\mu^{2}}{s_{ij}+i\eta}\Big{)} \tag{5.28}\]
and the anomalous dimension coefficients defined as in Appendix A.
We stress that the logarithms above develop an imaginary part whenever the corresponding invariant is positive; which invariants are positive depends on the kinematical region considered.
### Results for the helicity amplitudes
We computed the renormalized amplitudes for the decay process up to order \(\epsilon^{4}\) at one loop and up to order \(\epsilon^{2}\) at two loops. Working in the SCET subtraction scheme, we derived the finite remainder for all the helicity amplitudes. Decomposed according to their colour structure, they read
\[\Omega^{(1)}_{\rm finite} =\bigg{(}N\ A^{(1)}_{\Omega}+\frac{1}{N}\ B^{(1)}_{\Omega}+N_{F} C^{(1)}_{\Omega}\bigg{)}, \tag{5.29}\] \[\Omega^{(2)}_{\rm finite} =\bigg{(}N^{2}A^{(2)}_{\Omega}+N^{0}B^{(2)}_{\Omega}+\frac{1}{N^{ 2}}C^{(2)}_{\Omega}+\frac{N_{F}}{N}D^{(2)}_{\Omega}+NN_{F}E^{(2)}_{\Omega}+N_{ F}^{2}F^{(2)}_{\Omega}\bigg{)}. \tag{5.30}\]
The same structure holds for the renormalized amplitudes, though the respective coefficients will still contain poles in \(\epsilon\).
In the supplementary material, we provide the coefficients of the renormalized amplitudes for the decay processes and of the finite remainder for the Higgs decay kinematics as well as for all crossings [56] relevant to \(H\)+jet production processes.
### Conversion to the Catani scheme
The two-loop helicity amplitudes for the decay of a Higgs boson into three partons were first computed in [29]. The authors obtained the finite remainder by subtracting the IR singularities according to the original Catani prescription [78]. Here we give the conversion rules between the two subtraction schemes, which also served as a cross-check of our results.
Following closely the notation of [29], we write the subtraction operators \({\bf I}_{\Omega}\) of (5.19) in the Catani scheme as
\[{\bf I}^{\rm C,(1)}_{\alpha} ={\bf I}^{\rm C,(1)}_{\beta}=-\frac{{\rm e}^{\epsilon\gamma_{E}}}{2\Gamma(1-\epsilon)}\bigg{[}N\bigg{(}\frac{1}{\epsilon^{2}}+\frac{\beta_{0}}{N\epsilon}\bigg{)}\big{(}{\sf S}_{12}+{\sf S}_{23}+{\sf S}_{13}\big{)}\bigg{]},\] \[{\bf I}^{\rm C,(1)}_{\gamma} =-\frac{{\rm e}^{\epsilon\gamma_{E}}}{2\Gamma(1-\epsilon)}\bigg{[}N\bigg{(}\frac{1}{\epsilon^{2}}+\frac{3}{4\epsilon}+\frac{\beta_{0}}{2N\epsilon}\bigg{)}\big{(}{\sf S}_{23}+{\sf S}_{13}\big{)}-\frac{1}{N}\bigg{(}\frac{1}{\epsilon^{2}}+\frac{3}{2\epsilon}\bigg{)}{\sf S}_{12}\bigg{]}, \tag{5.31}\]
with,
\[\mathsf{S}_{ij}=\bigg{(}-\frac{\mu^{2}}{s_{ij}}\bigg{)}^{\epsilon}. \tag{5.32}\]
The second-order operator can be built starting from the one-loop operator as
\[\mathbf{I}_{\Omega}^{\text{C},(2)}= -\frac{1}{2}\mathbf{I}_{\Omega}^{\text{C},(1)}(\epsilon)\,\mathbf{I}_{\Omega}^{\text{C},(1)}(\epsilon)-\frac{\beta_{0}}{\epsilon}\mathbf{I}_{\Omega}^{\text{C},(1)}(\epsilon)\] \[+\text{e}^{-\epsilon\gamma_{E}}\frac{\Gamma(1-2\epsilon)}{\Gamma(1-\epsilon)}\bigg{(}\frac{\beta_{0}}{\epsilon}+K\bigg{)}\mathbf{I}_{\Omega}^{\text{C},(1)}(2\epsilon)+\mathbf{H}_{\Omega}^{(2)}(\epsilon)\,, \tag{5.33}\]
where we introduced the constant
\[K=\bigg{(}\frac{67}{18}-\frac{\pi^{2}}{6}\bigg{)}C_{A}-\frac{10}{9}T_{R}N_{F}. \tag{5.34}\]
The remaining term in (5.33) involves the operator \(\mathbf{H}_{\Omega}^{(2)}(\epsilon)\) and produces only a single pole in \(\epsilon\). Its explicit form is
\[\mathbf{H}_{\Omega}^{(2)}(\epsilon)=\frac{\text{e}^{\epsilon\gamma_{E}}}{4\epsilon\Gamma(1-\epsilon)}H_{\Omega}^{(2)}. \tag{5.35}\]
The constant \(H_{\Omega}^{(2)}\) is renormalization scheme and process dependent and in our case it reads
\[H_{\alpha}^{(2)} =H_{\beta}^{(2)}=3H_{g}^{(2)}, \tag{5.36}\] \[H_{\gamma}^{(2)} =2H_{q}^{(2)}+H_{g}^{(2)}, \tag{5.37}\]
where in the \(\overline{\text{MS}}\) scheme the constants \(H_{q}^{(2)}\), \(H_{g}^{(2)}\) are
\[H_{q}^{(2)} =\bigg{(}\frac{7}{4}\zeta_{3}+\frac{409}{864}-\frac{11\pi^{2}}{96}\bigg{)}\,N^{2}+\bigg{(}-\frac{1}{4}\zeta_{3}-\frac{41}{108}-\frac{\pi^{2}}{96}\bigg{)}+\bigg{(}-\frac{3}{2}\zeta_{3}-\frac{3}{32}+\frac{\pi^{2}}{8}\bigg{)}\,\frac{1}{N^{2}}\] \[+\bigg{(}\frac{\pi^{2}}{48}-\frac{25}{216}\bigg{)}\,\frac{(N^{2}-1)N_{F}}{N}\;, \tag{5.38}\] \[H_{g}^{(2)} =\bigg{(}\frac{1}{2}\zeta_{3}+\frac{5}{12}+\frac{11\pi^{2}}{144}\bigg{)}\,N^{2}+\frac{5}{27}\,N_{F}^{2}+\bigg{(}-\frac{\pi^{2}}{72}-\frac{89}{108}\bigg{)}\,NN_{F}-\frac{N_{F}}{4N}. \tag{5.39}\]
When describing the SCET subtraction scheme, we pointed out that the subtraction operators have no finite \(\mathcal{O}(\epsilon^{0})\) contribution.
In contrast, the Catani operators contain coefficients at higher order in the dimensional regulator, generated by the \(\epsilon\) expansion of the resummed coefficient defined in Eq. (5.32). The tree-level amplitude is finite. One-loop amplitudes have poles starting from order \(\mathcal{O}(1/\epsilon^{2})\) and two-loop amplitudes have poles starting from order \(\mathcal{O}(1/\epsilon^{4})\). We indicate with \(\mathbf{\Omega}_{n}^{(l)}\), \(\mathbf{I}_{n}^{\text{SCET},(l)}\) and \(\mathbf{I}_{n}^{\text{C},(l)}\) the coefficients of order \(\epsilon^{n}\) of the renormalized amplitude, the SCET operator and the Catani operator, respectively.
From the subtraction formulae of the two schemes, it is easy to obtain the following conversion rules for the finite remainders
\[\mathbf{\Omega}_{\text{finite}}^{\text{SCET},(0)}-\mathbf{\Omega}_{\text{finite}}^{\text{C},(0)} =0, \tag{5.40}\] \[\mathbf{\Omega}_{\text{finite}}^{\text{SCET},(1)}-\mathbf{\Omega}_{\text{finite}}^{\text{C},(1)} =\mathbf{I}_{0}^{\text{C},(1)}\mathbf{\Omega}^{(0)}, \tag{5.41}\] \[\mathbf{\Omega}_{\text{finite}}^{\text{SCET},(2)}-\mathbf{\Omega}_{\text{finite}}^{\text{C},(2)} =\mathbf{I}_{0}^{\text{C},(2)}\mathbf{\Omega}^{(0)}+\mathbf{I}_{2}^{\text{C},(1)}\mathbf{\Omega}_{-2}^{(1)}+\mathbf{I}_{1}^{\text{C},(1)}\mathbf{\Omega}_{-1}^{(1)}+\mathbf{I}_{0}^{\text{C},(1)}\mathbf{\Omega}_{0}^{(1)}. \tag{5.42}\]
In deriving the above rules, we made use of the fact that \(\mathbf{I}^{\mathbf{C},(1)}\) and \(\mathbf{I}^{\text{SCET},(1)}\) have the same pole structure. Consequently, terms multiplying \((\mathbf{I}^{\mathbf{C},(1)}_{-1}-\mathbf{I}^{\text{SCET},(1)}_{-1})\) or \((\mathbf{I}^{\mathbf{C},(1)}_{-2}-\mathbf{I}^{\text{SCET},(1)}_{-2})\) in (5.42) vanish. This can be easily understood by inspecting the origin of the poles in the one-loop cancellation in (5.19).
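The mechanics behind these rules can be made explicit with a small symbolic check. The sketch below is our own one-loop illustration, where the SCET operator is modelled as the pure pole part of a Catani operator with generic Laurent coefficients:

```python
import sympy as sp

e = sp.symbols('epsilon')

def laurent(name, lo, hi):
    # generic Laurent polynomial sum_n name_n * eps^n
    return sum(sp.Symbol(f'{name}{n}') * e**n for n in range(lo, hi + 1))

W0  = sp.Symbol('W0')                  # tree-level amplitude (finite)
W1  = laurent('W1_', -2, 2)            # renormalized one-loop amplitude
IC1 = laurent('IC1_', -2, 2)           # Catani I^(1): poles, finite and higher terms
IS1 = sum(IC1.coeff(e, n) * e**n for n in (-2, -1))   # SCET operator: same poles only

fin = lambda expr: sp.expand(expr).coeff(e, 0)        # epsilon^0 coefficient
lhs = fin(W1 - IS1*W0) - fin(W1 - IC1*W0)             # difference of finite remainders
print(sp.simplify(lhs - IC1.coeff(e, 0)*W0))          # -> 0, cf. (5.41)
```

The difference of the finite remainders collapses to the \(\epsilon^{0}\) coefficient of the Catani operator acting on the tree amplitude, exactly as in (5.41); the two-loop rule (5.42) follows from the same bookkeeping one order higher.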
For both the decay and the production kinematics, we verified that the result given in [29] is correctly reproduced converting our finite remainder with the above rules.
## 6 Analytic Continuation
So far, we have described our calculation making explicit reference to decay processes, for which all the kinematic invariants are positive. We considered the decay of a Higgs boson into three gluons, \(H\to ggg\), and into a quark-antiquark pair and gluon, \(H\to q\bar{q}g\). In view of applications to LHC physics, we are interested in the production regions, in which a Higgs is produced together with a parton: \(gg\to Hg\), \(qg\to Hq\), \(\bar{q}g\to H\bar{q}\) and \(q\bar{q}\to Hg\).
The general strategy for performing the analytic continuation for the MPLs appearing in \(2\to 2\) scattering involving 4-point functions with one external off-shell leg and massless propagators was outlined in detail in [56]. Our aim is to describe how this strategy can be applied to the process at hand. Referring to Fig. 1 of reference [56] and using the same labels for the various kinematic regions, the goal is to find a procedure to analytically continue MPLs from the decay region (1a) to the three production regions, (2a), (3a), (4a). Whenever a particle is crossed from the initial state to the final state (or vice versa), two of the invariants become negative and one remains positive, representing the centre-of-mass energy of the incoming partons. We recall here that results in the decay region (1a) are expressed as MPLs of the variables \(y\) and \(z\) defined in (2.6), which fulfil the constraints \(0<z<1\,,\ 0<y<1-z\). Within these bounds, our set of MPLs are real and no branching point is crossed.
To describe the analytic continuation to the scattering kinematics, let us consider the case of region (3a). In (3a), the kinematic constraints are \(z<0\,,\ 1-z<y<+\infty\) and MPLs must be evaluated across a branch cut, developing an imaginary part. Physically, the particle with momentum \(p_{2}\) is crossed and the process is \(p_{1}+p_{3}\to p_{2}+p_{4}\). Crucially, there exists a change of variables which maps region (3a) _linearly_ back into the decay region (1a) and can be implemented analytically on our MPLs. In fact, by defining
\[v\equiv 1/y\,,\qquad\quad u\equiv-z/y\,, \tag{6.1}\]
the new variables \(u\) and \(v\) satisfy again \(0<u<1\,,\ 0<v<1-u\). By re-expressing the MPLs in terms of \(u\) and \(v\), the imaginary part can be made explicit in terms of multiple zeta values and real-valued MPLs. The same manipulations can be performed in regions (2a) and (4a) with different definitions of the \((u,v)\) variables. In general, the \(v\) variable is the reciprocal of the centre-of-mass energy and so its definition depends on which particle is crossed from the final to the initial state.
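The statement that (6.1) maps region (3a) back into the decay region is elementary to verify numerically; the short sketch below (our own check, with arbitrary sampling bounds) confirms it on random points:

```python
import random

random.seed(1)
for _ in range(10_000):
    z = -random.uniform(0.001, 50.0)            # region (3a): z < 0
    y = (1 - z) + random.uniform(0.001, 50.0)   # and 1 - z < y
    v, u = 1/y, -z/y                            # the change of variables (6.1)
    assert 0 < u < 1 and 0 < v < 1 - u
print("all sampled (3a) points map into the decay region (1a)")
```

Analytically, \(y>1-z>1\) gives \(0<v<1\), while \(v<1-u\) is equivalent to \(y+z>1\), i.e. precisely the defining condition of region (3a).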
Instead of performing the analytic continuation in all the three regions (2a), (3a) and (4a), we found it simpler to first consider suitable crossings of the amplitude in the decay
region and in a second step continue these crossed amplitudes only to the region (3a). Explicitly, for the case of three gluons, the two independent amplitudes in the decay kinematics are
\[\mathcal{M}^{+++}: H\to g_{+}(p_{1})+g_{+}(p_{2})+g_{+}(p_{3})\,, \tag{6.2}\] \[\mathcal{M}^{++-}: H\to g_{+}(p_{1})+g_{+}(p_{2})+g_{-}(p_{3})\,. \tag{6.3}\]
In the production region, there are eight different helicity configurations. Thanks to parity symmetry, we can limit ourselves to considering just four of them (see the left column of Table 2) and relate them to the other four (right column). Note that in continuing to the region (3a), the momenta \(p_{1}\) and \(p_{3}\) are always in the initial state. We computed the first three amplitudes with a combination of crossings and analytic continuation as follows (an extra helicity flip due to time reversal is always understood after continuation to (3a)):
\[\mathcal{M}^{+++}_{ggg} \xrightarrow{\text{to (3a)}}\,\mathcal{M}^{--,+}_{gg,g}\,, \tag{6.4}\] \[\mathcal{M}^{++-}_{ggg} \xrightarrow{\text{to (3a)}}\,\mathcal{M}^{-+,+}_{gg,g}\,, \tag{6.5}\] \[\mathcal{M}^{++-}_{ggg} \xrightarrow{p_{2}\leftrightarrow p_{3}}\mathcal{M}^{+-+}_{ggg} \xrightarrow{\text{to (3a)}}\,\mathcal{M}^{--,-}_{gg,g}\,. \tag{6.6}\]
The fourth amplitude, \(\mathcal{M}^{+-,+}_{gg,g}\), can be derived from \(\mathcal{M}^{-+,+}_{gg,g}\) by the crossing \(p_{1}\leftrightarrow p_{3}\), which implies \(v\to v\) and \(u\to 1-u-v\). Since no branch cut is crossed under this transformation, no new analytic continuation is needed and we can conclude that
\[\mathcal{M}^{+-,+}_{gg,g}(v,u)=\mathcal{M}^{-+,+}_{gg,g}(v,u)|_{u\to 1-u-v}\,. \tag{6.7}\]
Let us consider now the decay into a quark-antiquark pair and a gluon:
\[\mathcal{M}^{LR+}:\qquad H\to q_{L}(p_{1})+\bar{q}_{R}(p_{2})+g_{+}(p_{3})\,. \tag{6.8}\]
In the production region there are 12 non-vanishing helicity configurations, but only 3 of them need to be computed; the others are related by parity and charge conjugation. Our choice of the independent configurations is given in the left column of Table 3.
\begin{table}
\begin{tabular}{l l} Process & Parity Related \\ \hline \(\mathcal{M}^{--,+}_{gg,g}\) : \(g_{-}(p_{1})+g_{-}(p_{3})\to H+g_{+}(p_{2})\) & \(g_{+}(p_{1})+g_{+}(p_{3})\to H+g_{-}(p_{2})\) \\ \(\mathcal{M}^{-+,+}_{gg,g}\) : \(g_{-}(p_{1})+g_{+}(p_{3})\to H+g_{+}(p_{2})\) & \(g_{+}(p_{1})+g_{-}(p_{3})\to H+g_{-}(p_{2})\) \\ \(\mathcal{M}^{--,-}_{gg,g}\) : \(g_{-}(p_{1})+g_{-}(p_{3})\to H+g_{-}(p_{2})\) & \(g_{+}(p_{1})+g_{+}(p_{3})\to H+g_{+}(p_{2})\) \\ \(\mathcal{M}^{+-,+}_{gg,g}\) : \(g_{+}(p_{1})+g_{-}(p_{3})\to H+g_{+}(p_{2})\) & \(g_{-}(p_{1})+g_{+}(p_{3})\to H+g_{-}(p_{2})\) \\ \end{tabular}
\end{table}
Table 2: Different kinematical crossings of the \(H\to ggg\) amplitudes. The comma is used to separate the helicities of initial and final state partons.
The three amplitudes were computed with a combination of crossings and analytic continuation as follows (an additional helicity flip due to time reversal is again understood after continuation to (3a)),
\[\mathcal{M}^{LR+}_{q\bar{q}g} \xrightarrow{\text{to (3a)}}\,\mathcal{M}^{R-,R}_{qg,q}\,, \tag{6.9}\] \[\mathcal{M}^{LR+}_{q\bar{q}g} \xrightarrow{p_{1}\leftrightarrow p_{2}}\mathcal{M}^{RL+}_{\bar{q}qg} \xrightarrow{\text{to (3a)}}\,\mathcal{M}^{L-,L}_{\bar{q}g,\bar{q}}\,, \tag{6.10}\] \[\mathcal{M}^{LR+}_{q\bar{q}g} \xrightarrow{p_{2}\leftrightarrow p_{3}}\mathcal{M}^{L+R}_{qg\bar{q}} \xrightarrow{\text{to (3a)}}\,\mathcal{M}^{RL,+}_{q\bar{q},g}\,. \tag{6.11}\]
Finally, we stress that each crossing must also be applied to the spinor prefactors in (3.22). We have cross-checked all amplitudes in all helicity configurations against previously published results [29] at the level of the finite remainder up to \(\mathcal{O}(\epsilon^{0})\) and found perfect agreement. Conventions and normalization of helicity amplitudes have been fixed by numerical evaluation of our result, in the \(q\bar{q}g\) channel, using OpenLoops 2[88].
## 7 Conclusions
In this paper, we presented the calculation of the two-loop corrections to the helicity amplitudes for the \(H\to ggg\) and \(H\to q\bar{q}g\) decays up to order \(\mathcal{O}(\epsilon^{2})\) in the large top-mass Higgs effective field theory. These amplitudes constitute the first missing ingredient towards the calculation of \(H\)+jet production to N\({}^{3}\)LO at the LHC, as they are required to properly define the finite remainder of the corresponding three-loop virtual corrections. We followed a standard approach to compute the helicity amplitudes. We started by decomposing the amplitude in a basis of independent tensor structures and used the spinor-helicity formalism to express the helicity amplitudes as linear combinations of the corresponding scalar form factors. Next, we derived a canonical basis for the relevant master integrals and, through the method of differential equations, provided their solution up to weight six in MPLs. We
\begin{table}
\begin{tabular}{l l} Process & C, P, CP Related \\ \hline \(\mathcal{M}^{R-,R}_{qg,q}\) : \(q_{R}(p_{1})+g_{-}(p_{3})\to H+q_{R}(p_{2})\) & \(\bar{q}_{L}(p_{1})+g_{+}(p_{3})\to H+\bar{q}_{L}(p_{2})\), \\ & \(q_{L}(p_{1})+g_{+}(p_{3})\to H+q_{L}(p_{2})\), \\ & \(\bar{q}_{R}(p_{1})+g_{-}(p_{3})\to H+\bar{q}_{R}(p_{2})\) \\ \(\mathcal{M}^{L-,L}_{\bar{q}g,\bar{q}}\) : \(\bar{q}_{L}(p_{1})+g_{-}(p_{3})\to H+\bar{q}_{L}(p_{2})\) & \(q_{R}(p_{1})+g_{+}(p_{3})\to H+q_{R}(p_{2})\), \\ & \(q_{L}(p_{1})+g_{-}(p_{3})\to H+q_{L}(p_{2})\), \\ & \(\bar{q}_{R}(p_{1})+g_{+}(p_{3})\to H+\bar{q}_{R}(p_{2})\) \\ \(\mathcal{M}^{RL,+}_{q\bar{q},g}\) : \(q_{R}(p_{1})+\bar{q}_{L}(p_{3})\to H+g_{+}(p_{2})\) & \(q_{L}(p_{1})+\bar{q}_{R}(p_{3})\to H+g_{-}(p_{2})\), \\ & \(\bar{q}_{R}(p_{1})+q_{L}(p_{3})\to H+g_{+}(p_{2})\), \\ & \(\bar{q}_{L}(p_{1})+q_{R}(p_{3})\to H+g_{-}(p_{2})\) \\ \end{tabular}
\end{table}
Table 3: Different kinematic crossings of the \(H\to q\bar{q}g\) amplitudes. The comma is used to separate the helicities of initial and final state partons.
verified that our results for the finite remainder of the amplitude match the literature [29], and analytically continued the helicity amplitudes to all kinematic regions. The infrared structure of the result was inspected in the frameworks of SCET and the Catani infrared factorization formula. These results mark the first step towards computing the N\({}^{3}\)LO corrections to \(H\)+jet production.
This work was supported in part by the Excellence Cluster ORIGINS funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094-390783311, by the Swiss National Science Foundation (SNF) under contract 200020-204200, and by the European Research Council (ERC) under the European Union's research and innovation programme grant agreements 949279 (ERC Starting Grant HighPHun) and 101019620 (ERC Advanced Grant TOPUP).
## Appendix A Anomalous dimensions
In this appendix, we give the perturbative coefficients for the cusp anomalous dimension \(\gamma^{\rm cusp}\), the quark anomalous dimension \(\gamma_{q}\) and the gluon anomalous dimension \(\gamma_{g}\) up to two loops, \(\mathcal{O}(\alpha_{s}^{2})\). The perturbative expansion reads
\[\gamma^{\rm cusp}=\sum_{n=0}^{\infty}\gamma_{n}^{\rm cusp}\Big{(}\frac{\alpha_{s}}{2\pi}\Big{)}^{n+1}\,,\qquad\gamma^{i}=\sum_{n=0}^{\infty}\gamma_{n}^{i}\Big{(}\frac{\alpha_{s}}{2\pi}\Big{)}^{n+1} \tag{A.1}\]
with \(i=q,\overline{q},g\).
The cusp anomalous dimension was computed at two loops in [89]
\[\gamma_{0}^{\rm cusp} =8,\] \[\gamma_{1}^{\rm cusp} =\Big{(}\frac{1072}{9}-\frac{16\pi^{2}}{3}\Big{)}C_{A}-\frac{160}{9}N_{F}\,, \tag{A.2}\]
while for the (anti-)quark anomalous dimension we have [90; 91],
\[\gamma_{0}^{q} =-6\,C_{F},\] \[\gamma_{1}^{q} =C_{F}^{2}\Big{(}-6+8\pi^{2}-96\zeta_{3}\Big{)}+C_{F}C_{A}\Big{(}-\frac{1922}{27}-\frac{22}{3}\pi^{2}+104\zeta_{3}\Big{)}+C_{F}N_{F}\Big{(}\frac{260}{27}+\frac{4}{3}\pi^{2}\Big{)}\,, \tag{A.3}\]
and finally, for the gluon, [92],
\[\gamma_{0}^{g} =-\beta_{0},\] \[\gamma_{1}^{g} =C_{A}^{2}\Big{(}-\frac{2768}{27}+\frac{22}{9}\pi^{2}+8\zeta_{3}\Big{)}+C_{A}N_{F}\Big{(}\frac{512}{27}-\frac{4}{9}\pi^{2}\Big{)}+8C_{F}N_{F}. \tag{A.4}\]
## Appendix B One-loop master integrals
In order to obtain a result for the two-loop amplitudes up to \(\mathcal{O}(\epsilon^{2})\), we need to compute the one-loop master integrals and amplitudes up to \(\mathcal{O}(\epsilon^{4})\). Denoting

\[I_{a_{1},\ldots,a_{4}}=\int\frac{e^{\epsilon\gamma_{E}}}{i\pi^{\frac{D}{2}}}\,\frac{d^{D}k}{(k^{2})^{a_{1}}\,((k-p_{1})^{2})^{a_{2}}\,((k-p_{1}-p_{2})^{2})^{a_{3}}\,((k-p_{1}-p_{2}-p_{3})^{2})^{a_{4}}}\,, \tag{B.1}\]
a canonical basis for this process is (for simplicity we set \(M_{H}^{2}=-1\) again)
\[\vec{I}=\begin{pmatrix}-(1-y-z)\epsilon\,I_{2,0,1,0}\\ -z\epsilon\,I_{0,2,0,1}\\ \epsilon\,I_{2,0,0,1}\\ (1-y-z)z\epsilon^{2}I_{1,1,1,1}\end{pmatrix}\,, \tag{B.2}\]
representing 3 bubble diagrams with a squared propagator in the variables \(s_{12}\), \(s_{23}\) and \(M_{H}^{2}\), respectively, and finally the box diagram. The former three can be related to simple bubbles by IBPs:
\[I_{2,0,1,0} =-\frac{1-2\epsilon}{-1+y+z}\text{Bub}_{s_{12}}=\frac{(1-y-z)^{-1-\epsilon}}{\epsilon}C(\epsilon)\,, \tag{B.3}\] \[I_{0,2,0,1} =\frac{1-2\epsilon}{z}\text{Bub}_{s_{23}} =\frac{z^{-1-\epsilon}}{\epsilon}C(\epsilon)\,, \tag{B.4}\] \[I_{2,0,0,1} =(1-2\epsilon)\text{Bub}_{m_{H}^{2}} =\frac{1}{\epsilon}C(\epsilon)\,, \tag{B.5}\]
where we used the well-known expression for the bubble,
\[\text{Bub}_{s}=s^{-\epsilon}\frac{1}{\epsilon(1-2\epsilon)}\left(-e^{\epsilon\gamma_{E}}\frac{\Gamma(1+\epsilon)\Gamma(1-\epsilon)^{2}}{\Gamma(1-2\epsilon)}\right)=s^{-\epsilon}\frac{1}{\epsilon(1-2\epsilon)}C(\epsilon)\,. \tag{B.6}\]
The differential equations take the form \(\partial_{y}\vec{I}=\epsilon A\vec{I}\) and \(\partial_{z}\vec{I}=\epsilon B\vec{I}\), where
\[A=\begin{pmatrix}\frac{1}{x}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ \frac{2}{y}-\frac{2}{y+z}&\frac{2}{y}&-\frac{2}{y}+\frac{2}{y+z}&\frac{1}{y}+\frac{1}{x}\end{pmatrix}\,, \tag{B.7}\]
\[B=\begin{pmatrix}\frac{1}{x}&0&0&0\\ 0&-\frac{1}{z}&0&0\\ 0&0&0&0\\ -\frac{2}{y+z}&-\frac{2}{z-1}&\frac{2}{z-1}+\frac{2}{y+z}&-\frac{1}{z}+\frac{1}{x}\end{pmatrix} \tag{B.8}\]
and \(x=1-y-z\). They confirm the scaling of the bubbles given in (B.3)–(B.5), and allow us to compute the box. The boundary condition obtained by multiplying the equation in \(y\) by \(y\) and taking the limit \(y\to 0\) reads
\[0=2I_{2,0,1,0}\big{|}_{y=0}+2I_{0,2,0,1}\big{|}_{y=0}-2I_{2,0,0,1}\big{|}_{y=0}+I_{1,1,1,1}\big{|}_{y=0} \tag{B.9}\]
and is the only condition needed to find a unique solution for the box integral. We could have obtained this requirement also by considering the limit \(y\to 0\) for \(\epsilon<0\). The solution to the DE in \(y\) near the point \(y=0\) is
\[e^{\epsilon\log y\,\lim_{y\to 0}[yA_{y}]}\vec{I}\big{|}_{y=0}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 2(-1+y^{\epsilon})&2(-1+y^{\epsilon})&-2(-1+y^{\epsilon})&y^{\epsilon}\end{pmatrix}\vec{I}\big{|}_{y=0}\,.\] (B.10)
Clearly, requiring the non-appearance of terms \(y^{\epsilon}\) in the solution replicates the condition (B.9). The strength of the second approach is that it applies to regular as well as singular limits. Expanded solutions for all four integrals can be found in the supplementary material.
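As an explicit illustration of (B.10) — a sketch we add here, not part of the original computation — the residue matrix \(\lim_{y\to 0}[yA]\) can be exponentiated with sympy's Matrix.exp:

```python
import sympy as sp

eps, y = sp.symbols('epsilon y', positive=True)
M = sp.Matrix([[0, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0],
               [2, 2, -2, 1]])   # lim_{y->0} [y*A], with A as in (B.7)
E = sp.simplify((eps*sp.log(y)*M).exp())
print(E)  # last row: 2(y^eps - 1), 2(y^eps - 1), -2(y^eps - 1), y^eps
```

Since \(M^{2}=M\), the exponential is simply \(\mathbb{1}+(y^{\epsilon}-1)M\), and demanding that the \(y^{\epsilon}\) terms cancel reproduces (B.9) immediately.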
|
2304.12880 | Material Hardness Descriptor Derived by Symbolic Regression | Hardness is a materials' property with implications in several industrial
fields, including oil and gas, manufacturing, and others. However, the
relationship between this macroscale property and atomic (i.e., microscale)
properties is unknown and in the last decade several models have unsuccessfully
tried to correlate them in a wide range of chemical space. The understanding of
such relationship is of fundamental importance for discovery of harder
materials with specific characteristics to be employed in a wide range of
fields. In this work, we have found a physical descriptor for Vickers hardness
using a symbolic-regression artificial-intelligence approach based on
compressed sensing. SISSO (Sure Independence Screening plus Sparsifying
Operator) is an artificial-intelligence algorithm used for discovering simple
and interpretable predictive models. It performs feature selection from up to
billions of candidates obtained from several primary features by applying a set
of mathematical operators. The resulting sparse SISSO model accurately
describes the target property (i.e., Vickers hardness) with minimal complexity.
We have considered the experimental values of hardness for binary, ternary, and
quaternary transition-metal borides, carbides, nitrides, carbonitrides,
carboborides, and boronitrides of 61 materials, on which the fitting was
performed. The found descriptor is a non-linear function of the microscopic
properties, with the most significant contribution being from a combination of
Voigt-averaged bulk modulus, Poisson's ratio, and Reuss-averaged shear modulus.
Results of high-throughput screening of 635 candidate materials using the found
descriptor suggest the enhancement of material's hardness through mixing with
harder yet metastable structures (e.g., metastable VN, TaN, ReN$_2$,
Cr$_3$N$_4$, and ZrB$_6$ all exhibit high hardness). | Christian Tantardini, Hayk A. Zakaryan, Zhong-Kang Han, Tariq Altalhi, Sergey V. Levchenko, Alexander G. Kvashnin, Boris I. Yakobson | 2023-04-25T14:51:46Z | http://arxiv.org/abs/2304.12880v4 | # Supporting Information for Hardness Descriptor Derived from Symbolic Regression
###### Abstract
All data for the datasets are available upon request via the GitHub repository [https://github.com/AlexanderKvashnin/SISSO_hardness.git](https://github.com/AlexanderKvashnin/SISSO_hardness.git)
## Predicted descriptors
Below we list the descriptors predicted by SISSO that were used to calculate the RMSE and CV10 shown in Figure 1a.
\[H^{1D}=0.182\cdot\frac{B_{R}}{\sigma\sqrt[3]{Y}}-6.191 \tag{1}\]
\[H^{2D}=0.147\cdot\frac{B_{V}}{\sigma\sqrt[3]{G_{R}}}-1.136\cdot\frac{B_{R}\log R _{X}}{A_{W}}-5.679 \tag{2}\]
\[H^{3D}=0.659\cdot\frac{B_{R}}{\sigma\sqrt[3]{Y}}-1.405\cdot\frac{G_{V}}{A_{W}} \cdot\log R_{X}-0.042\cdot\frac{Fr}{R_{N}\log el}-12.221 \tag{3}\]
\[H^{4D}=0.677\cdot\frac{B_{R}}{\sigma\sqrt[3]{Y}}-0.133\cdot\frac{Y}{D}\cdot\log R_{X}+0.041\cdot\frac{Fr}{R_{N}\log el}-13.228\cdot\frac{I_{W}}{I_{X}\sqrt{R_{W}}}-1.471 \tag{4}\]

\[H^{5D}=0.155\cdot\frac{B_{R}}{\sigma\sqrt[3]{G_{V}}}-0.353\cdot\frac{G_{V}}{D}\cdot\log R_{X}+0.054\cdot\frac{Fr}{R_{W}\log el}-1027\cdot\frac{|B_{V}-G_{R}|}{\exp A_{N}}+3.190\cdot\frac{R_{W}}{el\,|B_{R}-G_{V}|}-5.873 \tag{5}\]

\[H^{6D}=0.177\cdot\frac{B_{R}}{\sigma\sqrt[3]{G_{V}}}-41.972\cdot\frac{\log R_{X}}{A_{W}}\cdot\sigma+0.046\cdot\frac{G_{R}}{R_{N}\log el}-1175\cdot\frac{|B_{R}-G_{R}|}{\exp A_{N}}+0.047\cdot\frac{D^{3}}{|B_{V}-G_{V}|}-0.963\cdot\frac{A_{X}}{A_{W}}\cdot\sqrt{A_{N}}+3.815 \tag{6}\]
Figure 1: Correlation between Voigt-averaged bulk modulus and Reuss-averaged shear modulus of stable and metastable structures among borides, carbides, and nitrides. Colorbar shows the energy of formation above the convex hull denoting stability of each structure.
Figure 2: Correlation between Voigt-averaged bulk modulus and Reuss-averaged shear modulus of only stable structures among borides, carbides, and nitrides.
Figure 3: Correlation between SISSO hardness and \(B_{v}/\sigma\) ratio of stable and metastable structures among borides, carbides, and nitrides. Colorbar shows the Pough ratio.
Figure 4: Distribution of CV10 errors for XGBoost model. Maximum absolute error is 25.6 GPa, RMSE is 7.8 GPa.
Figure 5: Correlation between SISSO hardness and XGBoost [7] model for considered stable carbides, borides and nitrides only. Colorbar shows the difference between two sets of data.
# Hardness Descriptor Derived from Symbolic Regression
Christian Tantardini
Hylleraas center, Department of Chemistry, UiT The Arctic University of Norway, PO Box 6050 Langnes, N-9037 Tromso, Norway.
Hayk A. Zakaryan
Department of Materials Science, Rice University, Houston, Texas 77005, United States of America.
Zhong-Kang Han
Institute of Solid State Chemistry and Mechanochemistry SB RAS, 630128, Novosibirsk, Russian Federation.
Sergey V. Levchenko
[email protected]
Alexander G. Kvashnin
[email protected]
###### Abstract
Hard and superhard materials are critical components in numerous industrial applications required for sustainable development. However, discovering new materials with high hardness is challenging, because hardness is a complex and multiscale property with a non-trivial connection to the atomic properties of a material. Here, we present a low-dimensional physical descriptor for Vickers hardness derived with a symbolic-regression artificial-intelligence approach to data analysis. The descriptor is a mathematical combination of materials' properties that are much easier to evaluate than hardness itself via atomistic simulations, making it suitable for high-throughput screening. The developed artificial-intelligence model was trained on experimental values of hardness, and high-throughput screening was then performed among 635 compounds, including binary, ternary, and quaternary transition-metal borides, carbides, nitrides, carbonitrides, carboborides, and boronitrides, to find optimal superhard materials. The proposed descriptor is an analytic formula that is physically interpretable, allowing us to gain insight into the multiscale relationship between atomic structure (i.e., micro) and hardness (i.e., macro). In detail, we found that the hardness is proportional to the Voigt-averaged bulk modulus and inversely proportional to the Poisson's ratio and the Reuss-averaged shear modulus. The results of the high-throughput search suggest a possible way of tuning the hardness of existing materials by mixing them with harder but metastable structures (e.g., metastable VN, TaN, ReN\({}_{2}\), Cr\({}_{3}\)N\({}_{4}\), and ZrB\({}_{6}\) possess high hardness).
## 1 Introduction
Materials that display high mechanical properties are vital for many industrial applications, such as mining, manufacturing, oil and gas production, etc. In particular, hard and superhard materials are important for numerous con
struction and manufacturing applications such as cutting, drilling, and as abrasives for grinding [1, 2, 3]. It is generally accepted that a material can be called superhard if its Vickers hardness exceeds 40 GPa, depending on the applied load [4, 5]. Conventional superhard materials are borides, carbides and nitrides of metals, which are characterised by strong covalent bonds between the nonmetal and the metal atoms [6, 7]. The hardest known single-crystal material is diamond, with hardness varying from 60 to 140 GPa depending on the measurement technique [8, 9].
Hardness is the macroscopic ability of a material to resist penetration by another material, called the _indenter_, which must be harder than the tested one. This intrinsic characteristic of the material depends on many other macroscopic properties, including fluidity, elastic stiffness, ductility, strength, crack resistance, viscosity, etc. [10]. Experimentally, Vickers hardness is measured from the area of the indentation imprint left by a four-sided diamond pyramid (the indenter) pressed into the surface, and is calculated as the ratio of the load applied by the indenter to the area of the imprint [10]. However, it is a serious challenge to calculate hardness analytically or from atomistic simulation techniques (i.e., from microscopic parameters). This limits the efficiency of large-scale screening for new hard and superhard materials via computational simulations.
Nowadays, there are many different empirical models allowing one to calculate the hardness of various materials from microscopic properties such as bond energy, band gap, valence electron density, and other properties, which can be obtained from calculations and/or experiments [11, 12, 13, 14, 15]. Some of the empirical models [16, 17, 18] require the elastic properties of materials as input to calculate hardness. These approaches generally seek correlations that relate hardness to the electronic structure and mechanical characteristics obtained from _ab initio_ calculations [19]. Indeed, multiple macroscopic and microscopic parameters affect the hardness, and the relation is non-linear, making it non-trivial to identify by simple means such as linear regression. A promising direction for finding this relation is given by artificial intelligence (AI) [20]. Recent developments in this area [21, 22] are opening exciting opportunities for predicting hardness efficiently, and for discovering hard materials via high-throughput screening.
From another point of view, straightforward atomistic models cannot correctly describe hardness, as they do not include the macroscopic properties of materials. Recently, a new method allowing the calculation of nanohardness based on the combination of first-principles calculations with active learning on local atomic environments was proposed by Podryabinkin _et al._[23]. Although highly accurate, the proposed method is still too resource-consuming to be appropriate for high-throughput screening for hard and superhard materials.
Here, we develop a new accurate model of hardness using a compressed-sensing symbolic-regression approach called SISSO (sure independence screening and sparsifying operator) [24] to identify the best low-dimensional, computationally efficient descriptor for hardness. Our model allows one to predict hardness with high accuracy compared to previous models based both on empirical knowledge and on machine learning. We used AI to perform high-throughput screening across the available databases for hard and superhard materials.
## 2 Computational Details
SISSO [24, 25] combines sure independence screening (SIS) and a sparsifying operator (SO) to find the lowest-dimensional model describing the target property. At the beginning, the user defines the entire space of features. This space contains all possible features that can be correlated with the target property (in our case, hardness). We selected 20 primary features that form the initial one-dimensional space created by SIS; they are mutually independent and are ranked by their correlation with hardness. The 10-fold cross-validation (CV10) method was used to measure the correlation between the features and the
hardness. During CV10 the dataset is split into 10 subsets, and the descriptor identification along with the model training is performed using 9 subsets. Then the error in predicting the properties of the systems in the remaining subset is evaluated with the obtained model. The CV10 error is defined as the average value of the test errors obtained for each of the ten subsets. After that, the primary features are combined pairwise using the SO to generate descriptors of increasing dimension as follows:
\[\hat{H}^{(m)}[\phi_{1},\phi_{2}]\equiv\{+,\,-,\,\times,\,/,\,\exp,\,\exp^{-},\,^{-1},\,^{2},\,^{3},\,\sqrt{\phantom{x}},\,\sqrt[3]{\phantom{x}},\,\log,\,|-|\} \tag{1}\]
where the superscript \(m\) indicates the dimension, and \(\phi_{1}\), \(\phi_{2}\) denote the pair of features to which an operator is applied. Starting from the primary space, all features are combined pairwise to generate a new space of features; the members of this new space are in turn combined pairwise with the primary features. This is done recursively to provide all combinations necessary to generate a descriptor of the requested dimension. In practice, a huge pool of more than ten million candidate descriptors is first constructed iteratively by combining the user-defined primary features with the set of mathematical operators. The validity of the resulting descriptor at each dimension was evaluated by the RMSE and the average root-mean-square error of CV10 [26, 27, 28]. In SISSO, over-fitting may occur with increasing dimensionality of the descriptor (i.e., the number of complex features used in the construction of the linear model). The descriptor dimension at which the CV10 error starts increasing identifies the optimal dimensionality of the descriptor.
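To make the SIS + SO workflow concrete, the following toy sketch (our own pedagogical stand-in, not the actual SISSO code of Ref. [24]) screens synthetic candidate features by correlation with the target and then exhaustively fits two-feature linear models on the shortlist; all array sizes and the planted features are arbitrary choices:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(61, 1260))                    # 61 "materials" x 1260 candidate features
y = 3.0*X[:, 5] - 2.0*X[:, 42] + 0.1*rng.normal(size=61)

# SIS step: keep the candidates most correlated with the target
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
shortlist = np.argsort(corr)[-30:]

def residual(pair):
    # least-squares fit of a 2D linear model with intercept (the l0 step)
    D = np.c_[X[:, list(pair)], np.ones(len(y))]
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ beta)

best = min(itertools.combinations(shortlist, 2), key=residual)
print(sorted(int(j) for j in best))                # -> [5, 42], the planted pair
```

The exhaustive fit over the screened shortlist recovers the two planted features, mirroring how SISSO selects a sparse descriptor out of millions of candidates.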
All primary features included in this study were obtained either from the literature [29] or from the Materials Project database [30]. The values of the primary features and of the properties for the training datasets are available in the GitHub repository (see Supporting Information). The dataset of the target property (hardness) was constructed based on the information from Ref. [22].
## 3 Results and Discussion
We have used the 20 primary features presented in Table 2 to construct 1260 candidate features using a set of mathematical operators (see Eq. 1) with the SISSO model. The list of primary features includes the radii of the atoms in the compound, density, bulk and shear moduli, components of the elasticity tensor, elastic anisotropy, Poisson's ratio, Young's modulus, etc. (see Table 2).
To find descriptors of hardness with SISSO, we collected a dataset of 343 compounds from the Materials Project database [30]. Materials for which no reliable experimental data for Vickers hardness (i.e., the target property) could be found were excluded from the collected dataset, as well as those that were not stable according to the DFT calculations from the same database. This resulted in a total of 61 materials for our training dataset, containing both hard materials (borides, carbides, nitrides, etc.) and relatively soft ionic crystals and oxides (NaCl, Al\({}_{2}\)O\({}_{3}\), etc.). The primary features were then calculated for each material in the final dataset.
The maximum descriptor dimension in SISSO was set to six. The root-mean-square errors (RMSEs) of the SISSO models for dimensions from one to six are shown in Figure 1, together with the RMSE of CV10. While the RMSE monotonically decreases with increasing dimension of the descriptor, the CV10 error increases for dimensions larger than 2. Thus, the optimal descriptor dimension is 2 (highlighted by the vertical dashed line in Figure 1a). This 2D descriptor has a relatively complex analytical form:
\[H_{predicted}^{SISSO}=0.147\cdot\frac{B_{V}}{\sigma\sqrt[3]{G_{R}}}-1.136\cdot \frac{B_{R}\log R_{X}}{A_{W}}-5.679 \tag{2}\]
where \(B_{V}\) and \(B_{R}\) are the bulk moduli calculated using the Voigt and Reuss averaging methods [31, 32], respectively, \(G_{R}\) is the shear modulus calculated using the Reuss averaging method, \(\sigma\) is the Poisson's ratio, \(A_{W}\) is the average atomic mass of the compound, and \(R_{X}\) is the maximum atomic radius of the species in
the compound. SISSO descriptors of other dimensions used in CV10 are listed in the Supporting Information. Atomic radii and masses were taken from the Python materials-analysis library Pymatgen [33]. The distribution of errors for the prediction of hardness using the optimal SISSO model with the 2D descriptor is shown in Figure 1b. We achieved a relatively small RMSE for this model, equal to 4.28 GPa, with a maximum absolute error (MaxAE) of 10.1 GPa on the training set using CV10.
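Once the elastic data are known, Eq. (2) is cheap to evaluate. The minimal sketch below computes \(A_{W}\) and \(R_{X}\) with Pymatgen as described above; the WC moduli are the values quoted later in the text, while the Poisson's ratio and the choice \(B_{R}=B_{V}\) are illustrative assumptions of ours, not values from the training dataset:

```python
import math
from pymatgen.core import Composition, Element

def descriptor_inputs(formula):
    # A_W: average atomic mass; R_X: largest atomic radius among the species
    amounts = Composition(formula).get_el_amt_dict()
    n = sum(amounts.values())
    A_W = sum(Element(el).atomic_mass * k for el, k in amounts.items()) / n
    R_X = max(Element(el).atomic_radius for el in amounts)
    return A_W, R_X

def H_sisso(B_V, B_R, G_R, sigma, formula):
    A_W, R_X = descriptor_inputs(formula)
    return 0.147*B_V/(sigma*G_R**(1/3)) - 1.136*B_R*math.log(R_X)/A_W - 5.679

# Hexagonal WC: B_V = 387 GPa, G_R = 276 GPa (quoted in the text);
# sigma = 0.21 and B_R = B_V are rough illustrative inputs.
print(H_sisso(B_V=387, B_R=387, G_R=276, sigma=0.21, formula="WC"))  # ~35 GPa
```

With these inputs the formula returns roughly 35 GPa, close to the value quoted for hexagonal WC below.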
In order to understand how each component of the two-dimensional descriptor influences the result, we evaluated an importance score \(IS\) quantifying the contribution of each term in Eq. 2 to the total error of our model. This was done by removing one component at a time from the descriptor and re-fitting the model with the remaining component. The resulting one-dimensional derivative models have the following form:
\[H_{1}=a_{1}\cdot\frac{B_{R}\log R_{X}}{A_{W}}+b_{1} \tag{3}\]
and
\[H_{2}=a_{2}\cdot\frac{B_{V}}{\sigma\sqrt[3]{G_{R}}}+b_{2} \tag{4}\]
Coefficients \(a_{1}\), \(b_{1}\), and \(a_{2}\), \(b_{2}\) were fitted separately for \(H_{1}\) and \(H_{2}\) by minimising RMSE, and are equal to \(a_{1}=15.384\), \(b_{1}=0\), and \(a_{2}=0.1485\), \(b_{2}=-7.2\).
The \(IS\) is then calculated using RMSE and MaxAE values for \(H_{1}\) and \(H_{2}\) obtained for our dataset as follows:
\[IS_{i}^{\text{RMSE}}=1-\frac{\text{RMSE}(H_{predicted})}{\text{RMSE}(H_{i})} \tag{5}\]
\[IS_{i}^{\text{MaxAE}}=1-\frac{\text{MaxAE}(H_{predicted})}{\text{MaxAE}(H_{i})} \tag{6}\]
Calculated importance scores based on RMSE and MaxAE are presented in Table 1. One can see that \(IS_{2}\) is generally less than 0.1 (both for RMSE and MaxAE), which means that the first descriptor component in Eq. 2 makes the major contribution to hardness according to our model. However, including both descriptor components in the SISSO model decreases both the RMSE and CV10 errors, as shown in Figure 1b. The RMSE values of \(H_{1}\) and \(H_{2}\) on the same dataset are 5.2 and 9.3 GPa, respectively, while their combination gives a lower value of 4.28 GPa. This again highlights the importance of using the 2D descriptor rather than a 1D one.
The obtained 2D model was then used to perform high-throughput screening for hard and superhard materials among binary, ternary, and quaternary transition-metal borides, carbides, and nitrides. We used the Materials Project database [30] to extract the required crystal
Figure 1: a) RMSE for the SISSO model and the average RMSE of CV10. Dashed vertical line denotes the optimal descriptor dimension. b) Distribution of errors for the best model for hardness using 2D descriptor. Maximum absolute error (MaxAE) is also shown.
structures of experimentally known and hypothetical compounds. In total, we collected 635 structures for the chosen classes of materials. For each structure we also extracted all properties required for the developed model, namely the bulk and shear moduli and the Poisson's ratio. The average atomic mass and the maximum atomic radius of each compound were added using the Pymatgen library [33].
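For reference, the extraction step can be sketched with Pymatgen's legacy MPRester interface; an API key is required, and the query fields below follow the legacy API documentation and may differ in newer API versions:

```python
from pymatgen.ext.matproj import MPRester

def fetch_elastic(chemsys, api_key):
    # Legacy-API query for compounds in a chemical system that have
    # computed elastic data; field names are legacy-API assumptions.
    with MPRester(api_key) as mpr:
        entries = mpr.query(
            {"chemsys": chemsys, "elasticity": {"$exists": True}},
            ["material_id", "pretty_formula", "e_above_hull", "elasticity"])
    return entries

# Example (hypothetical key): tungsten borides
# for e in fetch_elastic("B-W", api_key="YOUR_API_KEY"):
#     print(e["material_id"], e["pretty_formula"], e["e_above_hull"])
```

Each returned entry then supplies the moduli and Poisson's ratio needed to evaluate Eq. (2) for the screening.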
To analyse the collected data, we constructed the correlation plot between the SISSO Vickers hardness, bulk modulus, Poisson's ratio, and shear modulus for 635 inorganic compounds, excluding diamond, borocarbides, carbonitrides, and layered compounds, as shown in Figure 2a. The colour scale of the points shows the energy above the convex hull, which indicates the (meta)stability of each compound. One can clearly see a trend of increasing hardness with the \(B_{v}/\sigma\) value. There are also some exceptions from the general trend, showing high hardness with quite low shear modulus together with a low \(B_{v}/\sigma\) value. These outliers correspond to metastable structures (see red and green points in Figure 2a).
Despite the fact that \(G_{R}\) appears in the denominator of Eq. 2, the correlation between \(B_{V}\) and \(G_{R}\) (Figure S1 in the Supporting Information) results in an overall increase of the hardness with increasing shear modulus. The correlation between \(B_{V}\) and \(G_{R}\) for stable structures is even better (see Figure S2 in the Supporting Information). Moreover, all compounds that adhere to the general trend have a Pugh's ratio between 0.5 and 0.8, as shown in Figure S3 (Supporting Information). This demonstrates the nonlinearity of the relationship between hardness and the other properties, and the importance of accounting for this non-linearity when searching for hard materials.
In Figure 2a the well-known hard and superhard compounds are denoted directly as reference points, to show where other compounds are located with respect to them. The highest values of hardness belong to boride and carbide compounds (see Figure 2b,c).
Among the selected borides (Figure 2b) we can highlight one metastable compound, ZrB\({}_{6}\) (mp-1001788), located 0.4 eV/atom above the convex hull (according to Materials Project data) with a predicted SISSO hardness of 46 GPa. ZrB\({}_{6}\) has the calcium hexaboride crystal type, consisting of a 3D boron cage that leads to high bulk modulus and hardness. The influence of the boron cage on the mechanical and elastic properties of borides was shown previously for the case of hafnium borides [34]. It should be pointed out that such a crystal type is common for borides of rare-earth elements, while for transition metals it is an unusual metastable structure, having an extremely low Reuss-averaged shear modulus of 2 GPa and a low \(B_{V}/\sigma\) of 500 GPa (the Poisson's ratio is 0.39). However, this finding reveals a prospective way to increase the hardness of rare-earth borides by adding transition metals as substitutions in the crystal structure. High hardness is also predicted for the well-known superhard compounds TiB\({}_{2}\), ReB\({}_{2}\), HfB\({}_{2}\), and CrB\({}_{4}\) (see Figure 2b).
Among the carbides, the highest hardness of 46 GPa belongs to the cubic polymorphic modification of WC having the \(F\bar{4}3m\) space group (see Figure 2c). WC (mp-1008635) has the zincblende structure, where each tungsten atom is bonded to four equivalent carbon atoms forming corner-sharing WC\({}_{4}\) tetrahedra. The bulk and shear moduli of this structure are 249 and 3 GPa, respectively, leading to a very high Poisson's ratio of 0.48. Despite the high hardness, this structure is unstable, with an energy of formation 0.67 eV/atom above the convex hull (according to the data from Materials Project). The well-known hexagonal modification of WC has a SISSO hardness of 35 GPa, with bulk and shear moduli equal to 387 and 276 GPa, respectively. The predicted values agree well with experimental data and with those obtained by other models [19]. Hexagonal WC has the highest \(B_{V}/\sigma\) ratio among the considered carbides, equal to 1842 GPa. Two other structures with mechanical characteristics comparable to those of WC are CrC (mp-1018050) and MoC (mp-2305), see Figure 2c.
\begin{table}
\begin{tabular}{l l l} \hline & RMSE & MaxAE \\ \hline \(IS_{1}\) & 0.4865 & 0.5235 \\ \(IS_{2}\) & 0.0712 & 0.0580 \\ \end{tabular}
\end{table}
Table 1: Computed importance scores for the components of the 2D descriptor.
Both of them have a structure with the hexagonal \(P\bar{6}m2\) space group, the same as hexagonal WC. Each metal atom in the structure is bonded to six equivalent carbon atoms, forming a mixture of distorted face-, edge-, and corner-sharing \(\mathrm{MeC_{6}}\) pentagonal pyramids. The predicted SISSO hardness of both CrC and MoC is about 30 GPa. The bulk modulus of both structures is about 350 GPa, while the shear modulus is about 240 GPa. CrC is metastable, with an energy of formation 80 meV/atom above the convex hull, while MoC is stable, with a calculated energy of formation only 1 meV/atom above the convex hull.
The hardest compounds found among the nitrides are VN, TaN, and \(\mathrm{ReN_{2}}\), see Figure 2d. VN (mp-1002105) has the \(Pm\bar{3}m\) space group and is located 0.68 eV/atom above the convex hull. Its predicted SISSO hardness is 34 GPa and \(B_{V}/\sigma=1650\) GPa (the Poisson's ratio is 0.16). The TaN structure (mp-1009831), with a SISSO hardness of 31 GPa, has the \(P\bar{3}m2\) space group and is isostructural to the well-known WC structure. It has a Poisson's ratio of 0.21 and \(B_{V}/\sigma=1610\) GPa, see Figure 2d. Rhenium dinitride (mp-1019055) is located 0.49 eV/atom above the convex hull and has a predicted hardness of 32 GPa with \(B_{V}/\sigma=1650\) GPa.
Another interesting material among the nitrides is \(\mathrm{Cr_{3}N_{4}}\) (mp-1014460), see Figure 2d. This structure has the \(Pm\bar{3}m\) space group and can be viewed as a rocksalt structure with a missing atom in the \(4a\) Wyckoff position, leading to a fractional composition. \(\mathrm{Cr_{3}N_{4}}\) has a predicted SISSO hardness of 33 GPa with a low Poisson's ratio of 0.1, leading to a high \(B_{V}/\sigma=1380\) GPa.
To understand how our SISSO model of hardness correlates with other empirical and machine-learning models, we have predicted the hardness of the structures in the constructed dataset using the Teter [16], Chen [17], Mazhnik-Oganov [18], and XGBoost [35] models. Their correlations with the SISSO model for stable structures (those lying on the convex hull) are shown in Figure 3. The color scale shows the difference between the SISSO model and the considered reference model. We found good agreement of the predicted hardness values between our model and the Teter model, see Figure 3a.
\begin{table}
\begin{tabular}{l c c} \hline Name & Units & Abbreviation \\ \hline Density & \(g/cm^{3}\) & D \\ Voigt averaging of bulk modulus \(B_{V}\) & GPa & \(B_{V}\) \\ Reuss averaging of bulk modulus \(B_{R}\) & GPa & \(B_{R}\) \\ Voigt-Reuss-Hill averaging of bulk modulus \(B_{VRH}\) & GPa & \(B_{VRH}\) \\ Voigt averaging of shear modulus \(G_{V}\) & GPa & \(G_{V}\) \\ Reuss averaging of shear modulus \(G_{R}\) & GPa & \(G_{R}\) \\ Voigt-Reuss-Hill averaging of shear modulus \(G_{VRH}\) & GPa & \(G_{VRH}\) \\ Young’s modulus & GPa & Y \\ Fraction & & Fr \\ Elastic anisotropy & & el \\ Poisson’s ratio & & \(\sigma\) \\ Maximum atomic radius & Å & \(R_{X}\) \\ Minimum atomic radius & Å & \(R_{N}\) \\ Weighted atomic radius & Å & \(R_{W}\) \\ Maximum atomic weight & a.u. & \(A_{X}\) \\ Minimum atomic weight & a.u. & \(A_{N}\) \\ Weighted atomic weight & a.u. & \(A_{W}\) \\ Maximum first ionization energy & eV & \(I_{X}\) \\ Minimum first ionization energy & eV & \(I_{N}\) \\ Weighted first ionization energy & eV & \(I_{W}\) \\ \hline \end{tabular}
\end{table}
Table 2: Primary features used for the construction of the descriptor
The largest difference between the predictions is 12 GPa for hexagonal NaBPt\({}_{3}\) (mp-28614), and the next largest is 10 GPa for zincblende FeN (mp-6988). The largest difference between the SISSO model and Chen's model is 15.5 GPa, for NaBPt\({}_{3}\) (see Figure 3b). The SISSO hardness of this compound is 21.2 GPa, while Chen's hardness is about 5 GPa. Such a big difference may be caused by the highly anisotropic structure of NaBPt\({}_{3}\), which leads to a difference between the Reuss- and Voigt-averaged shear moduli of 33 GPa according to Materials Project. Our model uses the Reuss-averaged shear modulus, which is lower than the Voigt-Reuss-Hill average employed in Chen's model, and this leads to a higher predicted hardness for NaBPt\({}_{3}\).
The predictions of the recent Mazhnik-Oganov model [18] correlate well with our model (Figure 3c); the only exceptions are again NaBPt\({}_{3}\) and FeN, with differences similar to those for the Teter model.
The application of the machine-learning XGBoost model to the prediction of hardness was innovative and highly efficient [35]. We have trained the same XGBoost model as in Ref. [35] on our training set and then predicted the hardness of all considered compounds. First we performed the 10-fold cross-validation using the same technique and the same dataset as for the SISSO training, namely splitting the dataset into 10 subsets and training the XGBoost model on 9 of them. The CV10 error was defined as the average value of the test RMSE obtained for each of the ten subsets; it equals 7.8 GPa, which is twice the CV10 error of SISSO.
Figure 2: a) The SISSO \(H_{V}\) model predictions are plotted against \(B_{V}/\sigma\) for the 635 considered inorganic compounds. Specific classes of materials are also shown, including b) borides, c) carbides, and d) nitrides. The colorbar shows the energy of formation above the convex hull, denoting the stability of each structure.
The distribution of errors of the XGBoost model under CV10 is shown in the Supporting Information (Figure S4). The correlation between the XGBoost and SISSO models is shown in Figure 3d. There are many structures with hardness differences of 12 to 17 GPa. Most of these structures are carbides of rare-earth metals, namely Y\({}_{2}\)C (mp-1334), Sc\({}_{4}\)C\({}_{3}\) (mp-15661), Y\({}_{4}\)C\({}_{5}\) (mp-9459), and Y\({}_{2}\)ReC\({}_{2}\) (mp-21003). Such a large difference between the hardness predicted by the XGBoost and SISSO models for these compounds may come from the XGBoost hyperparameters, which need to be re-tuned before training on a new training set. Considering only transition metal borides, carbides, and nitrides, one obtains much smaller differences between XGBoost and SISSO, as XGBoost describes these classes of compounds better (see Figure S5 in Supporting Information). Thus, our results show that SISSO found an important descriptor of hardness, the \(B_{V}/\sigma\) ratio, which allows one to quickly estimate the hardness of a compound _for a wide range of chemical compositions and crystal structures_. A higher \(B_{V}/\sigma\) ratio leads to higher hardness.
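A minimal sketch of the CV10 protocol described above is given below; `X` and `y` are random stand-ins for the 61-compound matrix of primary features (Table 2) and the reference Vickers hardness values, and the XGBoost hyperparameters are purely illustrative (as noted above, they should be re-tuned for a new training set).

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(61, 20))         # stand-in for the 61 x 20 primary-feature matrix
y = rng.uniform(5.0, 50.0, size=61)   # stand-in for Vickers hardness values (GPa)

model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
scores = cross_val_score(model, X, y, cv=10, scoring="neg_root_mean_squared_error")
cv10_rmse = -scores.mean()            # average test RMSE over the ten folds
print(f"CV10 RMSE: {cv10_rmse:.2f} GPa")
```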
## Conclusion
A new descriptor for the prediction of the hardness of materials has been developed using a compressed-sensing symbolic regression approach (SISSO) and applied to high-throughput screening of a large number of candidate materials with diverse chemical compositions and crystal structures. A training set of 61 compounds, containing both hard materials (borides, carbides, nitrides, etc.) and relatively soft ionic crystals and oxides (NaCl, Al\({}_{2}\)O\({}_{3}\), etc.), was generated and used for the development of the hardness descriptor.
Figure 3: Correlations between the SISSO hardness and the a) Teter [16], b) Chen [17], c) Mazhnik-Oganov [18], and d) XGBoost [35] models for the considered stable structures. The colorbar shows the difference between the two sets of data.
The obtained two-dimensional SISSO descriptor describes hardness based on the following features of a material: the bulk modulus calculated using the Voigt and Reuss averaging methods, the shear modulus calculated using the Reuss averaging method, the Poisson's ratio, the average atomic mass of the compound, and the maximum atomic radius of the species in the compound. The predictive power and accuracy of the obtained model were validated by employing a 10-fold cross-validation technique. The root-mean-square error was estimated to be 4.28 GPa, with a maximum absolute error of 10.1 GPa.
We have used the developed hardness descriptor to screen for promising hard and superhard materials across the Materials Project database among binary, ternary and quaternary transition-metal borides, carbides, nitrides, carbonitrides, carboborides, and boronitrides. Overall, the hardness of 343 materials was predicted. Our results reveal specific compounds and classes of compounds that may include hard materials. The proposed descriptor is computationally efficient, scalable, and transferable, and is therefore poised to modernize the search for new superhard materials.
**Acknowledgement** Calculations were carried out on the _ElGatito_ and _LaGatita_ supercomputers of the Industry-Oriented Computational Discovery group at the Skoltech Project Center for Energy Transition and ESG. Data collection and hardness prediction were supported by Russian Science Foundation (Grant No. 20-12-00097). The SISSO model development and training was supported by RFBR-INSF grant 20-53-56065.
## Competing Interests
The Authors declare no Competing Financial or Non-Financial Interests
## Data Availability Statement
Data available from github [https://github.com/AlexanderKvashnin/SISS0_hardness.git](https://github.com/AlexanderKvashnin/SISS0_hardness.git) on request from the authors.
## Author Contributions
C.T., Z.-K.H., and H.A.Z. produced and processed the data. C.T. and A.G.K. wrote the original draft of the manuscript. S.V.L. and A.G.K. led the work and edited the manuscript. All the authors provided critical feedback and helped shape the research.
|
2306.02232 | Stability of Rellich-Sobolev type inequality involving Hardy term for
bi-Laplacian | For $N\geq 5$ and $0<\mu<N-4$, we first show a non-degenerate result of the
extremal for the following Rellich-Sobolev type inequality
\begin{align*}
& \int_{\mathbb{R}^N}|\Delta u|^2 \mathrm{d}x
-C_{\mu,1}\int_{\mathbb{R}^N}\frac{|\nabla u|^2}{|x|^2} \mathrm{d}x
+C_{\mu,2}\int_{\mathbb{R}^N}\frac{u^2}{|x|^4} \mathrm{d}x
\geq \mathcal{S}_\mu\left(\int_{\mathbb{R}^N}|u|^{\frac{2N}{N-4}}
\mathrm{d}x\right)^\frac{N-4}{N},\quad u\in C^\infty_0(\mathbb{R}^N),
\end{align*} where $C_{\mu,1}$, $C_{\mu,1}$ and $\mathcal{S}_\mu$ are
constants depending on $\mu$, furthermore equality only holds for some radial
functions, which is a key ingredient in analyzing the blow-up phenomena of
solutions to various elliptic equations on bounded or unbounded domain.
Moreover, by using a spectral estimate combined with a compactness argument, we
give the remainder term of the above inequality. | Shengbing Deng, Xingliang Tian | 2023-06-04T02:01:21Z | http://arxiv.org/abs/2306.02232v2 | # Stability of Rellich-Sobolev type inequality involving Hardy term for bi-Laplacian
###### Abstract.
For \(N\geq 5\) and \(0<\mu<N-4\), we first show a non-degeneracy result for the extremal of the following Rellich-Sobolev type inequality
\[\int_{\mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x-C_{\mu,1}\int_{\mathbb{R}^{N}} \frac{|\nabla u|^{2}}{|x|^{2}}\mathrm{d}x+C_{\mu,2}\int_{\mathbb{R}^{N}}\frac{ u^{2}}{|x|^{4}}\mathrm{d}x\geq\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}|u|^{ \frac{2N}{N-4}}\mathrm{d}x\right)^{\frac{N-4}{N}},\quad u\in C_{0}^{\infty}( \mathbb{R}^{N}),\]
where \(C_{\mu,1}\), \(C_{\mu,2}\) and \(\mathcal{S}_{\mu}\) are constants depending on \(\mu\); furthermore, equality holds only for certain radial functions. This non-degeneracy result is a key ingredient in analyzing the blow-up phenomena of solutions to various elliptic equations on bounded or unbounded domains. Moreover, by using a spectral estimate combined with a compactness argument, we give the remainder term of the above inequality.
Key words and phrases: Rellich-Sobolev inequality; weighted bi-Laplacian problem; non-degeneracy; stability; remainder term
## 1. **Introduction**
### Motivation
Let us recall the classical Sobolev inequality which states that for \(N\geq 3\), there exists \(\mathcal{S}=\mathcal{S}(N)>0\) such that
\[\|\nabla u\|_{L^{2}(\mathbb{R}^{N})}^{2}\geq\mathcal{S}\|u\|_{L^{2^{*}}( \mathbb{R}^{N})}^{2},\quad\text{for all}\quad u\in\mathcal{D}_{0}^{1,2}( \mathbb{R}^{N}), \tag{1.1}\]
where \(2^{*}=2N/(N-2)\) and \(\mathcal{D}_{0}^{1,2}(\mathbb{R}^{N})\) denotes the closure of \(C_{c}^{\infty}(\mathbb{R}^{N})\) with respect to the norm \(\|u\|_{\mathcal{D}_{0}^{1,2}(\mathbb{R}^{N})}=\|\nabla u\|_{L^{2}(\mathbb{R}^{ N})}\). By using rearrangement methods, Talenti [29] found the optimal constant and the extremals for inequality (1.1). Indeed, equality in (1.1) is achieved by the functions
\[V_{z,\lambda}(x)=A\left(\frac{\lambda}{1+\lambda^{2}|x-z|^{2}}\right)^{\frac{ N-2}{2}},\]
with \(A\in\mathbb{R}\), \(z\in\mathbb{R}^{N}\) and \(\lambda>0\). It is well known that the Euler-Lagrange equation associated to (1.1) is
\[-\Delta u=|u|^{2^{*}-2}u\quad\text{in}\quad\mathbb{R}^{N}. \tag{1.2}\]
By Caffarelli et al. [7], it is known that all positive solutions are the Talenti bubbles \(V_{z,\lambda}(x)\) with \(A=[N(N-2)]^{\frac{N-2}{4}}\).
In [2], Brezis and Lieb asked whether a remainder term - proportional to the quadratic distance of the function \(u\) to the manifold \(\mathcal{M}:=\{cV_{\lambda,z}:c\in\mathbb{R},\lambda>0,z\in\mathbb{R}^{N}\}\) - can be added to the right-hand side of (1.1). This question was answered affirmatively by Bianchi and Egnell [1].
In this paper we mainly consider the Rellich-Sobolev type inequality stated in the abstract: for \(N\geq 5\) and \(0<\mu<N-4\),

\[\int_{\mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x-C_{\mu,1}\int_{\mathbb{R}^{N}}\frac{|\nabla u|^{2}}{|x|^{2}}\mathrm{d}x+C_{\mu,2}\int_{\mathbb{R}^{N}}\frac{u^{2}}{|x|^{4}}\mathrm{d}x\geq\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}|u|^{\frac{2N}{N-4}}\mathrm{d}x\right)^{\frac{N-4}{N}},\quad u\in C_{0}^{\infty}(\mathbb{R}^{N}), \tag{1.4}\]

where \(C_{\mu,1}\), \(C_{\mu,2}\) and \(\mathcal{S}_{\mu}\) are constants depending on \(\mu\). Let us also recall the second-order Sobolev inequality: for \(N\geq 5\) there exists \(\mathcal{S}_{0}>0\) such that

\[\int_{\mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x\geq\mathcal{S}_{0}\left(\int_{\mathbb{R}^{N}}|u|^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}},\quad\text{for all}\quad u\in\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N}), \tag{1.7}\]

where \(2^{**}=\frac{2N}{N-4}\) and \(\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N})\) denotes the closure of \(C_{c}^{\infty}(\mathbb{R}^{N})\) with respect to the norm \(\|u\|_{\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N})}=\|\Delta u\|_{L^{2}(\mathbb{R}^{N})}\). The Euler-Lagrange equation associated to (1.7) is the critical Sobolev equation
\[\Delta^{2}u=u^{2^{**}-1},\quad u>0\quad\text{in}\quad\mathbb{R}^{N}. \tag{1.8}\]
Smooth solutions to (1.8) have been completely classified by Lin [23], that is, the author proved that they are given by \(U_{0,\lambda,z}(x)=\lambda^{\frac{N-4}{2}}U_{0}(\lambda(x-z))\) for \(\lambda>0\) and \(z\in\mathbb{R}^{N}\), where
\[U_{0}(x)=[(N-4)(N-2)N(N+2)]^{\frac{N-4}{8}}\left(1+|x|^{2}\right)^{-\frac{N-4}{ 2}},\]
and they are unique (up to scalar multiplications) extremal functions for (1.7). In [25], Lu and Wei classified all solutions of the following linearized equation
\[\Delta^{2}v=\alpha U_{0}^{2^{**}-2}v\quad\text{in}\quad\mathbb{R}^{N},\quad v \in\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N}), \tag{1.9}\]
that is, the authors proved that the eigenvalues \(\alpha\) of the above problem are discrete:
\[\alpha_{1}=1,\quad\alpha_{2}=\alpha_{3}=\cdots=\alpha_{N+2}=2^{**}-1<\alpha_{N+ 3}\leq\cdots\]
and the corresponding eigenfunction spaces are
\[V_{1}=\text{Span}\{U_{0}\},\quad V_{2}=\text{Span}\left\{\frac{N-4}{2}U_{0}+x\cdot\nabla U_{0},\quad\frac{\partial U_{0}}{\partial x_{i}},i=1,\ldots,N\right\},\cdots.\]
Moreover, Lu and Wei [25] proved the stability of inequality (1.7) by combining this with a compactness argument, which extends the famous work of Bianchi and Egnell [1] to the higher-order case; that is, there exists a constant \(c_{\text{LW}}>0\) such that for all \(u\in\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N})\),
\[\int_{\mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x-\mathcal{S}_{0}\left(\int_{ \mathbb{R}^{N}}|u|^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}}\geq c_{\text{ LW}}\inf_{v\in\mathcal{M}_{0}}\|\Delta(u-v)\|_{L^{2}(\mathbb{R}^{N})}^{2},\]
where \(\mathcal{M}_{0}:=\{cU_{0,\lambda,z}:c\in\mathbb{R},\ \lambda>0,\ z\in\mathbb{R}^{N}\}\) is the set of extremals for (1.7).
On the other hand, for \(N\geq 5\) and \(u\in C_{0}^{\infty}(\mathbb{R}^{N})\), let us recall some classical inequalities. The classical Hardy inequality states that
\[\int_{\mathbb{R}^{N}}\frac{|\nabla u|^{2}}{|x|^{2}}\mathrm{d}x\geq\left(\frac {N-4}{2}\right)^{2}\int_{\mathbb{R}^{N}}\frac{u^{2}}{|x|^{4}}\mathrm{d}x,\]
see [19]. And the Rellich inequality reads
\[\int_{\mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x\geq\left(\frac{N(N-4)}{4} \right)^{2}\int_{\mathbb{R}^{N}}\frac{u^{2}}{|x|^{4}}\mathrm{d}x,\]
see [26]. Furthermore, Tertikas and Zographopoulos [30] established the following Hardy-Rellich inequality
\[\int_{\mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x\geq\left(\frac{N}{2}\right)^{ 2}\int_{\mathbb{R}^{N}}\frac{|\nabla u|^{2}}{|x|^{2}}\mathrm{d}x.\]
Note that the above constants are sharp and the inequalities are strict for any nontrivial function. Although the above three inequalities cannot be achieved by nontrivial functions, remainder terms on the right-hand side have been considered, see [27, 30]. A direct application of Theorem A is the following: let \(N\geq 5\) and \(0<\lambda<\frac{N^{2}(N-4)^{2}}{16}\); then for all \(u\in\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N})\) it holds that
\[\int_{\mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x-\lambda\int_{\mathbb{R}^{N}} \frac{u^{2}}{|x|^{4}}\mathrm{d}x\geq\left(1-\frac{16\lambda}{N^{2}(N-4)^{2}} \right)^{(N-1)/N}\mathcal{S}_{0}\left(\int_{\mathbb{R}^{N}}|u|^{2^{**}} \mathrm{d}x\right)^{\frac{2}{2^{**}}},\]
furthermore, the constant \(\left(1-\frac{16\lambda}{N^{2}(N-4)^{2}}\right)^{(N-1)/N}\mathcal{S}_{0}\) is sharp and the inequality is strict for any nontrivial function, see [11, Corollary 1.9].
### Problem setup and main results
In the present paper, we are mainly concerned with the stability of the Rellich-Sobolev inequality (1.4) stated in the abstract; that is, we will give its remainder term in an appropriate space, which extends the work of Lu and Wei [25] to the Hardy-Rellich case.
Define
\[\mathcal{E}_{\mu}:=\left\{u\in\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N})|\int_{ \mathbb{R}^{N}}|\Delta u|^{2}\mathrm{d}x-C_{\mu,1}\int_{\mathbb{R}^{N}}\frac{| \nabla u|^{2}}{|x|^{2}}\mathrm{d}x+C_{\mu,2}\int_{\mathbb{R}^{N}}\frac{u^{2}}{ |x|^{4}}\mathrm{d}x<\infty\right\},\]
with the inner product
\[\langle u,u\rangle_{\mu}:=\int_{\mathbb{R}^{N}}\Delta u\Delta v\mathrm{d}x-C_{ \mu,1}\int_{\mathbb{R}^{N}}\frac{\nabla u\cdot\nabla v}{|x|^{2}}\mathrm{d}x+C_ {\mu,2}\int_{\mathbb{R}^{N}}\frac{uv}{|x|^{4}}\mathrm{d}x,\]
and the norm \(\|u\|_{\mu}:=\langle u,u\rangle_{\mu}^{\frac{1}{2}}\). From the Hardy-Rellich inequality and (1.4), it is easy to verify that the space \(\mathcal{E}_{\mu}\) is well defined and equivalent to \(\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N})\).
We first address the uniqueness of positive radial solutions of the Euler-Lagrange equation
\[\Delta^{2}u+C_{\mu,1}\mathrm{div}\left(\frac{\nabla u}{|x|^{2}}\right)+C_{\mu, 2}\frac{u}{|x|^{4}}=u^{2^{**}-1},\quad u>0\quad\text{in}\quad\mathbb{R}^{N} \setminus\{0\},\quad u\in\mathcal{E}_{\mu}, \tag{1.10}\]
where \(N\geq 5\) and \(0<\mu<N-4\), \(C_{\mu,1}\) and \(C_{\mu,2}\) are constants depending on \(\mu\) given in (1.5).
**Theorem 1.1**.: _Suppose \(N\geq 5\) and \(0<\mu<N-4\). Then problem (1.10) admits a unique (up to scalings) positive radial solution of the form \(U_{\mu,\lambda}(x)=\lambda^{\frac{N-4}{2}}U_{\mu}(\lambda x)\) for \(\lambda>0\), where_
\[U_{\mu}(x)=K_{N,\mu}|x|^{-\frac{\mu}{2}}\left(1+|x|^{2(1-\frac{\mu}{N-4})} \right)^{-\frac{N-4}{2}}. \tag{1.11}\]
_Here \(K_{N,\mu}=\left[\left(1-\frac{\mu}{N-4}\right)^{4}(N-4)(N-2)N(N+2)\right]^{ \frac{N-4}{8}}\)._
We then consider the linearized problem related to the Euler-Lagrange equation (1.10) at the function \(U_{\mu}\), which leads to the following problem:
\[\Delta^{2}v+C_{\mu,1}\mathrm{div}\left(\frac{\nabla v}{|x|^{2}}\right)+C_{\mu, 2}\frac{v}{|x|^{4}}=(2^{**}-1)U_{\mu}^{2^{**}-2}v,\quad\text{in}\quad\mathbb{R} ^{N}\setminus\{0\},\quad v\in\mathcal{E}_{\mu}. \tag{1.12}\]
It is easy to verify that \(\frac{N-4}{2}U_{\mu}+x\cdot\nabla U_{\mu}\) (which equals \(\frac{\partial U_{\mu,\lambda}}{\partial\lambda}|_{\lambda=1}\)) solves the linear equation (1.12). We say \(U_{\mu}\) is non-degenerate if all the solutions of (1.12) result from the invariance (up to scalings) of (1.10). The non-degeneracy of solutions for (1.10) is a key ingredient in analyzing the blow-up phenomena of solutions to various elliptic equations on bounded or unbounded domains in \(\mathbb{R}^{N}\) or on Riemannian manifolds whose asymptotic behavior is encoded in (1.11); we refer to [5, 10] for examples. Therefore, it is quite natural to ask the following question:
_is solution \(U_{\mu}\) non-degenerate?_
We give an affirmative answer.
**Theorem 1.2**.: _Suppose \(N\geq 5\) and \(0<\mu<N-4\). Then the space of solutions for (1.12) has dimension \(1\) and is spanned by \((\frac{N-4}{2}U_{\mu}+x\cdot\nabla U_{\mu})\)._
A direct application of Theorem 1.2 is studying the stability of inequality (1.4). Now, let us state our main result.
**Theorem 1.3**.: _Suppose \(N\geq 5\) and \(0<\mu<N-4\). Then there exists constant \(c=c(N,\mu)>0\) such that for all \(u\in\mathcal{E}_{\mu}\), it holds that_
\[\|u\|_{\mu}^{2}-\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}|u|^{2^{**}} \mathrm{d}x\right)^{\frac{2}{2^{**}}}\geq c\inf_{v\in\mathcal{M}_{\mu}}\|u-v \|_{\mu}^{2},\]
_where \(\mathcal{M}_{\mu}:=\{cU_{\mu,\lambda}:c\in\mathbb{R},\lambda>0\}\) is the set of extremals for (1.4)._
**Remark 1.4**.: The key step of the proofs for Theorems 1.1 and 1.2 is the change of variable \(u(r)=r^{a}v(r^{b})\) with
\[a=-\frac{\mu}{2}\quad\text{and}\quad b=1-\frac{\mu}{N-4},\]
which requires very careful calculation. The proof of Theorem 1.3 is standard, but surprisingly, a very useful tool due to Dan et al. [11] works well in our setting; namely, under the change of variable \(u(x)=|x|^{a}v(|x|^{b-1}x)\),
\[\int_{\mathbb{R}^{N}}|\Delta v|^{2}\mathrm{d}x-\mathcal{S}_{0}\left(\int_{ \mathbb{R}^{N}}|v|^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}}\leq\left(1- \frac{\mu}{N-4}\right)^{-3}\left[\|u\|_{\mu}^{2}-\mathcal{S}_{\mu}\left(\int_ {\mathbb{R}^{N}}|u|^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}}\right],\]
then we can directly use Lions' concentration-compactness principle to complete the proof.
The paper is organized as follows: In Section 2 we show that equation (1.10) has a unique (up to scalings) radial solution of the form (1.11) and give the proof of Theorem 1.1. Section 3 is devoted to proving the non-degeneracy of \(U_{\mu}\). Finally, in Section 4 we study the stability of Rellich-Sobolev inequality (1.4) by using spectral analysis combined with a compactness theory, and give the proof of Theorem 1.3.
## 2. Uniqueness of radial solutions
In this section, we will show that problem (1.10) admits a unique (up to scalings) radial solution of the form (1.11).
**Proof of Theorem 1.1.** Let \(u\in\mathcal{E}_{\mu}\) be a radial solution of (1.10), set \(r=|x|\), then (1.10) is equivalent to
\[u^{(4)}+\frac{2(N-1)}{r}u^{\prime\prime\prime}+\frac{(N-1)(N-3)+C_{\mu,1}}{r^ {2}}u^{\prime\prime}-\frac{(N-3)(N-1-C_{\mu,1})}{r^{3}}u^{\prime}+\frac{C_{ \mu,2}}{r^{4}}u=u^{\frac{N+4}{N-4}}, \tag{2.1}\]
where \(C_{\mu,1}\) and \(C_{\mu,2}\) are constants depending on \(\mu\) given as in (1.5). Making the change
\[u(r)=r^{a}v(r^{b}), \tag{2.2}\]
with
\[a=-\frac{\mu}{2}\quad\text{and}\quad b=1-\frac{\mu}{N-4},\]
then by direct calculation, we obtain
\[u^{\prime}(r)= ar^{a-1}v+br^{a+b-1}v^{\prime},\] \[u^{\prime\prime}(r)= a(a-1)r^{a-2}v+b(2a+b-1)r^{a+b-2}v^{\prime}+b^{2}r^{a+2b-2}v^{ \prime\prime},\]
\[u^{\prime\prime\prime}(r)= a(a-1)(a-2)r^{a-3}v+b\left[a(a-1)+(2a+b-1)(a+b-2)\right]r^{a+b-3}v^{\prime}\] \[+b^{2}\left[(2a+b-1)+(a+2b-2)\right]r^{a+2b-3}v^{\prime\prime}+b^{3 }r^{a+3b-3}v^{\prime\prime\prime},\] \[u^{(4)}(r)= a(a-1)(a-2)(a-3)r^{a-4}v\] \[+b\left\{\left[a(a-1)+(2a+b-1)(a+b-2)\right](a+b-3)+a(a-1)(a-2) \right\}r^{a+b-4}v^{\prime}\] \[+b^{2}\left\{\left[a(a-1)+(2a+b-1)(a+b-2)\right]+(3a+3b-3)(a+2b-3 )\right\}r^{a+2b-4}v^{\prime\prime}\] \[+b^{3}\left[(3a+3b-3)+(a+3b-3)\right]r^{a+3b-4}v^{\prime\prime \prime}+b^{4}r^{a+4b-4}v^{(4)}.\]
Then from (2.1), setting \(s=r^{b}\), we deduce that
\[v^{(4)}(s)+\frac{A}{s}v^{\prime\prime\prime}(s)+\frac{B}{s^{2}}v^{\prime \prime}(s)-\frac{C}{s^{3}}v^{\prime}(s)+\frac{D}{s^{4}}v(s)=b^{-4}v^{\frac{N+ 4}{N-4}}, \tag{2.3}\]
where
\[A:= b^{-1}\Big{[}\left[(3a+3b-3)+(a+3b-3)\right]+2(N-1)\Big{]},\] \[B:= b^{-2}\Big{[}\left\{\left[a(a-1)+(2a+b-1)(a+b-2)\right]+(3a+3b-3)(a +2b-3)\right\}\] \[+2(N-1)(3a+3b-3)+\left[(N-1)(N-3)+C_{\mu,1}\right]\Big{]},\] \[C:= b^{-3}\Big{[}\left\{\left[a(a-1)+(2a+b-1)(a+b-2)\right](a+b-3)+a(a -1)(a-2)\right\}\] \[+2(N-1)\left[a(a-1)+(2a+b-1)(a+b-2)\right]\] \[+[(N-1)(N-3)+C_{\mu,1}](2a+b-1)-(N-3)(N-1-C_{\mu,1})\Big{]},\] \[D:= b^{-4}\Big{[}a(a-1)(a-2)(a-3)+2(N-1)a(a-1)(a-2)\] \[+[(N-1)(N-3)+C_{\mu,1}]a(a-1)-(N-3)(N-1-C_{\mu,1})a+C_{\mu,2}\Big{]}.\]
By a direct and careful calculation, it holds that
\[A=2(N-1),\quad B=C=(N-1)(N-3),\quad D=0, \tag{2.4}\]
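To illustrate the computation, consider the coefficient \(A\): with \(a=-\frac{\mu}{2}\) and \(b=1-\frac{\mu}{N-4}\), one checks directly that

\[(3a+3b-3)+(a+3b-3)+2(N-1)=4a+6b+2N-8=2(N-1)-2\mu-\frac{6\mu}{N-4}=2(N-1)\left(1-\frac{\mu}{N-4}\right)=2(N-1)\,b,\]

where we used \(2\mu+\frac{6\mu}{N-4}=\frac{(2N-2)\mu}{N-4}\), so that \(A=b^{-1}\cdot 2(N-1)b=2(N-1)\). The remaining identities in (2.4) follow from similar, longer computations.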
Consequently, (2.3) is equivalent to
\[v^{(4)}(s)+\frac{2(N-1)}{s}v^{\prime\prime\prime}(s)+\frac{(N-1)(N-3)}{s^{2}}v ^{\prime\prime}(s)-\frac{(N-1)(N-3)}{s^{3}}v^{\prime}(s)=b^{-4}v^{\frac{N+4}{ N-4}}. \tag{2.5}\]
From [14, Lemma 2.2] (see also [23, Theorem 1.3]), we know that equation (2.5) has a unique (up to scalings) positive solution of the form
\[v(s)=\frac{K_{N,\mu}\lambda^{\frac{N-4}{2}}}{(1+\lambda^{2}s^{2})^{\frac{N-4}{ 2}}},\quad\text{where}\quad K_{N,\mu}=\left[\left(1-\frac{\mu}{N-4}\right)^{4}( N-4)(N-2)N(N+2)\right]^{\frac{N-4}{8}},\]
for some \(\lambda>0\). Therefore, \(u\) must be the form \(U_{\mu,\lambda}:=\lambda^{\frac{N-4}{2}}U_{\mu}(\lambda x)\), where
\[U_{\mu}(x)=K_{N,\mu}|x|^{-\frac{\mu}{2}}\left(1+|x|^{2(1-\frac{\mu}{N-4})} \right)^{-\frac{N-4}{2}}.\]
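As a consistency check, substituting \(v\) back through the change of variables (2.2) with \(\lambda=1\) indeed recovers (1.11):

\[u(r)=r^{a}v(r^{b})=K_{N,\mu}\,r^{-\frac{\mu}{2}}\left(1+r^{2(1-\frac{\mu}{N-4})}\right)^{-\frac{N-4}{2}}=U_{\mu}(x),\qquad r=|x|,\]

and the case of general \(\lambda>0\) follows after rescaling.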
The proof of Theorem 1.1 is now completed.
## 3. **Non-degenerate result**
In this section, we will prove the non-degeneracy of \(U_{\mu}\), that is, we classify all solutions of linearized problem (1.12).
Firstly, let us consider the eigenvalue problem
\[\Delta^{2}v+C_{\mu,1}\mathrm{div}\left(\frac{\nabla v}{|x|^{2}}\right)+C_{\mu,2 }\frac{v}{|x|^{4}}=\nu U_{\mu}^{2^{**}-2}v,\quad\text{in}\quad\mathbb{R}^{N} \setminus\{0\},\quad v\in\mathcal{E}_{\mu}. \tag{3.1}\]
Following the work of Servadei and Valdinoci [28], we can define the eigenvalues of problem (3.1) as follows.
**Definition 3.1**.: _The first eigenvalue of problem (3.1) can be defined as_
\[\nu_{1}:=\inf_{v\in\mathcal{E}_{\mu}\setminus\{0\}}\frac{\|v\|_{\mu}^{2}}{ \int_{\mathbb{R}^{N}}U_{\mu}^{2^{**}-2}v^{2}\mathrm{d}x}. \tag{3.2}\]
_Here \(\|v\|_{\mu}^{2}=\int_{\mathbb{R}^{N}}|\Delta v|^{2}\mathrm{d}x-C_{\mu,1}\int _{\mathbb{R}^{N}}\frac{|\nabla v|^{2}}{|x|^{2}}\mathrm{d}x+C_{\mu,2}\int_{ \mathbb{R}^{N}}\frac{v^{2}}{|x|^{4}}\mathrm{d}x\). Moreover, for any \(k\in\mathbb{N}^{+}\) the eigenvalues can be characterized as follows:_
\[\nu_{k+1}:=\inf_{v\in\mathbb{P}_{k+1}\setminus\{0\}}\frac{\|v\|_{\mu}^{2}}{ \int_{\mathbb{R}^{N}}U_{\mu}^{2^{**}-2}v^{2}\mathrm{d}x}, \tag{3.3}\]
_where_
\[\mathbb{P}_{k+1}:=\left\{v\in\mathcal{E}_{\mu}:\langle v,e_{i,j}\rangle_{\mu} =0,\quad\text{for all}\quad i=1,\dots,k,\ j=1,\dots,h_{i}\right\},\]
_and \(e_{i,j}\) are the corresponding eigenfunctions to \(\nu_{i}\) with \(h_{i}\) multiplicity._
**Theorem 3.2**.: _Suppose \(N\geq 5\) and \(0<\mu<N-4\). Let \(\nu_{i}\), \(i=1,2,\dots,\) be the eigenvalues of (3.1) in increasing order as in Definition 3.1. Then \(\nu_{1}=1\) is simple with eigenfunction \(U_{\mu}\) and \(\nu_{2}=2^{**}-1\) with the corresponding one-dimensional eigenfunction space spanned by \((\frac{N-4}{2}U_{\mu}+x\cdot\nabla U_{\mu})\). Furthermore, \(\nu_{3}>\nu_{2}\)._
Proof.: We follow arguments similar to those in [4]. Choosing \(v=U_{\mu}\) in (3.2), since \(U_{\mu}\) is a solution of equation (1.10) we have

\[\nu_{1}\leq \frac{\int_{\mathbb{R}^{N}}|\Delta U_{\mu}|^{2}\mathrm{d}x-C_{\mu,1}\int_{\mathbb{R}^{N}}\frac{|\nabla U_{\mu}|^{2}}{|x|^{2}}\mathrm{d}x+C_{\mu,2}\int_{\mathbb{R}^{N}}\frac{U_{\mu}^{2}}{|x|^{4}}\mathrm{d}x}{\int_{\mathbb{R}^{N}}U_{\mu}^{2^{**}}\mathrm{d}x}=1.\]
Moreover, by using the Hölder inequality we deduce that for all \(v\in\mathcal{E}_{\mu}\setminus\{0\}\),
\[\int_{\mathbb{R}^{N}}U_{\mu}^{2^{**}-2}v^{2}\mathrm{d}x\leq\|U_{\mu}\|_{L^{2^{ **}}(\mathbb{R}^{N})}^{2^{**}-2}\|v\|_{L^{2^{**}}(\mathbb{R}^{N})}^{2}= \mathcal{S}_{\mu}\|v\|_{L^{2^{**}}(\mathbb{R}^{N})}^{2}\leq\|v\|_{\mu}^{2}, \tag{3.4}\]
which implies \(\nu_{1}\geq 1\). Then we have \(\nu_{1}=1\); furthermore, since equality in (3.4) holds if and only if \(v=\zeta U_{\mu}\) with \(\zeta\in\mathbb{R}\), the eigenfunctions corresponding to \(\nu_{1}=1\) are \(\zeta U_{\mu}\) with \(\zeta\in\mathbb{R}\setminus\{0\}\).
Note that \(U_{\mu}\) minimizes the functional
\[v\mapsto\Phi(v)=\frac{1}{2}\|v\|_{\mu}^{2}-\frac{1}{2^{**}}\|v\|_{L^{2^{**}}( \mathbb{R}^{N})}^{2^{**}}, \tag{3.5}\]
on the Nehari manifold
\[\mathcal{N}:=\left\{v\in\mathcal{E}_{\mu}\backslash\{0\}:\|v\|_{\mu}^{2}=\|v \|_{L^{2^{**}}(\mathbb{R}^{N})}^{2^{**}}\right\}.\]
Indeed, for \(v\in\mathcal{N}\) we have by (1.4) that
\[\Phi(v)= \left(\frac{1}{2}-\frac{1}{2^{**}}\right)\|v\|_{L^{2^{**}}(\mathbb{R }^{N})}^{2^{**}}=\left(\frac{1}{2}-\frac{1}{2^{**}}\right)\left(\frac{\|v\|_{ \mu}}{\|v\|_{L^{2^{**}}(\mathbb{R}^{N})}}\right)^{\frac{2\cdot 2^{**}}{2^{**}-2}}\] \[\geq \left(\frac{1}{2}-\frac{1}{2^{**}}\right)\mathcal{S}_{\mu}^{\frac {2^{**}}{2^{**}-2}}=\left(\frac{1}{2}-\frac{1}{2^{**}}\right)\left(\frac{\|U_{ \mu}\|_{\mu}}{\|U_{\mu}\|_{L^{2^{**}}(\mathbb{R}^{N})}}\right)^{\frac{2\cdot 2^{ **}}{2^{**}-2}}=\Phi(U_{\mu}).\]
As a consequence, the second derivative \(\Phi^{\prime\prime}(U_{\mu})\) given by
\[(\phi,\varphi)\mapsto\langle\phi,\varphi\rangle_{\mu}-(2^{**}-1)\int_{\mathbb{ R}^{N}}U_{\mu}^{2^{**}-2}\phi\varphi\mathrm{d}x\]
is a nonnegative quadratic form when restricted to the tangent space \(T_{U_{\mu}}\mathcal{N}\); hence we have
\[\|u\|_{\mu}^{2}\geq(2^{**}-1)\int_{\mathbb{R}^{N}}U_{\mu}^{2^{**}-2}u^{2} \mathrm{d}x,\quad\text{for all}\quad u\in T_{U_{\mu}}\mathcal{N}.\]
Since \(T_{U_{\mu}}\mathcal{N}\) has codimension one, we infer that \(\nu_{2}\geq 2^{**}-1\). Moreover, since \((\frac{N-4}{2}U_{\mu}+x\cdot\nabla U_{\mu})\) is a solution of (3.1) with \(\nu=2^{**}-1\), which indicates \(\nu_{2}\leq 2^{**}-1\), we conclude that \(\nu_{2}=2^{**}-1\).
Next, we will show that \((\frac{N-4}{2}U_{\mu}+x\cdot\nabla U_{\mu})\) is the only (up to multiplications) eigenfunction of \(\nu_{2}=2^{**}-1\).
The eigenvalue problem (3.1) with \(\nu=2^{**}-1\) is equivalent to
\[\Delta^{2}v+C_{\mu,1}\left[|x|^{-2}\Delta v-2|x|^{-4}x\cdot\nabla v\right]+ \frac{C_{\mu,2}}{|x|^{4}}v=\frac{(2^{**}-1)K_{N,\mu}^{2^{**}-2}|x|^{-\frac{4 \mu}{N-4}}}{\left(1+|x|^{2(1-\frac{\mu}{N-4})}\right)^{4}}v, \tag{3.6}\]
in \(\mathbb{R}^{N}\setminus\{0\}\), \(v\in\mathcal{E}_{\mu}\). Firstly, we write \(v(x)=v(r,\theta)\) and decompose \(v\) as follows:
\[v(r,\theta)=\sum_{k=0}^{\infty}\sum_{i=1}^{m_{k}}\phi_{k,i}(r)\Psi_{k,i}( \theta), \tag{3.7}\]
where \(r=|x|\), \(\theta=\frac{x}{|x|}\in\mathbb{S}^{N-1}\), and
\[\phi_{k,i}(r)=\int_{\mathbb{S}^{N-1}}v(r,\theta)\Psi_{k,i}(\theta)d\theta.\]
Here \(\Psi_{k,i}(\theta)\) denotes the \(k\)-th spherical harmonic, i.e., it satisfies
\[-\Delta_{\mathbb{S}^{N-1}}\Psi_{k,i}=\lambda_{k}\Psi_{k,i},\]
where \(\Delta_{\mathbb{S}^{N-1}}\) is the Laplace-Beltrami operator on \(\mathbb{S}^{N-1}\) with the standard metric and \(\lambda_{k}\) is the \(k\)-th eigenvalue of \(-\Delta_{\mathbb{S}^{N-1}}\). It is well known that \(\lambda_{k}=k(N-2+k)\), \(k=0,1,2,\ldots\) whose multiplicity is
\[m_{k}:=\frac{(N+2k-2)(N+k-3)!}{(N-2)!k!}\]
and that
\[\mathrm{Ker}(\Delta_{\mathbb{S}^{N-1}}+\lambda_{k})=\mathbb{Y}_{k}(\mathbb{R} ^{N})|_{\mathbb{S}^{N-1}},\]
where \(\mathbb{Y}_{k}(\mathbb{R}^{N})\) is the space of all homogeneous harmonic polynomials of degree \(k\) in \(\mathbb{R}^{N}\). It is known that
\[\Delta(\phi_{k,i}(r)\Psi_{k,i}(\theta))= \Psi_{k,i}\left(\phi_{k,i}^{\prime\prime}+\frac{N-1}{r}\phi_{k,i}^ {\prime}\right)+\frac{\phi_{k,i}}{r^{2}}\Delta_{\mathbb{S}^{N-1}}\Psi_{k,i}\] \[= \Psi_{k,i}\left(\phi_{k,i}^{\prime\prime}+\frac{N-1}{r}\phi_{k,i} ^{\prime}-\frac{\lambda_{k}}{r^{2}}\phi_{k,i}\right),\]
and
\[x\cdot\nabla(\phi_{k,i}(r)\Psi_{k,i}(\theta))= \sum_{i=1}^{N}x_{i}\frac{\partial(\phi_{k,i}(r)\Psi_{k,i}(\theta ))}{\partial x_{i}}=\phi_{k,i}^{\prime}r\Psi_{k,i}+\phi_{k,i}\frac{\partial \Psi_{k,i}}{\partial\theta_{l}}\sum_{i=1}^{N}\frac{\partial\theta_{l}}{ \partial x_{i}}x_{i}\] \[= \phi_{k,i}^{\prime}r\Psi_{k,i},\]
because it holds true that
\[\sum_{i=1}^{N}\frac{\partial\theta_{l}}{\partial x_{i}}x_{i}=0,\quad\text{for all}\quad l=1,\dots,N-1.\]
Therefore, by standard regularity theory, the function \(v\) is a solution of (3.6) if and only if for all \(k\in\mathbb{N}\), \(i=1,\dots,m_{k}\), \(\phi_{k,i}\in\mathcal{C}_{k}\) is a classical solution of
\[\phi_{k,i}^{(4)}+\frac{2(N-1)}{r}\phi_{k,i}^{\prime\prime\prime}+\frac{(N-1)(N -3)+C_{\mu,1}}{r^{2}}\phi_{k,i}^{\prime\prime}-\frac{(N-3)(N-1-C_{\mu,1})}{r^ {3}}\phi_{k,i}^{\prime}+\frac{C_{\mu,2}}{r^{4}}\phi_{k,i}\]
\[-\frac{\lambda_{k}}{r^{2}}\left[2\phi_{k,i}^{\prime\prime}+\frac{2(N-3)}{r} \phi_{k,i}^{\prime}-\frac{2(N-4)+\lambda_{k}-C_{\mu,1}}{r^{2}}\phi_{k,i}\right] =\frac{(2^{**}-1)K_{N,\mu}^{2^{**}-2}r^{-\frac{4\mu}{N-4}}}{\left[1+r^{2(1- \frac{\mu}{N-4})}\right]^{4}}\phi_{k,i}, \tag{3.8}\]
in \(r\in(0,\infty)\), where
\[\mathcal{C}_{k}:=\left\{\omega\in C^{2}([0,\infty))|\int_{0}^{\infty}\left\{ \left(\Delta_{r}\omega-\frac{\lambda_{k}}{r^{2}}\omega\right)^{2}-C_{\mu,1} \frac{|\omega^{\prime}|^{2}}{r^{2}}+C_{\mu,2}\frac{|\omega|^{2}}{r^{4}}\right\} r^{N-1}\mathrm{d}r<\infty\right\}.\]
Here \(\Delta_{r}=\frac{\partial^{2}}{\partial r^{2}}+\frac{N-1}{r}\frac{\partial}{\partial r}\). As in Section 2, we make the change \(s=r^{b}\) where \(b=1-\frac{\mu}{N-4}\) and let
\[\phi_{k,i}(r)=r^{-\frac{\mu}{2}}X_{k,i}(s), \tag{3.9}\]
that transforms (3.8) into the following
\[X_{k,i}^{(4)}+\frac{2(N-1)}{s}X_{k,i}^{\prime\prime\prime}+\frac {(N-1)(N-3)}{s^{2}}X_{k,i}^{\prime\prime}-\frac{(N-1)(N-3)}{s^{3}}X_{k,i}^{\prime}\] \[-\frac{\lambda_{k}b^{-2}}{s^{2}}\left[2X_{k,i}^{\prime\prime}+ \frac{2(N-3)}{s}X_{k,i}^{\prime}-\frac{2(N-4)+\lambda_{k}b^{-2}}{s^{2}}X_{k,i}\right] \tag{3.10}\] \[=\frac{(N+4)(N-2)N(N+2)}{(1+s^{2})^{4}}X_{k,i}+\frac{4b^{-2}(b^{- 2}-1)\lambda_{k}}{s^{4}}X_{k,i},\]
in \(s\in(0,\infty)\), \(X_{k,i}\in\widetilde{\mathcal{C}}_{k}:=\left\{\omega\in C^{2}([0,\infty))|\int_{0 }^{\infty}\left(\Delta_{s}\omega-\frac{\lambda_{k}}{s^{2}}\omega\right)^{2}s^{ N-1}\mathrm{d}s<\infty\right\}\). Here we have used the fact
\[b^{-4}(2^{**}-1)K_{N,\mu}^{2^{**}-2}=[(N-4)(N-2)N(N+2)]\left(\frac{2N}{N-4}-1 \right)=(N+4)(N-2)N(N+2).\]
It is easy to verify that (3.10) is equivalent to the equation
\[\left(\Delta_{s}-\frac{\lambda_{k}}{s^{2}}\right)^{2}X_{k,i}= -\frac{(b^{-2}-1)\lambda_{k}[2(N-4)+(1+b^{2})\lambda_{k}-4b^{-2} ]}{s^{4}}X_{k,i}\] \[+\frac{2(b^{-2}-1)\lambda_{k}}{s^{2}}\left(X_{k,i}^{\prime\prime }+\frac{N-3}{s}X_{k,i}^{\prime}\right) \tag{3.11}\] \[+(2^{**}-1)\Gamma_{N}(1+s^{2})^{-4}X_{k,i},\]
in \(s\in(0,\infty)\), where
\[\Gamma_{N}:=(N-4)(N-2)N(N+2). \tag{3.12}\]
From [4], we know that when \(k=0\), (3.11) admits only one solution, \((\frac{N-4}{2}U_{\mu}+x\cdot\nabla U_{\mu})\), up to multiplications. We claim that for all \(k\geq 1\), (3.11) admits no nontrivial solutions, which implies that \((\frac{N-4}{2}U_{\mu}+x\cdot\nabla U_{\mu})\) is the only (up to multiplications) eigenfunction of \(\nu_{2}=2^{**}-1\). We now prove this claim. One easily checks the operator identity
\[\Delta_{s}-\frac{\lambda_{k}}{s^{2}}=s^{k}\left[\frac{\partial^{2}}{\partial s ^{2}}+\frac{N+2k-1}{s}\frac{\partial}{\partial s}\right]s^{-k}.\]
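This identity can be checked directly: writing \(X=s^{k}w\) and using \(\lambda_{k}=k(N-2+k)\),

\[\left(\Delta_{s}-\frac{\lambda_{k}}{s^{2}}\right)(s^{k}w)=s^{k}w^{\prime\prime}+(N+2k-1)s^{k-1}w^{\prime}+\big[k(k-1)+k(N-1)-\lambda_{k}\big]s^{k-2}w=s^{k}\left(w^{\prime\prime}+\frac{N+2k-1}{s}w^{\prime}\right),\]

since \(k(k-1)+k(N-1)=k(N+k-2)=\lambda_{k}\).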
Therefore equation (3.11) can be rewritten as
\[\left(\frac{\partial^{2}}{\partial s^{2}}+\frac{N+2k-1}{s}\frac{ \partial}{\partial s}\right)^{2}Y_{k,i}= (b^{-2}-1)\lambda_{k}[2k(N-4+k)-2(N-4)-(1+b^{-2})\lambda_{k}+4b ^{-2}]\frac{Y_{k,i}}{s^{4}}\] \[+2(b^{-2}-1)\lambda_{k}\left[\frac{Y_{k,i}^{\prime\prime}}{s^{2} }+(N-3+2k)\frac{Y_{k,i}^{\prime}}{s^{3}}\right] \tag{3.13}\] \[+(2^{**}-1)\Gamma_{N}(1+s^{2})^{-4}Y_{k,i}.\]
Here we defined \(Y_{k,i}\in C^{\infty}(0,\infty)\) by \(Y_{k,i}(s):=s^{-k}X_{k,i}\). Now we consider the functions \(Z_{k,i}:\mathbb{R}^{N+2k}\to\mathbb{R}\) defined by \(Z_{k,i}(y)=Y_{k,i}(|y|)\). So following the work of Bartsch et al. [4], we deduce that
\[Z_{k,i}\in\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N+2k}),\]
and \(Z_{k,i}\) is a weak solution of the equation
\[\Delta^{2}Z_{k,i}(y)= (b^{-2}-1)\lambda_{k}[2k(N-4+k)-2(N-4)-(1+b^{-2})\lambda_{k}+4b ^{-2}]\frac{Z_{k,i}(y)}{|y|^{4}}\] \[+2(b^{-2}-1)\lambda_{k}\left[\frac{Z_{k,i}^{\prime\prime}}{|y|^{ 2}}+(N-3+2k)\frac{Z_{k,i}^{\prime}}{|y|^{3}}\right] \tag{3.14}\] \[+(2^{**}-1)\Gamma_{N}(1+|y|^{2})^{-4}Z_{k,i}(y),\quad y\in\mathbb{ R}^{N+2k}.\]
Multiplying (3.14) by \(Z_{k,i}\) and integrating in \(\mathbb{R}^{N+2k}\), we have
\[\|Z_{k,i}\|^{2}_{\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N+2k})}= (2^{**}-1)\Gamma_{N}\int_{\mathbb{R}^{N+2k}}(1+|y|^{2})^{-4}|Z_{k,i }|^{2}\mathrm{d}y-2(b^{-2}-1)\lambda_{k}\int_{\mathbb{R}^{N+2k}}\frac{|\nabla Z_ {k,i}|^{2}}{|y|^{2}}\mathrm{d}y\] \[+(b^{-2}-1)\lambda_{k}[2k(N-4+k)-2(N-4)-(1+b^{-2})\lambda_{k}+4b^ {-2}]\] \[\quad\times\int_{\mathbb{R}^{N+2k}}\frac{|Z_{k,i}|^{2}}{|y|^{4}} \mathrm{d}y.\]
By the Hardy inequality,
\[\left(\frac{N+2k-4}{2}\right)^{2}\int_{\mathbb{R}^{N+2k}}\frac{|u|^{2}}{|y|^{ 4}}\mathrm{d}y\leq\int_{\mathbb{R}^{N+2k}}\frac{|\nabla u|^{2}}{|y|^{2}} \mathrm{d}y,\quad\text{for all}\quad u\in C_{0}^{\infty}(\mathbb{R}^{N+2k}),\]
we deduce that
\[\|Z_{k,i}\|^{2}_{\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N+2k})}\leq (2^{**}-1)\Gamma_{N}\int_{\mathbb{R}^{N+2k}}(1+|y|^{2})^{-4}|Z_{k,i }|^{2}\mathrm{d}y \tag{3.15}\] \[-\frac{1}{2}(b^{-2}-1)\lambda_{k}[N(N-4)+(2\lambda_{k}-8)b^{-2}+2 \lambda_{k}]\int_{\mathbb{R}^{N+2k}}\frac{|Z_{k,i}|^{2}}{|y|^{4}}\mathrm{d}y.\]
Note that
\[\|u\|^{2}_{\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N+2k})}\geq \Gamma_{N+2k}\int_{\mathbb{R}^{N+2k}}(1+|y|^{2})^{-4}|u(y)|^{2} \mathrm{d}y,\quad\text{for all}\quad u\in\mathcal{D}_{0}^{2,2}(\mathbb{R}^{N+2 k}), \tag{3.16}\]
see [4, (2.10)], then combining with (3.15) and (3.16) we deduce
\[[(2^{**}-1)\Gamma_{N}-\Gamma_{N+2k}]\int_{\mathbb{R}^{N+2k}}(1+|y |^{2})^{-4}|Z_{k,i}|^{2}\mathrm{d}y\geq\] \[\frac{1}{2}(b^{-2}-1)\lambda_{k}[N(N-4)+(2\lambda_{k}-8)b^{-2}+2 \lambda_{k}]\int_{\mathbb{R}^{N+2k}}\frac{|Z_{k,i}|^{2}}{|y|^{4}}\mathrm{d}y.\]
Since
\[(2^{**}-1)\Gamma_{N}\leq\Gamma_{N+2k},\quad\text{for all}\quad k\geq 1,\]
where \(\Gamma_{N}\) is defined in (3.12), then it holds that
\[(b^{-2}-1)\lambda_{k}[N(N-4)+(2\lambda_{k}-8)b^{-2}+2\lambda_{k}]\int_{ \mathbb{R}^{N+2k}}\frac{|Z_{k,i}|^{2}}{|y|^{4}}\mathrm{d}y\leq 0. \tag{3.17}\]
Note that
\[(b^{-2}-1)\lambda_{k}[N(N-4)+(2\lambda_{k}-8)b^{-2}+2\lambda_{k}]>0,\quad \text{for all}\quad k\geq 1,\]
due to \(0<b<1\), \(\lambda_{k}=k(N-2+k)\) and \(N\geq 5\). We then conclude from (3.17) that \(Z_{k,i}\equiv 0\) for all \(k\geq 1\), that is, (3.11) admits no nontrivial solutions for any \(k\geq 1\), which proves the claim.
From the definition of eigenvalues, we deduce \(\nu_{3}>\nu_{2}\). The proof is completed.
**Proof of Theorem 1.2.** The proof follows directly from Theorem 3.2.
## 4. **Stability of Rellich-Sobolev inequality**
In this section, we will prove the stability of Rellich-Sobolev inequality (1.4) and give the proof of Theorem 1.3, inspired by Bianchi and Egnell [1].
By a simple scaling argument, the spectral analysis result, Theorem 3.2, indicates
\[T_{U_{\mu,\lambda}}\mathcal{M}_{\mu}=\operatorname{Span}\left\{U_{\mu, \lambda},\ \frac{\partial U_{\mu,\lambda}}{\partial\lambda}\right\},\]
where \(\mathcal{M}_{\mu}:=\{cU_{\mu,\lambda}:c\in\mathbb{R},\lambda>0\}\) is the set of extremals for (1.4). This shows that for any \(u\) orthogonal to \(T_{U_{\mu,\lambda}}\mathcal{M}_{\mu}\),
\[\nu_{3}\int_{\mathbb{R}^{N}}U_{\mu,\lambda}^{2^{**}-2}u^{2}\mathrm{d}x\leq\|u \|_{\mu}^{2}, \tag{4.1}\]
where \(\nu_{3}>2^{**}-1\) is independent of \(\lambda\) and given as in Theorem 3.2. The main ingredient in the proof of Theorem 1.3 is contained in the lemma below, where the behavior near the extremal set \(\mathcal{M}_{\mu}\) is studied.
**Lemma 4.1**.: _Suppose \(N\geq 5\) and \(0<\mu<N-4\). Then for any sequence \(\{u_{n}\}\subset\mathcal{E}_{\mu}\backslash\mathcal{M}_{\mu}\) satisfying \(\inf\limits_{n}\|u_{n}\|_{\mu}>0\) and \(\inf\limits_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}\to 0\), it holds that_
\[\liminf_{n\to\infty}\frac{\|u_{n}\|_{\mu}^{2}-\mathcal{S}_{\mu}\left(\int_{ \mathbb{R}^{N}}|u_{n}|^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}}}{\inf \limits_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}^{2}}\geq 1-\frac{\nu_{2}}{ \nu_{3}}, \tag{4.2}\]
_where \(\nu_{2}=2^{**}-1<\nu_{3}\) are given as in Theorem 3.2._
Proof.: Let \(d_{n}:=\inf\limits_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}=\inf\limits_{c\in\mathbb{R},\lambda>0}\|u_{n}-cU_{\mu,\lambda}\|_{\mu}\to 0\). We first show that for each \(u_{n}\in\mathcal{E}_{\mu}\), there exist \(c_{n}\in\mathbb{R}\) and \(\lambda_{n}>0\) such that \(d_{n}=\|u_{n}-c_{n}U_{\mu,\lambda_{n}}\|_{\mu}\). In fact,
\[\begin{split}\|u_{n}-cU_{\mu,\lambda}\|_{\mu}^{2}=& \|u_{n}\|_{\mu}^{2}+c^{2}\|U_{\mu,\lambda}\|_{\mu}^{2}-2c\langle u _{n},U_{\mu,\lambda}\rangle_{\mu}\\ \geq&\|u_{n}\|_{\mu}^{2}+c^{2}\|U_{\mu}\|_{\mu}^{2}-2 |c|\|u_{n}\|_{\mu}\|U_{\mu}\|_{\mu}.\end{split} \tag{4.3}\]
Thus the minimizing sequence of \(d_{n}^{2}\), say \(\{c_{n,m},\lambda_{n,m}\}\), must satisfy \(|c_{n,m}|\leq C\) for some \(C\geq 1\) independent of \(m\) which means \(\{c_{n,m}\}\) is bounded. On the other hand,
\[\begin{split}\left|\int_{|\lambda x|\leq\rho}\Delta u_{n}\Delta U _{\mu,\lambda}\mathrm{d}x\right|\leq&\int_{|y|\leq\rho}|\Delta(u _{n})_{\frac{1}{\lambda}}(y)||\Delta U_{\mu}(y)|\mathrm{d}y\\ \leq&\|\Delta u_{n}\|_{L^{2}(\mathbb{R}^{N})}\left( \int_{|y|\leq\rho}|\Delta U_{\mu}|^{2}\mathrm{d}y\right)^{1/2}\\ =& o_{\rho}(1)\end{split}\]
as \(\rho\to 0\) which is uniform for \(\lambda>0\), where \((u_{n})_{\frac{1}{\lambda}}(y)=\lambda^{-\frac{N-4}{2}}u_{n}(\lambda^{-1}y)\), and
\[\begin{split}\left|\int_{|\lambda x|\geq\rho}\Delta u_{n}\Delta U _{\mu,\lambda}\mathrm{d}x\right|\leq&\|\Delta U_{\mu}\|_{L^{2}( \mathbb{R}^{N})}\left(\int_{|x|\geq\frac{\rho}{\lambda}}|\Delta u_{n}|^{2} \mathrm{d}x\right)^{1/2}=o_{\lambda}(1)\end{split}\]
as \(\lambda\to 0\) for any fixed \(\rho>0\). By taking \(\lambda\to 0\) and then \(\rho\to 0\), we obtain
\[\left|\int_{\mathbb{R}^{N}}\Delta u_{n}\Delta U_{\mu,\lambda}\mathrm{d}x\right| \to 0\quad\text{as}\quad\lambda\to 0.\]
Similarly, we deduce that
\[\left|\int_{\mathbb{R}^{N}}\frac{\nabla u_{n}\cdot\nabla U_{\mu,\lambda}}{|x|^{ 2}}\mathrm{d}x\right|\to 0\quad\text{and}\quad\left|\int_{\mathbb{R}^{N}} \frac{u_{n}U_{\mu,\lambda}}{|x|^{4}}\mathrm{d}x\right|\to 0\quad\text{as} \quad\lambda\to 0.\]
Therefore,
\[\begin{split}|\langle u_{n},U_{\mu,\lambda}\rangle_{\mu}|\leq& \left|\int_{\mathbb{R}^{N}}\Delta u_{n}\Delta U_{\mu,\lambda}\mathrm{d}x \right|+C_{\mu,1}\left|\int_{\mathbb{R}^{N}}\frac{\nabla u_{n}\cdot\nabla U_{ \mu,\lambda}}{|x|^{2}}\mathrm{d}x\right|+|C_{\mu,2}|\left|\int_{\mathbb{R}^{ N}}\frac{u_{n}U_{\mu,\lambda}}{|x|^{4}}\mathrm{d}x\right|\\ \to& 0,\end{split} \tag{4.4}\]
as \(\lambda\to 0\). Moreover, by the explicit from of \(U_{\mu,\lambda}\) we have
\[\left|\int_{|\lambda x|\leq R}\Delta u_{n}\Delta U_{\mu,\lambda}\mathrm{d}x\right|\leq \|\Delta U_{\mu}\|_{L^{2}(\mathbb{R}^{N})}\left(\int_{|x|\leq\frac{ R}{\lambda}}|\Delta u_{n}|^{2}\mathrm{d}x\right)^{1/2}=o_{\lambda}(1)\]
as \(\lambda\to+\infty\) for any fixed \(R>0\) and
\[\begin{split}\left|\int_{|\lambda x|\geq R}\Delta u_{n}\Delta U _{\mu,\lambda}\mathrm{d}x\right|\leq&\int_{|y|\geq R}|\Delta(u_{ n})_{\frac{1}{\lambda}}(y)||\Delta U_{\mu}(y)|\mathrm{d}y\\ \leq&\|\Delta u_{n}\|_{L^{2}(\mathbb{R}^{N})}\left( \int_{|y|\geq R}|\Delta U_{\mu}|^{2}\mathrm{d}y\right)^{1/2}=o_{R}(1)\end{split}\]
as \(R\to+\infty\) which is uniform for \(\lambda>0\). Thus, by taking first \(\lambda\to+\infty\) and then \(R\to+\infty\), we also obtain
\[\left|\int_{\mathbb{R}^{N}}\Delta u_{n}\Delta U_{\mu,\lambda}\mathrm{d}x \right|\to 0\quad\text{as}\quad\lambda\to+\infty.\]
Similarly, we deduce that
\[\left|\int_{\mathbb{R}^{N}}\frac{\nabla u_{n}\cdot\nabla U_{\mu,\lambda}}{|x|^ {2}}\mathrm{d}x\right|\to 0\quad\text{and}\quad\left|\int_{\mathbb{R}^{N}} \frac{u_{n}U_{\mu,\lambda}}{|x|^{4}}\mathrm{d}x\right|\to 0\quad\text{as} \quad\lambda\to+\infty.\]
Therefore,
\[|\langle u_{n},U_{\mu,\lambda}\rangle_{\mu}|\to 0\quad\text{as}\quad\lambda \to+\infty. \tag{4.5}\]
Combining (4.4) and (4.5), it follows from (4.3), \(d_{n}\to 0\) and \(\inf_{n}\|u_{n}\|_{\mu}>0\) that the minimizing sequence \(\{c_{n,m},\lambda_{n,m}\}\) must satisfy \(1/C\leq|\lambda_{n,m}|\leq C\) for some \(C\geq 1\) independent of \(m\), which means \(\{\lambda_{n,m}\}\) is also bounded. Thus for each \(u_{n}\in\mathcal{E}_{\mu}\), \(d_{n}^{2}\) is attained by some \(c_{n}\in\mathbb{R}\) and \(\lambda_{n}>0\).
Since \(\mathcal{M}_{\mu}\) is a two-dimensional manifold embedded in \(\mathcal{E}_{\mu}\), that is,
\[(c,\lambda)\in\mathbb{R}\times\mathbb{R}^{+}\to cU_{\mu,\lambda}\in\mathcal{E} _{\mu},\]
then from Theorem 3.2, under a suitable transformation, we deduce that the tangent space at \((c_{n},\lambda_{n})\) is given by
\[T_{c_{n}U_{\mu,\lambda_{n}}}\mathcal{M}_{\mu}=\mathrm{Span}\left\{U_{\mu, \lambda_{n}},\frac{\partial U_{\mu,\lambda}}{\partial\lambda}\Big{|}_{\lambda= \lambda_{n}}\right\},\]
and we must have that \((u_{n}-c_{n}U_{\mu,\lambda_{n}})\) is perpendicular to \(T_{c_{n}U_{\mu,\lambda_{n}}}\mathcal{M}_{\mu}\), in particular,
\[\langle U_{\mu,\lambda_{n}},u_{n}-c_{n}U_{\mu,\lambda_{n}}\rangle_{\mu}=\int_{ \mathbb{R}^{N}}U_{\mu,\lambda_{n}}^{2^{**}-1}(u_{n}-c_{n}U_{\mu,\lambda_{n}}) \mathrm{d}x=0.\]
Furthermore, same as in (4.1) we have
\[\nu_{3}\int_{\mathbb{R}^{N}}U_{\mu,\lambda_{n}}^{2^{**}-2}(u_{n}-c_{n}U_{\mu, \lambda_{n}})^{2}\mathrm{d}x\leq\|u_{n}-c_{n}U_{\mu,\lambda_{n}}\|_{\mu}^{2}. \tag{4.6}\]
Let \(u_{n}=c_{n}U_{\mu,\lambda_{n}}+d_{n}w_{n}\), then \(w_{n}\) is perpendicular to \(T_{c_{n}U_{\mu,\lambda_{n}}}\mathcal{M}_{\mu}\),
\[\|w_{n}\|_{\mu}=1\quad\text{and}\quad\|u_{n}\|_{\mu}^{2}=d_{n}^{2}+c_{n}^{2}\| U_{\mu}\|_{\mu}^{2},\]
in particular,
\[\langle U_{\mu,\lambda_{n}},w_{n}\rangle_{\mu}=\int_{\mathbb{R}^{N}}U_{\mu, \lambda_{n}}^{2^{**}-1}w_{n}\mathrm{d}x=0.\]
Then we can rewrite (4.6) as follows:
\[\int_{\mathbb{R}^{N}}U_{\mu,\lambda_{n}}^{2^{**}-2}w_{n}^{2}\mathrm{d}x\leq \frac{1}{\nu_{3}}. \tag{4.7}\]
By using Taylor's expansion, we deduce
\[\int_{\mathbb{R}^{N}}|u_{n}|^{2^{**}}\mathrm{d}x= \int_{\mathbb{R}^{N}}|c_{n}U_{\mu,\lambda_{n}}+d_{n}w_{n}|^{2^{**} }\mathrm{d}x\] \[= |c_{n}|^{2^{**}}\int_{\mathbb{R}^{N}}U_{\mu,\lambda_{n}}^{2^{**} }\mathrm{d}x+d_{n}2^{**}|c_{n}|^{2^{**}-1}\int_{\mathbb{R}^{N}}U_{\mu,\lambda_ {n}}^{2^{**}-1}w_{n}\mathrm{d}x\] \[+\frac{2^{**}(2^{**}-1)d_{n}^{2}|c_{n}|^{2^{**}-2}}{2}\int_{ \mathbb{R}^{N}}U_{\mu,\lambda_{n}}^{2^{**}-2}w_{n}^{2}\mathrm{d}x+o(d_{n}^{2}) \tag{4.8}\] \[= |c_{n}|^{2^{**}}\|U_{\mu}\|_{\mu}^{2}+\frac{2^{**}(2^{**}-1)d_{n} ^{2}|c_{n}|^{2^{**}-2}}{2}\int_{\mathbb{R}^{N}}U_{\mu,\lambda_{n}}^{2^{**}-2}w _{n}^{2}\mathrm{d}x+o(d_{n}^{2}).\]
Then combining with (4.7) and (4.8), by the concavity of \(t\mapsto t^{\frac{2}{2^{**}}}\), we obtain
\[\left(\int_{\mathbb{R}^{N}}|u_{n}|^{2^{**}}\mathrm{d}x\right)^{ \frac{2}{2^{**}}}\leq c_{n}^{2}\left(\|U_{\mu}\|_{\mu}^{2}+\frac{2^{**}(2^{**}-1)d_{n}^{2 }c_{n}^{-2}}{2\nu_{3}}+o(d_{n}^{2})\right)^{\frac{2}{2^{**}}}\] \[= c_{n}^{2}\left(\|U_{\mu}\|_{\mu}^{\frac{4}{2^{**}}}+\frac{2}{2^{ **}}\frac{2^{**}(2^{**}-1)d_{n}^{2}c_{n}^{-2}}{2\nu_{3}}\|U_{\mu}\|_{\mu}^{ \frac{4}{2^{**}}-2}+o(d_{n}^{2})\right) \tag{4.9}\] \[= c_{n}^{2}\|U_{\mu}\|_{\mu}^{\frac{4}{2^{**}}}+\frac{d_{n}^{2}(2^{ **}-1)}{\nu_{3}}\|U_{\mu}\|_{\mu}^{\frac{4}{2^{**}}-2}+o(d_{n}^{2}).\]
Therefore, for \(n\) sufficiently large,
\[\|u_{n}\|_{\mu}^{2}-\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}| u_{n}|^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}}\geq d_{n}^{2}+c_{n}^{2}\|U_{\mu}\|_{\mu}^{2}\] \[-\mathcal{S}_{\mu}\left[c_{n}^{2}\|U_{\mu}\|_{\mu}^{\frac{4}{2^{**} }}+\frac{d_{n}^{2}(2^{**}-1)}{\nu_{3}}\|U_{\mu}\|_{\mu}^{\frac{4}{2^{**}}-2}+o (d_{n}^{2})\right]\] \[= d_{n}^{2}\left(1-\frac{2^{**}-1}{\nu_{3}}\mathcal{S}_{\mu}\|U_{\mu} \|_{\mu}^{\frac{4}{2^{**}}-2}\right)\]
\[\begin{split}&+c_{n}^{2}\left(\|U_{\mu}\|_{\mu}^{2}-\mathcal{S}_{\mu}\|U _{\mu}\|_{\mu}^{\frac{4}{2^{*}}}\right)+o(d_{n}^{2})\\ =& d_{n}^{2}\left(1-\frac{\nu_{2}}{\nu_{3}}\right)+o (d_{n}^{2}),\end{split} \tag{4.10}\]
since \(\|U_{\mu}\|_{\mu}^{2}=\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}|U_{\mu}|^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}}\) and \(\|U_{\mu}\|_{\mu}^{2}=\int_{\mathbb{R}^{N}}|U_{\mu}|^{2^{**}}\mathrm{d}x\) together imply \(\mathcal{S}_{\mu}=\|U_{\mu}\|_{\mu}^{2-\frac{4}{2^{**}}}\), and hence \(\mathcal{S}_{\mu}\|U_{\mu}\|_{\mu}^{\frac{4}{2^{**}}-2}=1\); then (4.2) follows immediately.
Now, we are ready to prove our main result.
**Proof of Theorem 1.3.** We argue by contradiction. In fact, if the theorem is false then there exists a sequence \(\{u_{n}\}\subset\mathcal{E}_{\mu}\setminus\mathcal{M}_{\mu}\) satisfying \(\inf\limits_{n}\|u_{n}\|_{\mu}>0\) and \(\inf\limits_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}^{2}\to 0\), such that
\[\frac{\|u_{n}\|_{\mu}^{2}-\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}|u_{n}| ^{2^{**}}\mathrm{d}x\right)^{\frac{2}{2^{**}}}}{\inf\limits_{v\in\mathcal{M}_{ \mu}}\|u_{n}-v\|_{\mu}^{2}}\to 0,\quad\text{as}\quad n\to\infty.\]
By homogeneity, we can assume that \(\|u_{n}\|_{\mu}=1\), and after selecting a subsequence we can assume that \(\inf\limits_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}\to\xi\in[0,1]\) since \(\inf\limits_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}\leq\|u_{n}\|_{\mu}\). If \(\xi=0\), then we deduce a contradiction by Lemma 4.1.
The only remaining possibility is that \(\xi>0\), that is,
\[\inf\limits_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}\to\xi>0\quad\text{as} \quad n\to\infty,\]
then we must have
\[\|u_{n}\|_{\mu}^{2}-\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}|u_{n}|^{2^{** }}\mathrm{d}x\right)^{\frac{2}{2^{**}}}\to 0,\quad\|u_{n}\|_{\mu}=1. \tag{4.11}\]
As in Section 2, we make the change
\[u_{n}(x)=|x|^{-\frac{\mu}{2}}v_{n}(|x|^{-\frac{\mu}{N-4}}x), \tag{4.12}\]
then from [11, Lemmas 2.1, 2.2], we deduce that
\[\begin{split} 0\leq&\int_{\mathbb{R}^{N}}|\Delta v_{n}|^{2 }\mathrm{d}x-\mathcal{S}_{0}\left(\int_{\mathbb{R}^{N}}|v_{n}|^{2^{**}} \mathrm{d}x\right)^{\frac{2}{2^{**}}}\\ \leq&\left(1-\frac{\mu}{N-4}\right)^{-3}\left[\|u_{n }\|_{\mu}^{2}-\mathcal{S}_{\mu}\left(\int_{\mathbb{R}^{N}}|u_{n}|^{2^{**}} \mathrm{d}x\right)^{\frac{2}{2^{**}}}\right]\to 0.\end{split}\]
Note that (4.12) implies \(v_{n}\) is not invariant under translations; then by Lions' concentration-compactness principle (see [24, Theorem II.4]), there exists a sequence of positive numbers \(\{\tau_{n}\}\) such that
\[\tau_{n}^{\frac{N-4}{2}}v_{n}(\tau_{n}x)\to W\quad\text{in}\quad\mathcal{D}_{ 0}^{2,2}(\mathbb{R}^{N})\quad\text{as}\quad n\to\infty,\]
where \(W(x)=c(d+|x|^{2})^{-\frac{N-4}{2}}\) for some \(c\neq 0\) and \(d>0\), that is
\[\lambda_{n}^{\frac{N-4}{2}}u_{n}(\lambda_{n}x)\to U\quad\text{in}\quad \mathcal{E}_{\mu}\quad\text{as}\quad n\to\infty,\]
for some \(U\in\mathcal{M}_{\mu}\), where \(\lambda_{n}=\tau_{n}^{\frac{N-4}{N-4-\mu}}\), which implies
\[\inf_{v\in\mathcal{M}_{\mu}}\|u_{n}-v\|_{\mu}=\inf_{v\in\mathcal{M}_{\mu}}\| \lambda_{n}^{\frac{N-4}{2}}u_{n}(\lambda_{n}x)-v\|_{\mu}\to 0\quad\text{as} \quad n\to\infty,\]
this is a contradiction. Now the proof of Theorem 1.3 is completed.
**Acknowledgements**
The research has been supported by National Natural Science Foundation of China (No. 11971392).
|
2305.00175 | Clustering What Matters in Constrained Settings | Constrained clustering problems generalize classical clustering formulations,
e.g., $k$-median, $k$-means, by imposing additional constraints on the
feasibility of clustering. There has been significant recent progress in
obtaining approximation algorithms for these problems, both in the metric and
the Euclidean settings. However, the outlier version of these problems, where
the solution is allowed to leave out $m$ points from the clustering, is not
well understood. In this work, we give a general framework for reducing the
outlier version of a constrained $k$-median or $k$-means problem to the
corresponding outlier-free version with only $(1+\varepsilon)$-loss in the
approximation ratio. The reduction is obtained by mapping the original instance
of the problem to $f(k,m, \varepsilon)$ instances of the outlier-free version,
where $f(k, m, \varepsilon) = \left( \frac{k+m}{\varepsilon}\right)^{O(m)}$. As
specific applications, we get the following results:
- First FPT (in the parameters $k$ and $m$) $(1+\varepsilon)$-approximation
algorithm for the outlier version of capacitated $k$-median and $k$-means in
Euclidean spaces with hard capacities.
- First FPT (in the parameters $k$ and $m$) $(3+\varepsilon)$ and
$(9+\varepsilon)$ approximation algorithms for the outlier version of
capacitated $k$-median and $k$-means, respectively, in general metric spaces
with hard capacities.
- First FPT (in the parameters $k$ and $m$) $(2-\delta)$-approximation
algorithm for the outlier version of the $k$-median problem under the Ulam
metric. Our work generalizes the known results to a larger class of constrained
clustering problems. Further, our reduction works for arbitrary metric spaces
and so can extend clustering algorithms for outlier-free versions in both
Euclidean and arbitrary metric spaces. | Ragesh Jaiswal, Amit Kumar | 2023-04-29T05:25:04Z | http://arxiv.org/abs/2305.00175v1 | # Clustering What Matters in Constrained Settings
###### Abstract
Constrained clustering problems generalize classical clustering formulations, e.g., \(k\)-median, \(k\)-means, by imposing additional constraints on the feasibility of a clustering. There has been significant recent progress in obtaining approximation algorithms for these problems, both in the metric and the Euclidean settings. However, the outlier version of these problems, where the solution is allowed to leave out \(m\) points from the clustering, is not well understood. In this work, we give a general framework for reducing the outlier version of a constrained \(k\)-median or \(k\)-means problem to the corresponding outlier-free version with only \((1+\varepsilon)\)-loss in the approximation ratio. The reduction is obtained by mapping the original instance of the problem to \(f(k,m,\varepsilon)\) instances of the outlier-free version, where \(f(k,m,\varepsilon)=\big{(}\frac{k+m}{\varepsilon}\big{)}^{O(m)}\). As specific applications, we get the following results:
* First FPT (_in the parameters \(k\) and \(m\)_) \((1+\varepsilon)\)-approximation algorithm for the outlier version of capacitated \(k\)-median and \(k\)-means in Euclidean spaces with _hard_ capacities.
* First FPT (_in the parameters \(k\) and \(m\)_) \((3+\varepsilon)\) and \((9+\varepsilon)\) approximation algorithms for the outlier version of capacitated \(k\)-median and \(k\)-means, respectively, in general metric spaces with _hard_ capacities.
* First FPT (_in the parameters \(k\) and \(m\)_) \((2-\delta)\)-approximation algorithm for the outlier version of the \(k\)-median problem under the Ulam metric.
Our work generalizes the results of [1] and [1] to a larger class of constrained clustering problems. Further, our reduction works for arbitrary metric spaces and so can extend clustering algorithms for outlier-free versions in both Euclidean and arbitrary metric spaces.
## 1 Introduction
Center-based clustering problems such as \(k\)-median and \(k\)-means are important data processing tasks. Given a metric \(D\) on a set of \(n\) points \(\mathcal{X}\) and a parameter \(k\), the goal here is to partition the set of points into \(k\) _clusters_, say \(C_{1},\ldots,C_{k}\), and assign the points in each cluster to a corresponding _cluster center_, say \(c_{1},\ldots,c_{k}\) respectively, such that the objective \(\sum_{i=1}^{k}\sum_{x\in C_{i}}D(x,c_{i})^{z}\) is minimized. Here \(z\) is a parameter which is \(1\) for \(k\)-median and \(2\) for \(k\)-means. The _outlier_ version of these problems is specified by another parameter \(m\), where a solution is allowed to leave out up to \(m\) points from the clusters. Outlier versions capture settings where the input may contain a few highly erroneous data points. Both the outlier and the outlier-free versions have been well studied in the literature, with constant factor approximations known for both the \(k\)-means and the \(k\)-median problems [1, 2, 3, 4]. In addition, fixed-parameter tractable (FPT) \((1+\varepsilon)\)-approximation algorithms are known for these problems in the Euclidean setting [12, 13, 14]: the running time of such algorithms is of the form \(f(k,m,\varepsilon)\cdot poly(n,d)\), where \(f()\) is an exponential function of the parameters \(k,m,\varepsilon\) and \(d\) denotes the dimensionality of the points.
A more recent development in clustering problems has been the notion of _constrained clustering_. A constrained clustering problem specifies additional conditions on a feasible partitioning of the input points into \(k\) clusters. For example, the \(r\)-gathering problem requires that each cluster in a feasible partitioning must contain at least \(r\) data points. Similarly, the well-known _capacitated_ clustering problem specifies an upper bound on the size of each cluster. Constrained clustering formulations can also capture various types of _fairness_ constraints: each data point has a _label_ assigned to it, and we may require upper or lower bounds on the number (or fraction) of points with a certain label in each cluster. Table 1 gives a list of some of these problems. FPT (in the parameter \(k\)) constant factor approximation algorithms are known for a large class of these problems (see Table 2).
It is worth noting that constrained clustering problems are distinct from outlier clustering: the former restricts the set of feasible partitioning of input points, whereas the latter allows us to reduce the set of points that need to be partitioned into clusters. There has not been much progress on constrained clustering problems in the outlier setting (also see [10] for unbounded integrality gap for the natural LP relaxation for the outlier clustering versions). In this work, we bridge this lag between the outlier and the outlier-free versions of constrained clustering problems by giving an _almost approximation-preserving_ reduction from the former to the latter. As long as the parameters of interest (i.e., \(k,m\)) are small, the reduction works in polynomial time. Using our reduction, an FPT \(\alpha\)-approximation algorithm for the outlier-free version of a constrained clustering problem leads to an FPT \((\alpha+\varepsilon)\)-approximation algorithm for the outlier version of the same problem. For general metric spaces, this implies the first FPT constant-approximation for outlier versions of several constrained clustering problems; and similarly, we get new FPT \((1+\varepsilon)\)-approximation algorithms for several outlier constrained clustering problems -see Table 2 for the precise details.
This kind of FPT approximation preserving reduction in the context of Euclidean \(k\)-means was first given by [1] using a sampling-based approach. [1] extended the sampling ideas of [1] to general metric spaces but did not give an approximation-preserving reduction. [1] gave a reduction for general metric spaces using a coreset construction. In this work, we use the sampling-based ideas of [1] to obtain an approximation-preserving reduction from the outlier version to the outlier-free version with improved parameters over [1]. Moreover, our reduction works for most known constrained clustering settings as well.
### Preliminaries
We give a general definition of a constrained clustering problem. For a positive integer \(k\), we shall use \([k]\) to denote the set \(\{1,\ldots,k\}\). Let \((\mathcal{X},D)\) denote the metric space with distance function \(D\). For a point \(x\) and a subset \(S\) of points, we shall use \(D(x,S)\) to denote \(\min_{y\in S}D(x,y)\).
The set \(\mathcal{X}\) contains subsets \(F\) and \(X\): here \(X\) denotes the set of input points and \(F\) the set of points where a center can be located. An outlier constrained clustering problem is specified by the following parameters and functions:
* \(k\): the number of clusters.
* \(m\): the number of points which can be left out from the clusters.
* a function check: given a partitioning \(X_{0},X_{1},\ldots,X_{k}\) of \(X\) (here \(X_{0}\) is the set of outliers) and centers \(f_{1},\ldots,f_{k}\), each lying in the set \(F\), the function \(\mathsf{check}(X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})\) outputs \(1\) iff this is a feasible clustering. For example, in the \(r\)-gathering problem, \(\mathsf{check}(X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})\) outputs \(1\) iff \(|X_{i}|\geq r\) for each \(i\in[k]\). The check function depends only on the cardinality of the sets \(X_{1},\ldots,X_{k}\) and the locations \(f_{1},\ldots,f_{k}\). This already captures many of the constrained clustering problems. Our framework also applies to the more general labelled version (see details below).
* a cost function cost: given a partitioning \(X_{0},X_{1},\ldots,X_{k}\) of \(X\) and centers \(f_{1},\ldots,f_{k}\), \[\mathsf{cost}(X_{1},\ldots,X_{k},f_{1},\ldots,f_{k}):=\sum_{i\in[k]}\sum_{x \in X_{i}}D^{z}(x,f_{i}),\] where \(z\) is either \(1\) (the outlier constrained \(k\)-median problem) or \(2\) (the outlier constrained \(k\)-means problem).
Given an instance \(\mathcal{I}=(X,F,k,m,\mathsf{check},\mathsf{cost})\) of an outlier constrained clustering problem as above, the goal is to find a partitioning \(X_{0},X_{1},\ldots,X_{k}\) of \(X\) and centers \(f_{1},\ldots,f_{k}\in F\) such that \(|X_{0}|\leq m\),
\(\mathsf{check}(X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})\) is \(1\) and \(\mathsf{cost}(X_{0},X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})\) is minimized. The outlier-free constrained clustering problem is specified as above, except that the parameter \(m\) is \(0\). For sake of brevity, we leave out the parameter \(m\) and the set \(X_{0}\) while defining the instance \(\mathcal{I}\), and functions \(\mathsf{check}\) and \(\mathsf{cost}\).
We shall also consider a more general class of constrained clustering problems, where each input point is assigned a _label_. In other words, an instance \(\mathcal{I}\) of such a problem is specified by a tuple \((X,F,k,m,\sigma,\mathsf{check},\mathsf{cost})\), where \(\sigma:X\to L\) for a finite set \(L\). Note that the check function may depend on the function \(\sigma\). For example, \(\sigma\) could assign a label "red" or "blue" to each point in \(X\) and the check function would require that each cluster \(X_{i}\) should have an equal number of red and blue points. In addition to the locations \(f_{1},\ldots,f_{k}\), the
\(\mathsf{check}(X_{1},\ldots,X_{k},f_{1},\ldots,f_{k},\sigma)\) function also depends on \(|\sigma^{-1}(l)\cap X_{j}|\) for each \(l\in L,j\in[k]\), i.e., the number of points with a particular label in each of the clusters. Indirectly, this also implies that the \(\mathsf{check}\) function can impose conditions on the labels of the outliers points. For example, the colourful \(k\)-median problem discussed in [1] has the constraint that \(m_{i}\) clients from the label type \(i\) should be designated as outliers, given that every client has a unique label. Table 1 gives a description of some of these problems.
We shall use the approximate triangle inequality, which states that for \(z\in\{1,2\}\) and any three points \(x_{1},x_{2},x_{3}\in\mathcal{X}\),
\[D^{z}(x_{1},x_{3})\leq z\left(D^{z}(x_{1},x_{2})+D^{z}(x_{2},x_{3})\right). \tag{1}\]
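The case \(z=1\) of (1) is just the triangle inequality; for \(z=2\) it follows by squaring the triangle inequality and applying the elementary bound \((a+b)^{2}\leq 2(a^{2}+b^{2})\):

\[D^{2}(x_{1},x_{3})\leq\big(D(x_{1},x_{2})+D(x_{2},x_{3})\big)^{2}\leq 2\big(D^{2}(x_{1},x_{2})+D^{2}(x_{2},x_{3})\big).\]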
### Our results
Our main result reduces the outlier constrained clustering problem to the outlier-free version. In our reduction, we shall also use approximation algorithms for the (unconstrained) \(k\)-\(\mathsf{median}\) and \(k\)-\(\mathsf{means}\) problems. We assume we have a constant factor approximation algorithm for these problems1: let \(\mathcal{C}\) denote such an algorithm with running time \(T_{\mathcal{C}}(n)\) on an input of size \(n\). Note that \(\mathcal{C}\) would be an algorithm for the \(k\)-\(\mathsf{means}\) or the \(k\)-\(\mathsf{median}\) problem depending on whether \(z=1\) or \(2\) in the definition of the \(\mathsf{cost}\) function.
Footnote 1: Several such constant factor approximation algorithms exist [1, 13, 14].
Theorem 1.1 (Main Theorem): _Consider an instance \(\mathcal{I}=(X,F,k,m,\mathsf{check},\mathsf{cost})\) of an outlier constrained clustering problem. Let \(\mathcal{A}\) be an \(\alpha\)-approximation algorithm for the corresponding outlier-free constrained clustering problem; let \(T_{\mathcal{A}}(n)\) be the running time of \(\mathcal{A}\) on an input of size \(n\). Given a positive \(\varepsilon>0\), there is an \(\alpha(1+\varepsilon)\)-approximation algorithm for \(\mathcal{I}\) with running time \(T_{\mathcal{C}}(n)+q\cdot T_{\mathcal{A}}(n)+O\left(n\cdot(k+\frac{m^{z+1}\log m}{\varepsilon^{z}})\right)+O\left(qm^{2}(k+m)^{3}\right)\), where \(n\) is the size of \(\mathcal{I}\), \(q=f(k,m,\varepsilon)=\left(\frac{k+m}{\varepsilon}\right)^{O(m)}\), and \(z=1\) or \(2\) depending on the \(\mathsf{cost}\) function (i.e., \(z=1\) for the \(k\)-\(\mathsf{median}\) objective and \(z=2\) for the \(k\)-\(\mathsf{means}\) objective)._
The above theorem implies that as long as there is an FPT or polynomial-time approximation algorithm for the constrained, outlier-free \(k\)-\(\mathsf{median}\) or \(k\)-\(\mathsf{means}\) clustering problem, there is an FPT approximation algorithm (with almost the same approximation ratio) for the corresponding outlier version. We prove this result by creating \(q\) instances of the outlier-free version of \(\mathcal{I}\) and picking the best solution on these instances using the algorithm \(\mathcal{A}\). We also extend the above result to the labelled version:
Theorem 1.2 (Main Theorem: labelled version): _Consider an instance \(\mathcal{I}=(X,F,k,m,\sigma,\mathsf{check},\mathsf{cost})\) of an outlier constrained clustering problem with labels on input points. Let \(\mathcal{A}\) be an \(\alpha\)-approximation algorithm for the corresponding outlier-free constrained clustering problem; let \(T_{\mathcal{A}}(n)\) be the running time of \(\mathcal{A}\) on an input of size \(n\). Given a positive \(\varepsilon>0\), there is an \(\alpha(1+\varepsilon)\)-approximation algorithm for \(\mathcal{I}\) with running time \(T_{\mathcal{C}}(n)+q\cdot T_{\mathcal{A}}(n)+O\left(n\cdot(k+\frac{m^{z+1}\log m}{\varepsilon^{z}})\right)+O\left(q\ell m^{2}(k+m)^{3}\right)\), where \(n\) is the size of \(\mathcal{I}\), \(q=f(k,m,\varepsilon)=\left(\frac{(k+m)\ell}{\varepsilon}\right)^{O(m)}\) with \(\ell\) being the number of distinct labels, and \(z=1\) or \(2\) depending on the \(\mathsf{cost}\) function (i.e., \(z=1\) for the \(k\)-\(\mathsf{median}\) objective and \(z=2\) for the \(k\)-\(\mathsf{means}\) objective)._
The consequences of our results for specific constrained clustering problems are summarized in Table 2. We give the results of the related works [1, 2, 13] in the same table to highlight the contributions of this work. Our contributions can be divided into two main categories:
1. _Matching the best-known result_: This can be further divided into two categories: 1. _Matching results of [1]_: [1] gives an outlier-to-outlier-free reduction. We also give such a reduction, using a different technique with slightly better parameters. This means that we match all the results of [1], which include problems such as the classical \(k\)-median/means problems, the Matroid \(k\)-median problem, the colorful \(k\)-median problem, and \(k\)-median in certain special metrics. See rows 2-6 in Table 2.
Table 1: Descriptions of some constrained clustering problems.

* **Unconstrained \(k\)-median** (_constraint type: unconstrained_). _Input_: \((F,X,k)\). _Output_: \((X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})\). _Constraints_: none, i.e., \(\mathsf{check}(X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})\) always equals \(1\). _Objective_: minimise \(\sum_{i}\sum_{x\in X_{i}}D(x,f_{i})\). (This includes various versions corresponding to specific metrics, such as the Ulam metric on permutations, metric spaces with constant doubling dimension, etc.)
* **Fault-tolerant \(k\)-median** (_constraint type: unconstrained but labelled_). _Input_: \((F,X,k)\) and a number \(h(x)\leq k\) for every client \(x\in X\). _Output_: \((f_{1},\ldots,f_{k})\). _Objective_: minimise \(\sum_{x\in X}\sum_{j=1}^{h(x)}D(x,f_{\pi_{x}(j)})\), where \(\pi_{x}(j)\) is the index of the \(j\)-th nearest center to \(x\) in \((f_{1},\ldots,f_{k})\). (The value \(h(x)\) may be regarded as the label of the client \(x\), so the number of distinct labels is \(\ell\leq k\).)
* **Balanced \(k\)-median** (_constraint type: size_). _Input_: \((F,X,k)\) and integers \((r_{1},\ldots,r_{k})\), \((l_{1},\ldots,l_{k})\). _Output_: \((X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})\). _Constraints_: \(X_{i}\) should have at least \(r_{i}\) and at most \(l_{i}\) clients, i.e., \(\mathsf{check}(X_{1},\ldots,X_{k},f_{1},\ldots,f_{k})=1\) iff \(r_{i}\leq|X_{i}|\leq l_{i}\) for all \(i\). _Objective_: minimise \(\sum_{i}\sum_{x\in X_{i}}D(x,f_{i})\).
2. _Matching results of [11]_: [11] gives FPT approximation algorithms for certain constrained problems on which the coreset-based approach of [1] is not known to work. See the last row of Table 2. [11] gives algorithms for the outlier and outlier-free versions with the same approximation guarantee. Since the best outlier-free approximation is also from [11], our results currently only match the approximation guarantees of [11]. However, if there is an improvement in any of these problems, our results will immediately beat the known outlier results of [11].
2. _Best known results_: Since our results hold for a larger class of constrained problems than earlier works, there are certain problems for which our results give the best-known FPT approximation algorithm. The list includes capacitated \(k\)-median/\(k\)-means with hard capacities in general metric and Euclidean spaces. It also includes the \(k\)-median problem in the Ulam metric. A recent development in the Ulam \(k\)-median problem [1] has broken the \(2\)-approximation barrier. Our reduction allows us to take this development to the outlier setting as well. The outlier-free results from which our best results are derived using our reduction are given in Table 2 (see rows 7-9).
### Comparison with earlier work
As discussed earlier, the idea of a reduction from a outlier clustering problem to the corresponding outlier-free version in the context of the Euclidean \(k\)-means problem was suggested by [1] using a \(D^{2}\)-sampling based idea. [11] used the sampling ideas to design approximation algorithms for the outlier versions of various constrained clustering problems. However, the approximation guarantee obtained by [11] was limited to \((3+\varepsilon)\) for a large class of constrained \(k\)-median and \((9+\varepsilon)\) for the constrained \(k\)-means problems, and it was not clear how to extend these techniques to get improved guarantees. As a result, their techniques could not exploit the recent developments by [1] in the design of \((1+2/e+\varepsilon)\) and \((1+8/e+\varepsilon)\) FPT approximation algorithms for the classical outlier-free \(k\)-median and \(k\)-means problems respectively in general metric spaces. [1] gave an outlier-to-outlier-free reduction, making it possible to extend the above-mentioned FPT approximation guarantees for the outlier-free setting to the outlier setting.
The reduction of [1] is based on the coreset construction by [1] using uniform sampling. A coreset for a dataset is a weighted set of points such that the clustering cost of the coreset points with respect to any set of \(k\) centers is the same (within a \(1\pm\varepsilon\) factor) as that of the original set of points. The coreset construction in [1] starts with a set \(C\) of centers that gives a constant-factor approximation. They consider \(O(\log n)\) "rings" around these centers, uniformly sample points from each of these rings, and set the weight of the sampled points appropriately. The number of sampled points, and hence the size of the coreset, is \(\left(\frac{|C|\log n}{\varepsilon}\right)^{2}\). [1] showed that when starting with \((k+m)\) centers that give a constant approximation to the classical \((k+m)\)-median problem, the coreset obtained as above has the following additional property: for any set of \(k\) centers, the clustering cost of the original set of points excluding \(m\) outliers is the same (again, within a \(1\pm\varepsilon\) factor) as that of the coreset, again allowing for the exclusion of a subset of \(m\) points from it. This means that by trying out all size-\(m\) subsets of the coreset, we ensure that at least one subset acts as a good outlier set. Since the coreset size is \(\left(\frac{(k+m)\log n}{\varepsilon}\right)^{2}\), the number of outlier-free instances that we construct is \(\left(\frac{(k+m)\log n}{\varepsilon}\right)^{O(m)}\). Using \((\log n)^{O(m)}\leq\max\{m^{O(m)},n^{O(1)}\}\), this is of the form \(f(k,m,\varepsilon)\cdot n^{O(1)}\) for a suitable function \(f\). At this point, we note the first quantitative difference from our result. In our algorithm, we save the \((\log n)^{O(m)}\) factor, which also means that the number of instances does not depend on the problem size \(n\). Further, a coreset-based construction restricts the kind of problems it can be applied to. The coreset property that the cost of the original points equals the weighted cost of the coreset points holds when points are assigned to the closest center (_i.e., the entire weight of the coreset goes to the closest center_).2 This works for the classical unconstrained \(k\)-median and \(k\)-means problems (as well as the few other settings considered in [1]). However, for several constrained clustering problems, it may not hold that every point is assigned to the closest center. There have been some recent developments [1, 2] in designing coresets for constrained clustering settings. However, they have not been shown to apply to the outlier setting. Another recent work [14] designs coresets for the outlier setting, but like [1], it has limited scope and has not been shown to extend to most constrained settings. Our \(D^{z}\)-sampling-based
technique has the advantage that, instead of running the outlier-free algorithm on a coreset as in [1], it works directly with the dataset. That is, we run the outlier-free algorithm on the dataset after removing the outlier candidates. This also makes our results useful in weighted settings (e.g., see [1]) where the outlier-free algorithm is known to work only for unweighted datasets (note that a coreset is a weighted set).

Table 2: Summary of approximation guarantees for each problem: the best-known outlier-free guarantee alongside the outlier guarantees of [GJK20], [AISX23], and this work (e.g., \((1+\varepsilon)\) for Euclidean \(k\)-means, where \(F=\mathbb{R}^{d}\) and \(X\subset\mathbb{R}^{d}\)).
### Our Techniques
In this section, we give a high-level description of our algorithm. Let \(\mathcal{I}\) denote an instance of outlier constrained clustering on a set of points \(X\) and \(\mathcal{O}\) denote an optimal solution to \(\mathcal{I}\). The first observation is that the optimal cost of the outlier-free and unconstrained clustering with \(k+m\) centers on \(X\) is a lower bound on the cost of \(\mathcal{O}\) (Claim 1).1 Let \(C\) denote the set of these \((k+m)\) centers (we can use any constant factor approximation for the unconstrained version to find \(C\)). The intuition behind choosing \(C\) is that the centers in \(\mathcal{O}\) should be close to \(C\).
Footnote 1: This observation was used in both [1] and [1].
Now we divide the set of \(m\) outliers in \(\mathcal{O}\) into two subsets: those which are far from \(C\) and the remaining ones close to \(C\) ("near" outliers). Our first idea is to randomly sample a subset \(S\) of \(O(m\log m)\) points from \(X\) with sampling probability proportional to distance (or square of distance) from the set \(C\). This sampling ensures that \(S\) contains the far outliers with high probability (Claim 2). We can then cycle through all subsets of \(S\) to guess the exact subset of far outliers. Handling the near outliers is more challenging and forms the heart of the technical contribution of this paper.
We "assign" each near outlier to its closest point in \(C\) - let \(X^{\mathsf{opt}}_{N,j}\) be the set of outliers assigned to \(c_{j}\). By cycling over all choices, we can guess the cardinality \(t_{j}\) of each of the sets \(X^{\mathsf{opt}}_{N,j}\). We now set up a suitable minimum cost bipartite \(b\)-matching instance which assigns a set of \(t_{j}\) points to each center \(c_{j}\). Let \(\widehat{X}_{j}\) be the set of points assigned to \(c_{j}\). Our algorithm uses \(\cup_{j}\widehat{X}_{j}\) as the set of near outliers. In the analysis, we need to argue that there is a way of matching the points in \(X^{\mathsf{opt}}_{N,j}\) to \(\widehat{X}_{j}\) whose total cost (sum of distances or squared distances between matched points) is small (Lemma 1). The hope is that we can go from the optimal set of outliers in \(\mathcal{O}\) to the ones in the algorithm and argue that the increase in cost is small. Since we are dealing with constrained clustering, we need to ensure that this process does not change the size of each of the clusters. To achieve this, we need to further modify the matching between the two sets of outliers (Lemma 2). Finally, with this modified matching, we are able to argue that the cost of the solution produced by the algorithm is close to that of the optimal solution. The extension to the labelled version follows along similar lines.
In the remaining paper, we prove our two main results, Theorem 1 and Theorem 2. The main discussion will be for Theorem 1 since Theorem 2 is an extension of Theorem 1 that uses the same proof ideas. In the following sections, we give the details of our algorithm (Section 2) and its analysis (Section 3). In Section 3.1, we discuss the extension to the labelled version.
## 2 Algorithm
In this section, we describe the algorithm for the outlier constrained clustering problem. Consider an instance \(\mathcal{I}=(X,F,k,m,\mathsf{check},\mathsf{cost})\) of this problem. Recall that the parameter \(z=1\) or \(2\) depends on whether the \(\mathsf{cost}\) function is like the \(k\)-\(\mathsf{median}\) or the \(k\)-\(\mathsf{means}\) objective respectively. In addition, we assume the existence of the following algorithms:
* A constant factor algorithm for the \(k\)-\(\mathsf{median}\) or the \(k\)-\(\mathsf{means}\) problem (depending on \(z=1\) or \(z=2\) respectively): an instance here is specified by a tuple \((X^{\prime},F^{\prime},k^{\prime})\) only, where \(X^{\prime}\) is the set of input points, \(F^{\prime}\) is the set of potential locations for a center, and \(k^{\prime}\) denotes the number of clusters. We shall use \(\mathcal{C}\) to denote this algorithm.
* An algorithm \(\mathcal{A}\) for the outlier-free version of this problem. An instance here is given by a tuple \((X^{\prime},F^{\prime},k,\mathsf{check},\mathsf{cost})\) where the \(\mathsf{check}\) and the \(\mathsf{cost}\) functions are same as those in \(\mathcal{I}\).
* An algorithm \(\mathcal{M}\) for the \(b\)-\(\mathsf{matching}\) problem: an instance of the \(b\)-\(\mathsf{matching}\) problem is specified by a weighted bipartite graph \(G=(L,R=\{v_{1},\ldots,v_{r}\},E)\), with each edge \(e\) having weight \(w_{e}\); and a tuple \((t_{1},\ldots,t_{r})\), where \(t_{i},i\in[r]\), are non-negative integers. A solution needs to find a subset \(E^{\prime}\) of \(E\) such that each vertex of \(L\) is incident with at most one edge of \(E^{\prime}\), and each vertex \(v_{j}\in R\) is incident with _exactly_ \(t_{j}\) edges of \(E^{\prime}\). The goal is to find such a set \(E^{\prime}\) of minimum total weight (a minimal sketch of one way to realize \(\mathcal{M}\) is given after this list).
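The paper treats \(\mathcal{M}\) as a black box; purely as an illustration (not the paper's implementation), when \(|L|\geq\sum_{j}t_{j}\) one way to realize it is to replicate each right-side vertex \(v_{j}\) into \(t_{j}\) copies and solve an ordinary rectangular assignment problem, e.g. with `scipy.optimize.linear_sum_assignment`. All names below are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def b_matching(weights, t):
    """Minimum-cost b-matching on a complete bipartite graph.

    weights: (|L|, |R|) array; weights[i, j] is the cost of matching left
             vertex i to right vertex j (here: D^z(point_i, center_j)).
    t:       demands; right vertex j must be matched exactly t[j] times.
    Returns a list of (left_index, right_index) pairs.
    """
    # Replicate column j of the weight matrix t[j] times, so each copy
    # absorbs exactly one left vertex; then solve the rectangular assignment.
    cols = [j for j, tj in enumerate(t) for _ in range(tj)]
    cost = weights[:, cols]                      # shape (|L|, sum(t))
    rows, copy_cols = linear_sum_assignment(cost)
    return [(int(r), cols[c]) for r, c in zip(rows, copy_cols)]

# Toy usage: 6 points, 2 centers, demands (1, 2) -> 3 points get matched.
rng = np.random.default_rng(0)
print(b_matching(rng.random((6, 2)), t=[1, 2]))
```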
We now define \(D^{z}\)-sampling:
Definition 3: Given sets \(C\) and \(X\) of points, \(D^{z}\)-sampling from \(X\) w.r.t. \(C\) samples a point \(x\in X\), where the probability of sampling \(x\) is proportional to \(D^{z}(x,C)\).
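A minimal sketch of \(D^{z}\)-sampling (ours), written for Euclidean points for concreteness; `points`, `centers`, and `rng` are placeholder names:

```python
import numpy as np

def dz_sample(points, centers, num_samples, z, rng):
    """Sample indices from `points` with probability proportional to D^z(x, C).

    points:  (n, d) array of input points.
    centers: (k, d) array; D(x, C) is the distance to the nearest center.
    z:       1 for k-median-style costs, 2 for k-means-style costs.
    """
    # D(x, C)^z for every point: distance to the closest center, raised to z.
    dists = np.min(np.linalg.norm(points[:, None, :] - centers[None, :, :],
                                  axis=2), axis=1) ** z
    probs = dists / dists.sum()      # Equation (2): D^z(x, C) / cost_{I'}(C)
    # Sampling *with replacement*, as in line 1.3 of Algorithm 1.
    return rng.choice(len(points), size=num_samples, replace=True, p=probs)
```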
The algorithm is described in Algorithm 1. It first runs the algorithm \(\mathcal{C}\) to obtain a set of \((k+m)\) centers \(C\) in line 1.2. In line 1.3, we sample a subset \(S\) where each point in \(S\) is sampled independently using \(D^{z}\)-sampling w.r.t. \(C\). Given a subset \(Y\), we say that a tuple \(\boldsymbol{\tau}=(t_{1},\ldots,t_{k+m})\) is _valid_ if \(t_{j}\geq 0\) for all \(j\in[k+m]\), and \(\sum_{j}t_{j}+|Y|=m\). For each subset \(Y\) of size \(\leq m\) of \(S\) and for each valid tuple \(\boldsymbol{\tau}\), the algorithm constructs a solution \((X_{0}^{(Y,\boldsymbol{\tau})},X_{1}^{(Y,\boldsymbol{\tau})},\ldots,X_{k}^{(Y,\boldsymbol{\tau})})\), where \(X_{0}^{(Y,\boldsymbol{\tau})}\) denotes the set of outlier points. This is done by first computing the set \(X_{0}^{(Y,\boldsymbol{\tau})}\), and then using the algorithm \(\mathcal{A}\) on the remaining points (line 1.8). To find the set \(X_{0}^{(Y,\boldsymbol{\tau})}\), we construct an instance \(\mathcal{I}^{(Y,\boldsymbol{\tau})}\) of \(b\)-matching first (line 1.5). This instance is defined as follows: the bipartite graph has the set of points \(X\) on one side (the set \(L\)) and the set of \((k+m)\) centers \(C\) on the other (the set \(R\)). The weight of an edge between a center \(v\in C\) and a point \(w\in X\) is equal to \(D^{z}(v,w)\). For each center \(c_{j}\in C\), we require that it is matched to exactly \(t_{j}\) points of \(X\). We run the algorithm \(\mathcal{M}\) on this instance of \(b\)-matching (line 1.7). We define \(X_{0}^{(Y,\boldsymbol{\tau})}\) as the set of points of \(X\) matched by this algorithm. Finally, we output the solution of minimum cost (line 1.10).
```
1.1 Input: \(\mathcal{I}:=(X,F,k,m,\mathsf{check},\mathsf{cost})\)
1.2 Execute \(\mathcal{C}\) on the instance \(\mathcal{I}^{\prime}:=(X,F,k+m)\) to obtain a set \(C\) of \(k+m\) centers.
1.3 Sample a set \(S\) of \(\frac{4\beta m\log m}{\varepsilon}\) points with replacement, each using \(D^{z}\)-sampling from \(X\) w.r.t. \(C\).
1.4 foreach subset \(Y\subseteq S\), \(|Y|\leq m\) do
1.5   foreach valid tuple \(\boldsymbol{\tau}=(t_{1},\ldots,t_{k+m})\) do
1.6     Construct the instance \(\mathcal{I}^{(Y,\boldsymbol{\tau})}\).
1.7     Run \(\mathcal{M}\) on \(\mathcal{I}^{(Y,\boldsymbol{\tau})}\) and let \(X_{0}^{(Y,\boldsymbol{\tau})}\) be the set of matched points in \(X\).
1.8     Run the algorithm \(\mathcal{A}\) on the instance \((X\setminus(X_{0}^{(Y,\boldsymbol{\tau})}\cup Y),F,k,\mathsf{check},\mathsf{cost})\).
1.9     Let \((X_{1}^{(Y,\boldsymbol{\tau})},\ldots,X_{k}^{(Y,\boldsymbol{\tau})})\) be the clustering produced by \(\mathcal{A}\).
1.10 Output the solution \((X_{0}^{(Y,\boldsymbol{\tau})}\cup Y,\,X_{1}^{(Y,\boldsymbol{\tau})},\ldots,X_{k}^{(Y,\boldsymbol{\tau})})\) of minimum cost over all iterations.
```

## 3 Analysis

As observed earlier, the optimal cost of the unconstrained, outlier-free clustering of \(X\) with \(k+m\) centers is a lower bound on \(\mathsf{opt}(\mathcal{I})\). Since \(\mathcal{C}\) is a \(\beta\)-approximation algorithm for this unconstrained problem, the center set \(C\) computed in line 1.2 satisfies the following claim. Here, \(\mathsf{cost}_{\mathcal{I}^{\prime}}(C):=\sum_{x\in X}D^{z}(x,C)\) denotes the cost of \(C\) on the instance \(\mathcal{I}^{\prime}=(X,F,k+m)\).

**Claim 1**: \(\mathsf{cost}_{\mathcal{I}^{\prime}}(C)\leq\beta\cdot\mathsf{opt}(\mathcal{I})\).
We now consider an optimal solution for the instance \(\mathcal{I}\): let \(X_{0}^{\mathsf{opt}},X_{1}^{\mathsf{opt}},\ldots,X_{k}^{\mathsf{opt}}\) be the partition of the input points \(X\) in this solution, with \(X_{0}^{\mathsf{opt}}\) being the set of \(m\) outliers. Depending on the distance from \(C\), we divide the set \(X_{0}^{\mathsf{opt}}\) into two subsets, \(X_{F}^{\mathsf{opt}}\) ("far" points) and \(X_{N}^{\mathsf{opt}}\) ("near" points), as follows:
\[X_{F}^{\mathsf{opt}}:=\left\{x\in X_{0}^{\mathsf{opt}}\,\middle|\,D^{z}(x,C)\geq\frac{\varepsilon\,\mathsf{cost}_{\mathcal{I}^{\prime}}(C)}{2\beta m}\right\},\quad X_{N}^{\mathsf{opt}}:=X_{0}^{\mathsf{opt}}\setminus X_{F}^{\mathsf{opt}}.\]
Recall that we sample a set \(S\) of \(\frac{4\beta m\log m}{\varepsilon}\) clients using \(D^{z}\)-sampling with respect to the center set \(C\) (line 1.3 of Algorithm 1). Note that the probability of sampling a point \(x\) is given by
\[\frac{D^{z}(x,C)}{\sum_{x^{\prime}\in X}D^{z}(x^{\prime},C)}=\frac{D^{z}(x,C)}{\mathsf{cost}_{\mathcal{I}^{\prime}}(C)}. \tag{2}\]
We first show that \(S\) contains all the points in \(X_{F}^{\mathsf{opt}}\) with high probability.
**Claim 2**: \(\operatorname{\mathbf{Pr}}[X_{F}^{\mathsf{opt}}\subseteq S]\geq 1-1/m\)_._
Proof: Equation (2) shows that the probability of sampling a point \(x\in X_{F}^{\mathsf{opt}}\) is \(\frac{D^{z}(x,C)}{\mathsf{cost}_{\mathcal{I}^{\prime}}(C)}\geq\frac{\varepsilon}{2\beta m}\). So the probability that \(x\) is not present in \(S\) is at most \(\left(1-\frac{\varepsilon}{2\beta m}\right)^{\frac{4\beta m\log m}{\varepsilon}}\leq e^{-2\log m}=\frac{1}{m^{2}}\), using \(1-t\leq e^{-t}\). The desired result now follows from the union bound, since \(|X_{F}^{\mathsf{opt}}|\leq m\).
For the rest of the analysis, we assume that the event in Claim 2 holds. We now note that the total cost of assigning \(X_{N}^{\mathsf{opt}}\) to \(C\) is at most \(\frac{\varepsilon}{2}\cdot\mathsf{opt}(\mathcal{I})\).
**Claim 3**: \(\sum_{x\in X_{N}^{\mathsf{opt}}}D^{z}(x,C)\leq\frac{\varepsilon}{2}\cdot \mathsf{opt}(\mathcal{I})\)_._
Proof: The claim follows from the following sequence of inequalities:
\[\sum_{x\in X_{N}^{\mathsf{opt}}}D^{z}(x,C)<\sum_{x\in X_{N}^{\mathsf{opt}}} \frac{\varepsilon\,\mathsf{cost}_{\mathcal{I}^{\prime}}(C)}{2\beta m}\leq \sum_{x\in X_{N}^{\mathsf{opt}}}\frac{\varepsilon\cdot\mathsf{opt}(\mathcal{I })}{2m}\leq\frac{\varepsilon}{2}\cdot\mathsf{opt}(\mathcal{I}),\]
where the first inequality follows from the definition of \(X_{N}^{\mathsf{opt}}\) and the second inequality follows from Claim 1.
For every point in \(X_{N}^{\mathsf{opt}}\), we identify the closest center in \(C=\{c_{1},\ldots,c_{m+k}\}\) (breaking ties arbitrarily). For each \(j\in[k+m]\), let \(X_{N,j}^{\mathsf{opt}}\) be the set of points in \(X_{N}^{\mathsf{opt}}\) whose closest center is \(c_{j}\). Let \(\hat{t}_{j}\) denote \(|X_{N,j}^{\mathsf{opt}}|\). Consider the iteration of lines 1.6-1.9 where \(Y=X_{F}^{\mathsf{opt}}\) and \(\boldsymbol{\tau}=(\hat{t}_{1},\ldots,\hat{t}_{k+m})\). Observe that \(\boldsymbol{\tau}\) is valid with respect to \(Y\) because \(\sum_{j\in[m+k]}\hat{t}_{j}+|Y|=m\). Let \(\widehat{X}_{1},\ldots,\widehat{X}_{m+k}\) be the sets of points assigned to \(c_{1},\ldots,c_{m+k}\), respectively, by the algorithm \(\mathcal{M}\). Intuitively, we would like to construct a solution where the set of outliers is given by \(\widehat{X}:=X_{F}^{\mathsf{opt}}\cup\widehat{X}_{1}\cup\cdots\cup\widehat{X}_{m+k}\). We now show that the set \(\widehat{X}\) is "close" to \(X_{0}^{\mathsf{opt}}\), the set of outliers in the optimal solution. In order to do this, we set up a bijection \(\mu:X_{0}^{\mathsf{opt}}\to\widehat{X}\), where \(\mu\) restricted to \(X_{F}^{\mathsf{opt}}\) is the identity, and \(\mu\) restricted to any of the sets \(X_{N,j}^{\mathsf{opt}}\) is a bijection from \(X_{N,j}^{\mathsf{opt}}\) to \(\widehat{X}_{j}\). Such a function \(\mu\) exists because for each \(j\in[m+k]\), \(|X_{N,j}^{\mathsf{opt}}|=|\widehat{X}_{j}|=\hat{t}_{j}\). We now prove this closeness property.
**Lemma 1**: \[\sum_{x\in X_{0}^{\mathsf{opt}}}D^{z}(x,\mu(x))\leq\varepsilon\cdot z\cdot \mathsf{opt}(\mathcal{I}).\]
Proof: We first note a useful property of the solution given by the algorithm \(\mathcal{M}\). One of the possible solutions for the instance \(\mathcal{I}^{(Y,\boldsymbol{\tau})}\) could have been assigning \(X_{N,j}^{\mathsf{opt}}\) to the center \(c_{j}\). Since \(\mathcal{M}\) is an optimal algorithm for \(b\)-matching, we get
\[\sum_{j\in[k+m]}\sum_{x\in\widehat{X}_{j}}D^{z}(x,c_{j})\leq\sum_{j\in[k+m]} \sum_{x\in X_{N,j}^{\mathsf{opt}}}D^{z}(x,c_{j})=\sum_{x\in X_{N}^{\mathsf{opt }}}D^{z}(x,C)\leq\frac{\varepsilon}{2}\cdot\mathsf{opt}(\mathcal{I}), \tag{3}\]
where the last inequality follows from Claim 3. Now,
\[\sum_{x\in X_{0}^{\mathsf{opt}}}D^{z}(x,\mu(x)) =\sum_{x\in X_{N}^{\mathsf{opt}}}D^{z}(x,\mu(x))=\sum_{j\in[k+m]} \sum_{x\in X_{N,j}^{\mathsf{opt}}}D^{z}(x,\mu(x))\] \[\stackrel{{(1)}}{{\leq}}z\cdot\sum_{j\in[k+m]}\sum_{ x\in X_{N,j}^{\mathsf{opt}}}\left(D^{z}(x,c_{j})+D^{z}(c_{j},\mu(x))\right), \tag{4}\]
where the first equality follows from the fact that \(\mu\) is identity on \(X_{F}^{\mathsf{opt}}\). Since \(\mu\) is a bijection from \(X_{N,j}^{\mathsf{opt}}\) to \(\widehat{X}_{j}\), the above can also be written as
\[z\cdot\sum_{j\in[k+m]}\sum_{x\in X_{N,j}^{\mathsf{opt}}}D^{z}(x,c_{j})+z\cdot \sum_{j\in[k+m]}\sum_{x\in\widehat{X}_{j}}D^{z}(x,c_{j})\leq z\cdot\varepsilon \,\mathsf{opt}(\mathcal{I}),\]
where the last inequality follows from Claim 3 and (3). This proves the desired result.
The mapping \(\mu\) described above may have the following undesirable property: there could be a point \(x\in X_{0}^{\mathsf{opt}}\cap\widehat{X}\) such that \(\mu(x)\neq x\). This could happen if \(x\in X_{N,j}^{\mathsf{opt}}\) and \(x\in\widehat{X}_{i}\) where \(i\neq j\). We now show that \(\mu\) can be modified to another bijection \(\widehat{\mu}:X_{0}^{\mathsf{opt}}\to\widehat{X}\) which is identity on \(X_{0}^{\mathsf{opt}}\cap\widehat{X}\). Note that the mapping \(\widehat{\mu}\) is only needed for the analysis of the algorithm.
Lemma 2: _There is a bijection \(\widehat{\mu}:X_{0}^{\mathsf{opt}}\to\widehat{X}\) such that \(\widehat{\mu}(x)=x\) for all \(x\in X_{0}^{\mathsf{opt}}\cap\widehat{X}\) and_
\[\sum_{x\in X_{0}^{\mathsf{opt}}}D^{z}(x,\widehat{\mu}(x))\leq m^{z-1}\, \varepsilon\cdot z\cdot\mathsf{opt}(\mathcal{I}).\]
Proof: We construct a directed graph \(H=(V_{1},E_{1})\) where \(V_{1}=X_{0}^{\mathsf{opt}}\cup\widehat{X}\). For every \(x\in X_{0}^{\mathsf{opt}}\), we add the directed arc \((x,\mu(x))\) to \(E_{1}\). Observe that a self-loop in \(H\) implies that \(\mu(x)=x\). Every vertex in \(X_{0}^{\mathsf{opt}}\setminus\widehat{X}\) has in-degree \(0\) and out-degree \(1\), whereas a vertex in \(\widehat{X}\setminus X_{0}^{\mathsf{opt}}\) has in-degree \(1\) and out-degree \(0\). Vertices in \(\widehat{X}\cap X_{0}^{\mathsf{opt}}\) have exactly one incoming and one outgoing arc (a self-loop counts towards both the in-degree and the out-degree of the corresponding vertex).
The desired bijection \(\widehat{\mu}\) is initialized to \(\mu\). Let \(\mathsf{cost}(\widehat{\mu})\) denote \(\sum_{x\in X_{0}^{\mathsf{opt}}}D^{z}(x,\widehat{\mu}(x))\); define \(\mathsf{cost}(\mu)\) similarly. It is easy to check that \(H\) is a vertex-disjoint union of directed cycles and paths. In the case of a directed cycle \(C\) on more than one vertex, it must be the case that each of the vertices in \(C\) belongs to \(\widehat{X}\cap X_{0}^{\mathsf{opt}}\). In this case, we update \(\widehat{\mu}\) by defining \(\widehat{\mu}(x)=x\) for each \(x\in C\). Clearly, this can only decrease \(\mathsf{cost}(\widehat{\mu})\). Let \(P_{1},\ldots,P_{l}\) be the set of directed paths in \(H\). For each path \(P_{j}\), we perform the following update: let \(P_{j}\) be a path from \(a_{j}\) to \(b_{j}\). We know that \(a_{j}\in X_{0}^{\mathsf{opt}}\setminus\widehat{X}\), \(b_{j}\in\widehat{X}\setminus X_{0}^{\mathsf{opt}}\), and each internal vertex of \(P_{j}\) lies in \(\widehat{X}\cap X_{0}^{\mathsf{opt}}\). We update \(\widehat{\mu}\) as follows: \(\widehat{\mu}(a_{j})=b_{j}\) and \(\widehat{\mu}(v)=v\) for each internal vertex \(v\) of \(P_{j}\). The overall increase in \(\mathsf{cost}(\widehat{\mu})\) is equal to
\[\sum_{j\in[l]}\left(D^{z}(a_{j},b_{j})-\sum_{i=1}^{n_{j}}D^{z}(v_{j}^{i},v_{j}^{ i-1})\right), \tag{5}\]
where \(a_{j}=v_{j}^{0},v_{j}^{1},\ldots,v_{j}^{n_{j}}=b_{j}\) denotes the sequence of vertices in \(P_{j}\). If \(z=1\), the triangle inequality shows that the above quantity is at most \(0\). In the case \(z=2\), the triangle inequality together with the Cauchy-Schwarz inequality gives
\[D^{2}(a_{j},b_{j})\leq n_{j}\left(\sum_{i=1}^{n_{j}}D^{2}(v_{j}^{i},v_{j}^{i-1} )\right),\]
and so the quantity in (5) is at most \((n_{j}-1)\sum_{i=1}^{n_{j}}D^{2}(v_{j}^{i},v_{j}^{i-1})\).
It follows that \(\mathsf{cost}(\widehat{\mu})\leq m^{z-1}\mathsf{cost}(\mu)\). The desired result now follows from Lemma 1.
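The proof above is constructive, and the rerouting step can be sketched in a few lines (ours, not from the paper); `mu` is a dict representing the bijection \(\mu\) over hashable points:

```python
def reroute(mu):
    """Turn a bijection mu: X0_opt -> X_hat (a dict) into mu_hat with
    mu_hat[x] = x for every x lying in both sets, as in Lemma 2."""
    targets = set(mu.values())                    # the set X_hat
    mu_hat = {x: x for x in mu if x in targets}   # cycles/interior points fixed
    for a in (x for x in mu if x not in targets): # path starts: X0_opt \ X_hat
        b = mu[a]
        while b in mu:                            # interior vertices lie in both sets
            b = mu[b]
        mu_hat[a] = b                             # shortcut the path a -> ... -> b
    return mu_hat

# Toy usage: path 1 -> 2 -> 3 -> 9, where 2 and 3 lie in both sets.
print(reroute({1: 2, 2: 3, 3: 9}))                # {2: 2, 3: 3, 1: 9}
```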
We run the algorithm \(\mathcal{A}\) on the outlier-free constrained clustering instance \(\mathcal{I}^{\prime\prime}=(X\setminus\widehat{X},F,k,\mathsf{check},\mathsf{cost})\) (line 1.8 of Algorithm 1). Let \(\mathsf{opt}(\mathcal{I}^{\prime\prime})\) be the optimal cost of a solution for this instance. The following key lemma shows that \(\mathsf{opt}(\mathcal{I}^{\prime\prime})\) is close to \(\mathsf{opt}(\mathcal{I})\).
**Lemma 3**.: \(\mathsf{opt}(\mathcal{I}^{\prime\prime})\leq\left(1+\varepsilon^{\frac{1}{z}}(2m+1)^{z-1}\right)\mathsf{opt}(\mathcal{I})\)_._
Proof: We shall use the solution \((X_{0}^{\mathsf{opt}},\ldots,X_{k}^{\mathsf{opt}})\) to construct a feasible solution for \(\mathcal{I}^{\prime\prime}\). For each \(j\in[k]\), let \(Z_{j}\) denote \(X_{j}^{\mathsf{opt}}\cap\widehat{X}\). Let \(\widehat{\mu}^{-1}(Z_{j})\) denote the pre-image of \(Z_{j}\) under \(\widehat{\mu}\). Since \(Z_{j}\subseteq\widehat{X}\setminus X_{0}^{\mathsf{opt}}\), we have \(\widehat{\mu}^{-1}(Z_{j})\subseteq X_{0}^{\mathsf{opt}}\setminus\widehat{X}\). For each \(j\in[k]\), define
\[X_{j}^{\prime}:=(X_{j}^{\textsf{opt}}\setminus Z_{j})\cup\widehat{\mu}^{-1}(Z _{j}).\]
**Claim 4**: \[\bigcup_{j=1}^{k}X_{j}^{\prime}=X\setminus\widehat{X}.\]
Proof: For any \(j\in[k]\), we have already argued that \(\widehat{\mu}^{-1}(Z_{j})\subseteq X_{0}^{\mathsf{opt}}\setminus\widehat{X}\subseteq X\setminus\widehat{X}\). Clearly, \(X_{j}^{\mathsf{opt}}\setminus Z_{j}\subseteq X\setminus\widehat{X}\), and so \(X_{j}^{\prime}\subseteq X\setminus\widehat{X}\). Therefore, \(\cup_{j\in[k]}X_{j}^{\prime}\subseteq X\setminus\widehat{X}\). Since the sets \(X_{j}^{\prime}\) are pairwise disjoint and \(|X_{j}^{\prime}|=|X_{j}^{\mathsf{opt}}|\),
\[\sum_{j\in[k]}|X_{j}^{\prime}|=n-m=|X\setminus\widehat{X}|.\]
This proves the claim.
The above claim implies that \((X_{1}^{\prime},\ldots,X_{k}^{\prime})\) is a partition of \(X\setminus\widehat{X}\). Since \(|X_{j}^{\prime}|=|X_{j}^{\textsf{opt}}|\) for all \(j\in[k]\) and the function check only depends on the cardinality of the sets in the partition, \((X_{1}^{\prime},\ldots,X_{k}^{\prime})\) is a feasible partition (under check) of \(X\setminus\widehat{X}\). In the optimal solution for \(\mathcal{I}\), let \(f_{1}^{\textsf{opt}},\ldots,f_{k}^{\textsf{opt}}\) be the \(k\) centers corresponding to the clusters \(X_{1}^{\textsf{opt}},\ldots,X_{k}^{\textsf{opt}}\) respectively. Now,
\[\textsf{opt}(\mathcal{I}^{\prime\prime})\leq\textsf{cost}(X_{1}^{\prime}, \ldots,X_{k}^{\prime})\leq\sum_{j\in[k]}\sum_{x\in X_{j}^{\prime}}D^{z}(x,f_{j} ^{\textsf{opt}}) \tag{6}\]
For each \(j\in[k]\), we estimate the quantity \(\sum_{x\in X_{j}^{\prime}}D^{z}(x,f_{j}^{\textsf{opt}})\). Using the definition of \(X_{j}^{\prime}\) and triangle inequality, this quantity can be expressed as
\[\sum_{x\in X_{j}^{\textsf{opt}}\setminus Z_{j}}D^{z}(x,f_{j}^{\textsf{opt}})+ \sum_{x\in\widehat{\mu}^{-1}(Z_{j})}D^{z}(x,f_{j}^{\textsf{opt}})\leq\sum_{x \in X_{j}^{\textsf{opt}}\setminus Z_{j}}D^{z}(x,f_{j}^{\textsf{opt}})+\sum_{ x\in\widehat{\mu}^{-1}(Z_{j})}\left(D(x,\widehat{\mu}(x))+D(\widehat{\mu}(x),f_{j}^{ \textsf{opt}})\right)^{z} \tag{7}\]
When \(z=1\), the above is at most (replacing \(x\) by \(\widehat{\mu}(x)\) in the second expression on RHS)
\[\sum_{x\in X_{j}^{\textsf{opt}}}D(x,f_{j}^{\textsf{opt}})+\sum_{x\in Z_{j}}D(x,\widehat{\mu}(x)).\]
Using this bound in (6), we see that
\[\textsf{opt}(\mathcal{I}^{\prime\prime})\leq\textsf{opt}(\mathcal{I})+\sum_{ x\in X_{0}^{\textsf{opt}}}D(x,\widehat{\mu}(x))\leq(1+\varepsilon)\textsf{opt}( \mathcal{I}),\]
where the last inequality follows from Lemma 2. This proves the desired result for \(z=1\). When \(z=2\), we use the fact that for any two reals \(a,b\),
\[(a+b)^{2}\leq(1+\sqrt{\varepsilon})a^{2}+b^{2}\left(1+\frac{1}{\sqrt{ \varepsilon}}\right).\]
(This follows from \(2ab\leq\sqrt{\varepsilon}\,a^{2}+b^{2}/\sqrt{\varepsilon}\), which holds by the AM-GM inequality.) Using this fact, the expression on the RHS of (7) can be upper bounded by
\[(1+\sqrt{\varepsilon})\sum_{x\in X_{j}^{\textsf{opt}}}D^{2}(x,f_{j}^{\textsf{ opt}})+\left(1+\frac{1}{\sqrt{\varepsilon}}\right)\sum_{x\in Z_{j}}D^{2}(x, \widehat{\mu}(x)).\]
Substituting this expression in (6) and using Lemma 2, we see that
\[\mathsf{opt}(\mathcal{I}^{\prime\prime})\leq(1+\sqrt{\varepsilon})\mathsf{opt}( \mathcal{I})+2m\sqrt{\varepsilon}\mathsf{opt}(\mathcal{I}).\]
This proves the desired result for \(z=2\).
The approximation-preserving properties of Theorem 1.1 follow from the above analysis. For the \(k\)-means problem, since the approximation term is \((1+\sqrt{\varepsilon}(2m+1))\), we can replace \(\varepsilon\) with \(\varepsilon^{2}/(2m+1)^{2}\) in the algorithm and analysis to obtain a \((1+\varepsilon)\) factor. Let us quickly check the running time of the algorithm. The algorithm first runs \(\mathcal{C}\), which takes \(T_{\mathcal{C}}(n)\) time. This is followed by \(D^{z}\)-sampling \(O(\frac{m^{z+1}\log m}{\varepsilon^{z}})\) points, which takes \(O(n\cdot(k+\frac{m^{z+1}\log m}{\varepsilon^{z}}))\) time. The number of iterations of the for-loops is determined by the number of subsets of \(S\) of size at most \(m\), which is \(\sum_{i=0}^{m}\binom{|S|}{i}=\big{(}\frac{m}{\varepsilon}\big{)}^{O(m)}\), and the number of possibilities for \(\boldsymbol{\tau}\), which is at most \(\binom{2m+k-1}{m}=(m+k)^{O(m)}\). This gives the number of iterations \(q=f(k,m,\varepsilon)=\big{(}\frac{k+m}{\varepsilon}\big{)}^{O(m)}\). In every iteration, in addition to running \(\mathcal{A}\), we solve a weighted \(b\)-matching problem on a bipartite graph \((L\cup R,E)\) where \(R\) has \((k+m)\) vertices (corresponding to the \(k+m\) centers in the center set \(C\)) and \(L\) has at most \((k+m)\cdot m\) vertices (considering the \(m\) closest clients for every center is sufficient). So, every iteration costs \(T_{\mathcal{A}}(n)+O((k+m)^{3}m^{2})\) time. This gives the running time expression in Theorem 1.1.
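To make the iteration count concrete, here is a small sketch (ours, not from the paper) that enumerates exactly the \((Y,\boldsymbol{\tau})\) pairs driving the two nested loops of Algorithm 1:

```python
from itertools import combinations

def compositions(total, parts):
    """Yield all tuples of `parts` non-negative integers summing to `total`
    (the valid tuples tau for a fixed outlier guess Y)."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def candidate_pairs(S, k, m):
    """Yield every (Y, tau) pair tried by the algorithm's nested loops."""
    for r in range(min(m, len(S)) + 1):
        for Y in combinations(S, r):
            yield from ((Y, tau) for tau in compositions(m - r, k + m))
```

For a fixed \(Y\), the number of valid tuples is the number of compositions of \(m-|Y|\) into \(k+m\) non-negative parts, i.e., \(\binom{2m+k-|Y|-1}{k+m-1}\leq\binom{2m+k-1}{m}\), matching the bound used above.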
### Extension to labelled version
In this section, we extend Algorithm 1 to the setting where points in \(X\) have labels from a finite set \(L\) and the \(\mathsf{check}()\) function can also depend on the number of points with a certain label in a cluster. The overall structure of Algorithm 1 remains unchanged; we just indicate the changes needed in this algorithm.
Given a non-negative integer \(p\), a label partition of \(p\) is defined as a tuple \(\psi=(q_{1},\ldots,q_{|L|})\) such that \(\sum_{i}q_{i}=p\). The intuition is that given a set \(S\) of size \(p\), \(q_{1}\) points get the first label in \(L\), \(q_{2}\) points in \(S\) get the second label in \(L\), and so on. Now, given a subset \(Y\), define a valid tuple \(\boldsymbol{\tau}\) w.r.t. \(Y\) as a tuple \(((t_{1},\psi_{1}),\ldots,(t_{k+m},\psi_{k+m}))\), where (i) \(\sum_{j}t_{j}+|Y|=m\), and (ii) \(\psi_{j}\) is a label partition of \(t_{j}\). As in line 1.5 in Algorithm 1, we cycle over all such valid tuples. The definition of a solution to the \(b\)-matching instance \(\mathcal{I}^{(Y,\tau)}\) changes as follows. Let \(\psi_{j}=(n_{j}^{1},\ldots,n_{j}^{\ell})\), where \(\ell=|L|\). Then a solution to \(\mathcal{I}^{(Y,\tau)}\) needs to satisfy the condition that for each point \(c_{j}\in C\) and each label \(l\in L\), exactly \(n_{j}^{l}\) points in \(X\) are matched to \(c_{j}\). Note that this also implies that exactly \(t_{j}\) points are matched to \(c_{j}\). This matching problem can be easily reduced to weighted bipartite matching by making \(t_{j}\) copies of each point \(c_{j}\), and for each label \(l\), adding edges between \(n_{j}^{l}\) distinct copies of \(c_{j}\) to vertices of label \(l\) only. The rest of the details of Algorithm 1 remain unchanged. Note that the running time of the algorithm changes because we now have to cycle over all partitions of each of the numbers \(t_{j}\).
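A sketch (ours) of the reduction just described: each center \(c_{j}\) is split into \(t_{j}\) copies, \(n_{j}^{l}\) of which may only be matched to points of label \(l\); forbidden pairs get a prohibitively large cost, and the instance is solved as a rectangular assignment problem, as in the earlier sketch of \(\mathcal{M}\).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def labelled_b_matching(weights, point_labels, label_counts):
    """weights: (n_points, n_centers) cost array; point_labels: length-n list;
    label_counts[j][l] = number of label-l points center j must absorb."""
    col_centers, col_labels = [], []
    for j, counts in enumerate(label_counts):
        for l, c in counts.items():        # n_j^l copies of center j, label l
            col_centers += [j] * c
            col_labels += [l] * c
    INF = 1e18                             # cost of a label-violating edge
    cost = np.array([[weights[i, col_centers[q]]
                      if point_labels[i] == col_labels[q] else INF
                      for q in range(len(col_centers))]
                     for i in range(weights.shape[0])])
    rows, cols = linear_sum_assignment(cost)
    assert all(cost[r, c] < INF for r, c in zip(rows, cols)), "infeasible"
    return [(int(r), col_centers[c]) for r, c in zip(rows, cols)]
```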
The analysis of the algorithm proceeds in an analogous manner as that of Algorithm 1. We just need to consider the iteration of the algorithm, where we correctly guess the size of each of the sets \(X_{N,j}^{\mathsf{opt}}\) and the number of points of each label in this set.
|
2304.07533 | ALiSNet: Accurate and Lightweight Human Segmentation Network for Fashion
E-Commerce | Accurately estimating human body shape from photos can enable innovative
applications in fashion, from mass customization, to size and fit
recommendations and virtual try-on. Body silhouettes calculated from user
pictures are effective representations of the body shape for downstream tasks.
Smartphones provide a convenient way for users to capture images of their body,
and on-device image processing allows predicting body segmentation while
protecting users privacy. Existing off-the-shelf methods for human segmentation
are closed source and cannot be specialized for our application of body shape
and measurement estimation. Therefore, we create a new segmentation model by
simplifying Semantic FPN with PointRend, an existing accurate model. We
finetune this model on a high-quality dataset of humans in a restricted set of
poses relevant for our application. We obtain our final model, ALiSNet, with a
size of 4MB and 97.6$\pm$1.0$\%$ mIoU, compared to Apple Person Segmentation,
which has an accuracy of 94.4$\pm$5.7$\%$ mIoU on our dataset. | Amrollah Seifoddini, Koen Vernooij, Timon Künzle, Alessandro Canopoli, Malte Alf, Anna Volokitin, Reza Shirvany | 2023-04-15T11:06:32Z | http://arxiv.org/abs/2304.07533v1 | # ALiSNet: Accurate and Lightweight Human Segmentation Network for Fashion E-Commerce
###### Abstract
Accurately estimating human body shape from photos can enable innovative applications in fashion, from mass customization, to size and fit recommendations and virtual try-on. Body silhouettes calculated from user pictures are effective representations of the body shape for downstream tasks. Smartphones provide a convenient way for users to capture images of their body, and on-device image processing allows predicting body segmentation while protecting users' privacy. Existing off-the-shelf methods for human segmentation are closed source and cannot be specialized for our application of body shape and measurement estimation. Therefore, we create a new segmentation model by simplifying Semantic FPN with PointRend, an existing accurate model. We finetune this model on a high-quality dataset of humans in a restricted set of poses relevant for our application. We obtain our final model, ALiSNet, with a size of 4MB and 97.6 \(\pm\) 1.0% mIoU, compared to Apple Person Segmentation, which has an accuracy of 94.4 \(\pm\) 5.7% mIoU on our dataset.
On-device, Human-Segmentation, Privacy-Preserving, Fashion, E-commerce
## 1 Introduction
Human segmentation has emerged as foundational to applications ranging from autonomous driving to social media, virtual and augmented reality, and online fashion. In the case of online fashion, giving users a way to easily capture their body shape is valuable, since it can be used to recommend appropriate clothing sizes or to enable virtual try-on. However, to determine the right size and fit of clothing, the body shape needs to be determined with very high accuracy in order to be of value. For example, in an image of 2k resolution in height, a segmentation error of two pixels on the boundary can change a measurement such as the chest circumference by 10mm. Users' body shape can be determined more accurately if they wear tight-fitting clothes, making it even more important than in other applications to preserve privacy. Hence, mobile human segmentation is a good fit for fashion applications, as images can be both captured and processed on-device.
In this paper we propose an approach to achieve an accurate and lightweight human segmentation method for these applications. Although off-the-shelf mobile human segmentation methods are available, such as Apple Person Segmentation (Apple, 2022) and Google MLKit's BlazePose (Bazarevsky et al., 2020), these methods are closed-source and cannot be adapted to our task to achieve the required accuracy. Instead, we design a model based on Semantic FPN with PointRend for our task.
Crucial to the success of our method is finetuning on a task-specific dataset of user-taken photos in front and side views, as shown in Figure 1. Orthogonal views such as these are commonly used in various anthropometry setups, e.g. (Smith et al., 2019). Such silhouettes can be used to model the 3D body shape of users, as proposed in (Dibra et al., 2016; Dibra et al., 2017; Smith et al., 2019), or to directly predict measurements (Yan et al., 2021) for fashion applications.
Figure 1: Ground truth body annotations. The boundary in particular is critical for body shape prediction.
While relying on the large body of publicly available data for the segmentation task, we augment it with a small yet specific dataset of 6147 high-resolution images with highly accurate annotations to overcome the limitations of publicly available data.
Our main contributions are thus two-fold: First, we demonstrate that a relatively small set of high-quality annotations can boost segmentation accuracy. Second, we simplify a large and high-quality baseline method, Semantic FPN (Kirillov et al., 2019) with PointRend refinement (Kirillov et al., 2020), in a few steps to achieve almost the same performance at roughly \(1/100\) of the model size. The main changes to the original model are: exchanging the backbone with a modified version of the mobile-optimized MnasNet (Tan et al., 2019), using quantization-aware training, and removing network components that we found not to be contributing to segmentation accuracy. Our final **A**ccurate and **L**ightweight mobile human **S**egmentation **N**etwork (ALiSNet) achieves 97.6% mIoU and is 4MB in size, whereas an off-the-shelf method such as BlazePose-Segmentation achieves 93.7% mIoU on our data with a 6MB model, and Apple Person Segmentation achieves 94.4% mIoU. It was not possible to fine-tune either of these models on our data as they are closed-source. Additionally, ALiSNet's accuracy is only marginally lower than the 97.8% mIoU achieved by the 350 MB baseline.
## 2 Related Work
The categories of methods most relevant to our work in the domain of on-device human segmentation are portrait editing, video-call background effects, and general-purpose real-time whole-body segmentation methods.
Many portrait editing methods predict alpha mattes, which are masks that allow blending foreground and background regions. In this application, having accurate segmentation of textures such as wisps of hair is very important. Google Pixel's alpha matting method (Orts-Escolano and Ehman, 2022) relies on data collected using a custom volumetric lighting setup. Apple Person Segmentation (Apple, 2022) in _accurate_ mode also belongs to this category of methods. However, such accuracy on the texture level is not necessary for our application. Besides, most existing alpha matting methods are trained only on faces.
Real-time portrait segmentation methods for video calls focus on segmenting the human upper body. ExtremeC3Net (Park et al., 2019) and SiNet (Li et al., 2020) are examples of models that achieve very good performance under a parameter count of 200K.
There are also several methods focused on real-time segmentation of the whole body. One example is Google MLKit BlazePose-Segmentation (Bazarevsky et al., 2020), which relies on correct prediction of the body bounding box. (Strohmayer et al., 2021) focuses on reducing latency for general-purpose human segmentation. (Liang et al., 2022) introduces Multi-domain TriSeNet Networks for real-time single-person segmentation in photo-editing applications. (Xi et al., 2019) uses a saliency map derived from accurate pose information to improve segmentation accuracy, especially in multi-person scenes.
(Han et al., 2020) categorizes the set of techniques for reducing model size into _model compression_ and _compact model design_ methods. Although we make use of quantization-aware training (Wu et al., 2015) in this paper, which is a compression technique, we mostly take advantage of compact model components. These include compact networks that can be used as feature extractors, such as MnasNet (Tan et al., 2019), FBNet (Wu et al., 2019) and MobileNetv3 (Howard et al., 2019) which have been found using Neural Architecture Search.
We recommend the related work section of (Knapp, 2021) for a more extensive review of works related to mobile person segmentation.
Finally, our work can be used in downstream applications for estimating body shape. This is an active research area, with several approaches of estimating body shape from silhouette, such as (Song et al., 2018; Song et al., 2016; Ji et al., 2019; Dibra et al., 2016).
Datasets:We review several datasets with permissive licenses that include person segmentation labels. These include MS COCO (Lin et al., 2014), shown in Figure 2, LVIS (Gupta et al., 2019) (which contains higher-quality annotations for COCO images), and Google Open Images (Kuznetsova et al., 2018). There are limitations with each of these datasets. COCO annotations are based on polygons and are therefore not accurate around object boundaries. LVIS annotations are very accurate and dense in each image, but they are not yet available for the entire COCO dataset, especially in the human category, where only around 1.8k images are annotated. Finally, the Google Open Images dataset is sparse in segmentation coverage in each image compared to COCO, as many object instances are not segmented yet.
## 3 Method
### Model
Our model, ALiSNet, is a version of Semantic FPN with PointRend (Kirillov et al., 2020), simplified for on-device use. We chose Semantic FPN with PointRend as a baseline because of its high accuracy in segmenting object boundaries. In theory, other baseline methods could also be used to show the effectiveness of our approach.
Semantic FPN with PointRend:Semantic FPN first extracts features using a backbone and further processes them with a Feature Pyramid Network (FPN) (Lin et al., 2017). Then, a coarse segmentation map is computed from the aggregated coarse features. PointRend then samples uncertain points on the coarse segmentation, concatenates fine-grained features from the FPN with the coarse predictions at each location, and uses this as input to a classifier to refine the prediction at that location.
Changes to Baseline for On-device Use:In this work, we use three approaches to address the problem of reducing the model size while preserving segmentation accuracy. First, we take advantage of a mobile feature extraction backbone to replace the ResNeXt101 (Xie et al., 2017) feature extractor in our baseline. Second, we quantize our model using quantization-aware training, allowing us to replace 32bit floating point parameters with equivalent int8 representations. We choose to use quantization aware training as opposed to post-training quantization as it is known to lead to more accurate results. Third, we replace the feature pyramid network used in the original model with a simpler aggregation step, skipping the FPN top-down path. The final ALiSNet architecture is shown in Figure 3.
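For illustration, the quantization-aware-training step can be sketched with PyTorch's eager-mode quantization API as below. This is a minimal sketch rather than our actual training code: the tiny stand-in network, input size, and single dummy forward pass are placeholders, and only the prepare/convert workflow reflects the procedure described above.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# Tiny stand-in network (a placeholder, not the real ALiSNet).
model = nn.Sequential(
    tq.QuantStub(),                      # float -> int8 boundary
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 1),                  # per-pixel logits
    tq.DeQuantStub(),                    # int8 -> float boundary
)
model.train()
model.qconfig = tq.get_default_qat_qconfig("qnnpack")  # mobile/ARM backend
tq.prepare_qat(model, inplace=True)      # insert fake-quantization modules

# ... fine-tune as usual; one dummy forward stands in for the training loop ...
_ = model(torch.randn(1, 3, 64, 64))

model.eval()
quantized = tq.convert(model)            # materialize int8 weights (~4x smaller)
```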
Training Loss:Our training loss includes the segmentation loss between the predicted and ground-truth labels, and the PointRend loss. For the segmentation loss, we experimented with cross-entropy and focal loss (Lin et al., 2017), and found cross-entropy to be more stable during training; we therefore used cross-entropy for all further training runs. The PointRend loss is taken from the reference implementation of PointRend in detectron2 and contains the sum of cross-entropies between predicted and ground-truth labels of all points refined during the refinement process for each sample in the mini-batch.
### Data
An important element in making our approach successful is to pre-train on a large-scale coarsely annotated dataset and fine-tune on a high-quality specific dataset.
Pretraining on COCO:Following other segmentation methods (Kirillov et al., 2020) we base our work on MS COCO. Out of all images in COCO, we only use those containing at least one person (around 60k images). The COCO default annotation format is designed for instance segmentation. Thus, we merge the segmentation masks of human instances in each image to create corresponding segmentation masks for semantic segmentation.
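The instance-to-semantic merging step can be sketched with the pycocotools API as below; the annotation path is a placeholder, and this is an illustration of the idea rather than our exact preprocessing code.

```python
import numpy as np
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")     # placeholder path
person_cat = coco.getCatIds(catNms=["person"])
img_ids = coco.getImgIds(catIds=person_cat)             # images with a person

img_info = coco.loadImgs(img_ids[0])[0]
ann_ids = coco.getAnnIds(imgIds=img_info["id"], catIds=person_cat, iscrowd=None)
anns = coco.loadAnns(ann_ids)

# Union of all person instance masks -> one semantic segmentation mask.
mask = np.zeros((img_info["height"], img_info["width"]), dtype=np.uint8)
for ann in anns:
    mask = np.maximum(mask, coco.annToMask(ann))
```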
Small scale in-house dataset:We observe that even though object instance coverage in COCO is very high, there are disadvantages to using only that data for our task: segmentation annotations are not pixel accurate, since the objects are annotated using polygons; the scale of objects is extremely diverse, ranging from objects of 10 pixels in height to objects covering almost the entire image; and the diversity of human poses and occlusions is extreme, with many images containing only a small body part or crowds of people. This is helpful for general segmentation tasks, but our experiments show that it limits the accuracy of on-device models in more controlled tasks such as body shape estimation, where the pose, viewing angle and scale of the body do not vary much.

Figure 2: An example image and annotation from the COCO dataset. Note the low accuracy of the polygon annotation.
To overcome these limitations of large-scale general-purpose datasets, we make use of a small high-quality dataset focused on enabling accurate human body segmentation for our task. To build this complementary dataset, we use an in-house mobile app with interactive video features to guide the users to stand in the correct front and side view poses at the right distance to be fully in the frame. The app was made available for both iOS and Android, powered by real-time pose estimation models native to the OS. Images are taken in portrait (vertical) framing format. The calculated pose keypoints are available for the captured images and can be used in downstream tasks too. To protect participants' privacy, the images were cropped around the bounding box containing the person. The bounding boxes are calculated from the predicted pose keypoints in the app and enlarged by a 10% margin on each side. This dataset includes 6147 images which are randomly distributed in train/validation/test splits with 60/20/20% ratios respectively. The number of front and side view images is almost balanced, and the ratio of male/female participants is 45/55%. The annotation was performed by an expert annotation team, and the quality of the segmentation masks was checked by two quality expert groups. An example of this data is shown in Figure 4.
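The privacy crop described above can be sketched as follows; the helper name and keypoint format are illustrative assumptions, and only the 10% enlargement rule comes from our procedure.

```python
import numpy as np

def crop_box_from_keypoints(keypoints, img_w, img_h, margin=0.10):
    """keypoints: (N, 2) array of (x, y) pose keypoints in pixels."""
    x0, y0 = keypoints.min(axis=0)
    x1, y1 = keypoints.max(axis=0)
    dx, dy = margin * (x1 - x0), margin * (y1 - y0)
    # Enlarge by the margin on each side and clamp to the image bounds.
    x0, y0 = max(0.0, x0 - dx), max(0.0, y0 - dy)
    x1, y1 = min(float(img_w), x1 + dx), min(float(img_h), y1 + dy)
    return int(x0), int(y0), int(x1), int(y1)
```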
## 4 Experimental Setup
### Implementation details
For this paper, we used the detectron2 framework Wu et al. (2019) built on top of PyTorch, which includes the reference implementation of PointRend.
On the mobile side, the model is executed by the PyTorch Mobile interpreter to facilitate the deployment of developed models. To that end, the model is first converted by the TorchScript compiler and the resulting model graph is loaded by the mobile interpreter. Because the iterative PointRend head contains control flow that depends on the input, we have to use scripting instead of tracing for computation-graph generation. Scripted models are not fully optimized for runtime; therefore the performance is sometimes lower than that of traced models. For the quantization of the model we use the QNNPACK Dukhan et al. (2020) backend which is optimized for ARM CPUs available in mobile devices.
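A minimal sketch of this deployment path is shown below, assuming a recent PyTorch and that `quantized_model` is the int8 model produced by quantization-aware training; the output filename is a placeholder.

```python
import torch

torch.backends.quantized.engine = "qnnpack"    # int8 kernels for ARM CPUs

# Scripting (not tracing) is required because PointRend's refinement loop
# contains input-dependent control flow.
scripted = torch.jit.script(quantized_model)   # `quantized_model`: placeholder
scripted._save_for_lite_interpreter("alisnet.ptl")  # for the mobile interpreter
```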
### Model Training
As described in subsection 3.2, we augment training our models on COCO with fine-tuning on our in-house dataset. We also experiment with CNN backbones that are pre-trained on the ImageNet (Deng et al., 2009) classification task. All experiments are run on Amazon AWS machines with 4 V100 GPUs totaling 128 GB of GPU memory. The mini-batch size is set to 8 for large models (e.g. ResNeXt101 backbone) and 16 for smaller backbones (i.e. MnasNet, MobileNet, FBNet). The base learning rate is set to 0.01 for batch size 8 and 0.02 for batch size 16, following the linear learning-rate scaling recommendation of Goyal et al. (2017). During training we augment the data using the default augmentation tools provided by detectron2. This includes random resizing, horizontal flip, color jitter, and brightness and saturation changes. The shorter side of the image is resized to a value drawn randomly from a predefined list (between 120 and 800) while keeping the longer side under 1024 and the scale factor between 0.5 and 4. This prevents too much down-sampling of our high-resolution images during training.

Figure 3: ALiSNet Architecture. The first stage of our method is the feature extractor backbone which produces features at 5 resolution levels. These features are convolved and added together to form the coarse features. The coarse features are projected to a coarse segmentation using a \(1\times 1\) convolution. Uncertain points on the coarse segmentation mask are selected and refined using PointRend. During a PointRend refinement step, coarse predictions and fine-grained features from these locations are concatenated and fed to an MLP to obtain refined segmentation predictions. This refinement is repeated two times. In the diagram, _ConvBlock_ is a conv, batch-norm, ReLU sequence and _2up_ is a 2x bi-linear upsampling.

Figure 4: left: Examples of front and side view images in the target poses. Ground-truth annotation masks are overlaid with green color onto the pictures.
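The augmentation pipeline described above can be sketched with detectron2's transform API as follows; the exact size list and jitter ranges shown here are illustrative assumptions rather than our precise configuration.

```python
from detectron2.data import transforms as T

augmentations = [
    # Shorter side sampled from a predefined list, longer side capped at 1024.
    T.ResizeShortestEdge(
        short_edge_length=(120, 240, 400, 560, 680, 800),  # illustrative list
        max_size=1024,
        sample_style="choice",
    ),
    T.RandomFlip(horizontal=True),
    T.RandomBrightness(0.9, 1.1),   # stands in for the color/intensity jitter
    T.RandomSaturation(0.9, 1.1),
]
```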
For the fine-tuning step, as is standard practice, we experimented with freezing the first \(N\) (\(N\leq 2\)) stages of the backbone to improve generalization of the model and avoid over-fitting, but we observed that not freezing any layers improves generalization in our case. We also experimented with reducing the learning rate by \(\times 10\) for the fine-tuning step compared to the pre-training learning rate. However, we found that the model converged quicker and to better results when we did not reduce the learning rate.
### Evaluation
We evaluate our models with mean Intersection over Union (mIoU), defined in Equation 1. The per-sample IoU lies in the \([0,1]\) range, and the mean is reported as a percentage.
\[\textit{mIoU}=\frac{1}{N}\sum_{i=1}^{N}\frac{|\textit{pred}_{i}\cap\textit{ GT}_{i}|}{|\textit{pred}_{i}\cup\textit{GT}_{i}|}\times 100 \tag{1}\]
where _pred_ and _GT_ are the prediction and ground-truth segmentation masks of sample \(i\) respectively. During evaluation, images are sized to 1024 on height after cropping them to the person bounding box.
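A direct numpy implementation of Equation 1 for binary person/background masks might look as follows (a sketch, assuming predictions and ground truths are given as boolean arrays of equal shape):

```python
import numpy as np

def miou(preds, gts):
    """preds, gts: lists of HxW boolean masks for N samples."""
    ious = []
    for pred, gt in zip(preds, gts):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / union if union > 0 else 1.0)
    return 100.0 * np.mean(ious)   # reported as a percentage
```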
## 5 Experimental Results
In this section, first we explore the effect of different aspects of our model. Then we do a quantitative and qualitative comparison of our method with two other related person-segmentation methods.
### Effect of Model Design Choices
**Reduction of Size of Components:** As shown in Table 1, starting from the baseline model (351.7MB), we first obtain a model size of 52.0MB by replacing the ResNeXt101 feature backbone with MnasNet, saving around 300 MB. Then we applied Quantization Aware Training, which further shrinks the model size by \(\times 4\), resulting in 12.9MB, as we replace 32 bit floating point with int8 representation.
We then show that the top-down branch of the FPN, which combines high-level semantic features with low-level features, can be removed from the model with only a 0.1% reduction in accuracy. We argue that in our model PointRend performs the job of merging high-level and low-level features, thus making the top-down path of the FPN mostly redundant. Furthermore, the scale of persons in our data does not vary enough to require FPN top-down features.
| Model configuration | Size (MB) | mIoU (%) |
| --- | --- | --- |
| ResNeXt101 + FPN-SemSeg + PointRend | 351.7 | 97.8 \(\pm\) 1.0 |
| Replace ResNeXt101 with MnasNet-B1 | 52.0 | 97.7 \(\pm\) 0.9 |
| Quantization-Aware Training | 12.9 | 97.7 \(\pm\) 0.9 |
| Remove FPN-top-down path (ALiSNet) | **4.0** | 97.6 \(\pm\) 1.0 |
| Google MLKit (BlazePose-Segmentation), accurate | 27.7 | 93.9 \(\pm\) 5.3 |
| Google MLKit (BlazePose-Segmentation), balanced | 6.4 | 93.7 \(\pm\) 5.9 |
| Apple Person Segmentation (accurate) | - | 94.3 \(\pm\) 5.9 |
| Apple Person Segmentation (balanced) | - | 94.4 \(\pm\) 5.7 |

Table 1: Effect of each model change step on the accuracy and size of the model. mIoU values are reported as mean \(\pm\) std in percent.

| Model | Size (MB) | mIoU with COCO | mIoU with fine-tuning |
| --- | --- | --- | --- |
| ResNeXt101 + FPN-SemSeg + PointRend | 351.7 | 94.0 \(\pm\) 3.8 | 97.8 \(\pm\) 1.0 |
| MobileNetV3 + FPN-SemSeg + PointRend | 35.1 | 91.2 \(\pm\) 6.2 | 97.7 \(\pm\) 1.1 |
| (Q) MnasNet-B1 + SemSeg + PointRend (ALiSNet) | **4.0** | 90.0 \(\pm\) 6.8 | 97.6 \(\pm\) 0.9 |

Table 2: Effect of fine-tuning on our dataset. First mIoU column: results after training on COCO only. Second mIoU column: results after fine-tuning on our dataset. (Q) indicates the model is quantized using quantization-aware training.
Fine-tuning:We first train all the models on the COCO person class. In Table 2, we show the effect of fine-tuning these models on our high-quality task-specific dataset. It is clear that the fine-tuning significantly improves the mIoU, and the effect is greater for smaller models.
PointRend:Although the PointRend module adds 0.2MB to the size and 20% to the runtime of our method, we use it, as Table 3 shows that it adds around 0.3% mIoU to the model accuracy.
CNN backbones choice:We compared the effect of using MobileNetV3 Howard et al. (2019), MnasNet Tan et al. (2019) and FBNetV3 Dai et al. (2021) as mobile-friendly backbone feature extractors. Table 4 shows that all of the mobile feature extractors have around the same performance on our task, which is only 0.1% lower than the much larger ResNeXt101. We chose MnasNet due to its lower variance and its availability in the torchvision framework.
### Runtime on mobile devices
We evaluated our model on a set of real mobile devices provided by the AWS Device Farm1. The distribution of runtimes is shown in Figure 5. For this evaluation, 90 high-resolution images from the dataset are processed using our in-house evaluation mobile app. Images are resized to 2k resolution in height while preserving the aspect ratio and then cropped to the person bounding box. The cropped images are passed to the model, where they are resized to 1024 in height internally before segmentation. As the bounding box of the person varies between images, a significant variance of the runtime on a single device can be observed. On recent iPhones the model runs well below 1s, and for an older Android phone like the Moto G4 the mean is around 8s. On the iPhone SE and Galaxy S21 there are some outliers in runtime, for reasons we were not able to determine. The model runs in CPU mode due to the limited support of PyTorch for mobile GPUs in quantized models.
Footnote 1: [https://aws.amazon.com/device-farm/](https://aws.amazon.com/device-farm/)
### Comparison to BlazePose and Apple Person Segmentation
We compare with two on-device person segmentation methods.
BlazePose (Bazarevsky et al., 2020) is an on-device real-time body pose tracking method which provides a segmentation prediction option. We compare two settings of the BlazePose model, balanced and accurate, which have model sizes of 6.4MB and 27.7MB respectively.
We also compare to Apple Person Segmentation which was made available in iOS15. Information about the details of this model is not available.
The output segmentation maps of these methods are probabilistic and need to be thresholded to compute the final binary silhouette maps. Thresholds were determined using a sweep of values and were set to 0.5 for BlazePose and 0.3 for Apple.
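Such a sweep can be sketched as below, reusing the `miou` helper sketched in subsection 4.3; the candidate grid is an illustrative assumption.

```python
import numpy as np

def best_threshold(prob_maps, gt_masks, candidates=np.linspace(0.1, 0.9, 17)):
    """Pick the binarization threshold that maximizes mIoU on held-out data."""
    scores = []
    for t in candidates:
        preds = [p >= t for p in prob_maps]     # binarize at threshold t
        scores.append(miou(preds, gt_masks))
    return candidates[int(np.argmax(scores))]
```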
It is not possible to fine-tune either of these methods on our data.
As seen in Table 1, both BlazePose and Apple segmentation have a much lower mIoU than ALiSNet on our dataset, while having a much higher standard deviation. This indicates that fine-tuning on our data allows our model to avoid certain mistakes in segmentation. Neither of the compared methods is designed for the use case of segmentation for accurate human body measurement. BlazePose is optimized for real-time pose estimation, which means it lies on a different point of the performance-accuracy trade-off. Apple Person Segmentation is designed to power Portrait Mode. Segmentation examples from all three methods are shown in Figure 6. In the front view, both our method and Apple produce more accurate segmentations than BlazePose. In the side view, we see that all three methods have difficulty with background objects, and that the Apple method produces artifacts around the lamp.
Figure 7 shows overlays from the test set of our dataset. In this figure we show both "bad" and "good" segmentations from our model, however, we see that our segmentation has a higher IoU than the other methods, as we have trained our model on this dataset.
## 6 Discussion
We presented a method for accurate mobile human segmentation along with a set of general steps that can be used to simplify existing large-scale models for on-device applications.
Although our model handles most images well, there are cases with confusing background textures where our method and other methods fail, as shown in Figure 8. Other challenging conditions include dim lighting, dark shadows and other image distortions. Improving the performance under these conditions would be an important future direction.

Figure 7: Overlays of predicted segmentations (intersection in blue) over the ground truth annotations from our in-house dataset, for our method, BlazePose (BP) and Apple, with the IoU. We ranked the images in the test set by their mIoUs using ALiSNet, and display the 5th, 10th, 90th and 95th-percentile images in this ranking. As the photos are confidential, we show only the silhouettes here. After data collection, all images were anonymized using face-blur, which is seen in the Apple segmentation in the first image. In one of the BP segmentations the issue of bounding box prediction cutting out part of the feet is visible.

Figure 8: Examples of failure cases for segmentation algorithms. In our user-collected dataset, people take pictures at home, and sometimes have clothes (top) or mirrors (bottom) in the background, which cause the segmentation method to not work correctly. This is a shortcoming of all the methods we evaluate here.
In the future we will experiment with on-device segmentation models for accurate body shape and measurement estimation.
## Acknowledgements
We would like to thank Julia Friberg for her contributions in evaluation of the models on real mobile phones.
|
2306.09367 | On asymptotic normality of the total progeny in the positive recurrent
Q-processes | We examine the population growth system called Q-processes. This is defined
by the Galton-Watson Branching system conditioned on non-extinction of its
trajectory in the remote future. In this paper we observe the total progeny up
to time $n$ in the Q-process. By analogy with branching systems, this variable
is of great interest in studying the deep properties of the Q-process. We find
that the appropriately normalized total progeny approximates the standard
normal distribution function under a second-moment assumption on the offspring
law of the initial Galton-Watson system. We estimate the convergence rate of this
approximation. | Azam A. Imomov, Zuhriddin A. Nazarov | 2023-06-14T12:48:12Z | http://arxiv.org/abs/2306.09367v1 | # On asymptotic normality of the total progeny in the positive recurrent Q-processes
###### Abstract.
We examine the population growth system called Q-processes. This is defined by the Galton-Watson Branching system conditioned on non-extinction of its trajectory in the remote future. In this paper we observe the total progeny up to time \(n\) in the Q-process. By analogy with branching systems, this variable is of great interest in studying the deep properties of the Q-process. We find that the appropriately normalized total progeny approximates the standard normal distribution function under a second-moment assumption on the offspring law of the initial Galton-Watson system. We estimate the convergence rate of this approximation.
Key words and phrases:Branching system, Q-process, Markov chain, generating function, transition probabilities, invariant distribution, extinction time, total progeny, positive recurrent, central limit theorem, law of large numbers. 2010 Mathematics Subject Classification: Primary 60J80; Secondary 60J85
## 1. Introduction and main results
In the general theory of random processes, models of stochastic branching systems are particularly important, and nowadays there is great interest in these models. The creation of the theory of branching models is related to the possibility of estimating the survival probability of a population of monotypic individuals. The discrete-time simple branching process model was introduced by Francis Galton in 1889 as a mathematical model for the growth of family populations; it is now called the Galton-Watson Branching (GWB) system; see [1], [2], [4], [7], [8], [9] and [12]. GWB models play a fundamental role in both the theory and applications of stochastic processes. Among the random trajectories of branching systems, there are those that continue for a long time. In the case of the GWB model, the class of such trajectories forms another stochastic model called the Q-process; see [2] and [6]. In the case of continuous-time Markov branching systems, an analogous model, called the _Markov Q-process_, was first introduced in [5].
Let \(\{Z(n),n\in\mathbb{N}_{0}\}\) be a GWB system with branching rates \(\{p_{k},k\in\mathbb{N}_{0}\}\), where \(\mathbb{N}_{0}=\{0\}\cup\mathbb{N}\) and \(\mathbb{N}=\{1,2,\ldots\}\); the variable \(Z(n)\) denotes the population size at the moment \(n\) in the system. The evolution of the system occurs according to the following mechanism. Each individual lives a unit-length lifetime and then gives \(k\in\mathbb{N}_{0}\) descendants with probability \(p_{k}\). This process is a reducible, homogeneous, discrete-time Markov chain with a state space consisting of two classes: \(\mathcal{S}_{0}=\{0\}\cup\mathcal{S}\), where \(\{0\}\) is an absorbing state and \(\mathcal{S}\subset\mathbb{N}\) is the class of possible essential communicating states. Throughout the paper we assume that \(p_{0}>0\) and \(p_{0}+p_{1}>0\), which is called the Schröder case. We suppose that \(p_{0}+p_{1}<1\) and \(m:=\sum_{k\in\mathcal{S}}kp_{k}<\infty\).
Considering transition probabilities
\[P_{ij}(n):=\mathbb{P}\left\{Z(n+k)=j\,\big{|}\,Z(k)=i\right\}\qquad\text{for any}\quad k\in\mathbb{N}_{0}\]
we observe that the corresponding probability generating function (GF)
\[\sum_{j\in\mathcal{S}_{0}}P_{ij}(n)s^{j}=\big{[}f_{n}(s)\big{]}^{i}, \tag{1.1}\]
where \(f_{n}(s):=\sum_{k\in\mathcal{S}_{0}}\mathsf{p}_{k}(n)s^{k}\), therein \(\mathsf{p}_{k}(n):=P_{1k}(n)\) and, in the same time \(f_{n}(s)\) is \(n\)-fold iteration of the offspring GF
\[f(s):=\sum_{k\in\mathcal{S}_{0}}p_{k}s^{k}.\]
Needless to say, \(f_{n}(0)=\mathsf{p}_{0}(n)\) is the probability that the system initiated by one individual has become extinct by time \(n\). Note that this probability tends, as \(n\to\infty\), monotonically to \(q\), which is called the extinction probability of the system, i.e. \(\lim_{n\to\infty}\mathsf{p}_{0}(n)=q\); see [2].
The extinction probability satisfies the following dichotomy (illustrated numerically after the classification below):
* \(q=1\) if \(m\leq 1\);
* \(q<1\) if \(m>1\).
Based on this, according to the values of the parameter \(m\), the system is called
* _sub-critical_ if \(m<1\);
* _critical_ if \(m=1\);
* _super-critical_ if \(m>1\).
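As a numerical illustration (with an arbitrary example offspring law, not taken from the paper), \(q\) can be computed as the smallest non-negative root of \(f(s)=s\) by the monotone iteration \(q\leftarrow f(q)\) started from 0:

```python
import numpy as np

p = np.array([0.3, 0.2, 0.3, 0.2])       # example offspring law p_0..p_3

def f(s, p=p):                            # offspring GF f(s) = sum_k p_k s^k
    return np.polyval(p[::-1], s)

m = np.sum(np.arange(len(p)) * p)         # mean offspring number
q = 0.0
for _ in range(10_000):                   # monotone iteration converges to q
    q = f(q)
print(f"m = {m:.3f}, extinction probability q = {q:.6f}")
```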
In what follows we deal with the GWB system conditioned on the event \(\{n<\mathcal{H}<\infty\}\), where \(\mathcal{H}\) is the extinction time of the system, i.e. \(\mathcal{H}:=\min\left\{n\in\mathbb{N}:Z(n)=0\right\}\). Let \(\mathsf{P}_{i}\big{\{}*\big{\}}:=\mathsf{P}\left\{*\big{|}\ Z(0)=i\right\}\) and define the conditional probability measure
\[\mathsf{P}_{i}^{\mathcal{H}(n+k)}\{*\}:=\mathsf{P}_{i}\left\{*\big{|}\ n+k< \mathcal{H}<\infty\right\}\qquad\text{ for any }\quad k\in\mathbb{N}.\]
In [2, p. 58] it is proved that
\[\mathcal{Q}_{ij}(n):=\lim_{k\to\infty}\mathsf{P}_{i}^{\mathcal{H}(n+k)}\big{\{} Z(n)=j\big{\}}=\frac{jq^{j-i}}{i\beta^{n}}P_{ij}(n), \tag{1.2}\]
where \(\beta:=f^{\prime}(q)\). Observe that \(\sum_{j\in\mathbb{N}}\mathcal{Q}_{ij}(n)=1\) for each \(i\in\mathbb{N}\). Thus, the probability measure \(\mathcal{Q}_{ij}(n)\) defines a new population growth system with state space \(\mathcal{E}\subset\mathbb{N}\), which we denote by \(\{W(n),n\in\mathbb{N}_{0}\}\). This is a discrete-time, homogeneous, irreducible Markov chain, defined in the book [2, p. 58] and called _the Q-process_. Undoubtedly \(W(0)\,{\buildrel d\over{=}}\,Z(0)\), and the transition probabilities are
\[\mathcal{Q}_{ij}(n):=\mathsf{P}\left\{W(n)=j\ \Big{|}\ W(0)=i\right\}=\mathsf{P}_ {i}\left\{Z(n)=j\ \Big{|}\ \mathcal{H}=\infty\right\},\]
so that the Q-process can be interpreted as a "long-living" GWB system.
Consider the GF
\[w_{n}^{(i)}(s):=\sum_{j\in\mathcal{E}}\mathcal{Q}_{ij}(n)s^{j}.\]
Then from (1.1) and (1.2) we obtain
\[w_{n}^{(i)}(s)=\left[\frac{f_{n}(qs)}{q}\right]^{i-1}\cdot w_{n}(s), \tag{1.3}\]
where the GF \(w_{n}(s):=w_{n}^{(1)}(s)=\mathsf{E}\left[s^{W(n)}\ \big{|}\ W(0)=1\right]\) has the form
\[w_{n}(s)=s\frac{f_{n}^{\prime}(qs)}{\beta^{n}}\quad\text{ for all }\quad n\in\mathbb{N}. \tag{1.4}\]
Using iterations for \(f(s)\) in (1.3) leads to the following functional equation:
\[w_{n+1}^{(i)}(s)=\frac{w(s)}{f_{q}(s)}w_{n}^{(i)}\big{(}f_{q}(s)\big{)}, \tag{1.5}\]
where \(w(s):=w_{1}(s)\) and \(f_{q}(s)=f(qs)\big{/}q\). Thus, the Q-process is completely defined by the GF
\[w(s)=s\frac{f^{\prime}(qs)}{\beta}. \tag{1.6}\]
The evolution of the Q-process is essentially regulated by the structural parameter \(\beta>0\). In fact, it has been shown in [2, p. 59, Theorem 2] that
* \(\mathcal{E}\) _is positive recurrent_ if \(\beta<1\);
* \(\mathcal{E}\) _is transient_ if \(\beta=1\).
On the other hand, it is easy to see that the positive recurrent case \(\beta<1\) of the Q-process corresponds to the non-critical case \(m\neq 1\) of the initial GWB system. Note that always \(\beta\leq 1\).
In this paper we deal with the positive recurrent case, assuming that the first moment \(\alpha:=w^{\prime}(1-)\) is finite. Then, differentiating (1.6) at the point \(s=1\), we obtain \(\alpha=1+\gamma_{q}\cdot(1-\beta)\), where
\[\gamma_{q}:=\frac{qf^{\prime\prime}(q)}{\beta\left(1-\beta\right)}.\]
It follows from (1.3) and (1.4) that
\[\mathsf{E}_{i}W(n):=\mathsf{E}\left[W(n)\;\middle|\;W(0)=i\right]=\left(i-1 \right)\beta^{n}+\mathsf{E}W(n),\]
where \(\mathsf{E}W(n)=1+\gamma_{q}\cdot\left(1-\beta^{n}\right)\).
It is obvious that when the initial GWB system is sub-critical, the condition \(\alpha<\infty\) is equivalent to \(f^{\prime\prime}(1-)<\infty\). In what follows this condition is assumed by default.
Our purpose is to investigate asymptotic properties of a random variable
\[S_{n}=W(0)+W(1)+\cdots+W(n-1),\]
denoting the total number of individuals that have existed in the Q-process up to the \(n\)-th generation. By analogy with branching systems, this variable is of great interest in studying the deep properties of the Q-process. For details on the total progeny in GWB systems and related models, see e.g. [8], [9], [10], [11].
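Since the Q-process is completely determined by the GFs in (1.3)-(1.6), for an offspring law with finite support the one-step law from state \(i\) is given by the coefficients of the polynomial \(\big{[}f_{q}(s)\big{]}^{i-1}w(s)\) and can be tabulated by convolution. The following sketch (reusing the array `p` and the value `q` from the snippet in the Introduction) simulates \((W(n),S_{n})\) in this way; it is an illustration, not part of the proofs:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.arange(len(p))                          # p, q from the earlier sketch

beta = np.sum(k * p * q ** (k - 1.0))          # beta = f'(q)
fpp = np.sum(k * (k - 1) * p * q ** (k - 2.0)) # f''(q)
gamma_q = q * fpp / (beta * (1 - beta))

f_q = p * q ** (k - 1.0)                       # coeffs of f_q(s) = f(qs)/q
w = k * p * q ** (k - 1.0) / beta              # coeffs of w(s) = s f'(qs)/beta

def step(i):
    """Sample W(n+1) given W(n)=i: pmf = coeffs of [f_q(s)]^(i-1) w(s)."""
    pmf = w
    for _ in range(i - 1):
        pmf = np.convolve(pmf, f_q)
    return rng.choice(len(pmf), p=pmf / pmf.sum())

n, trials, totals = 200, 500, []
for _ in range(trials):
    W, S = 1, 0
    for _ in range(n):
        S += W
        W = step(W)
    totals.append(S)
print("mean S_n / n =", np.mean(totals) / n, "vs 1 + gamma_q =", 1 + gamma_q)
```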
Throughout this paper we will use the standard Landau symbols \(o\), \(\mathcal{O}\) and \(\mathcal{O}^{*}\) to describe bounds on the asymptotic rates of positive functions \(f(x)\) and \(g(x)\) for all large enough values of the argument. So, \(f=o(g)\) means that \(\lim_{x}f(x)\big{/}g(x)=0\); we write \(f=\mathcal{O}(g)\) if \(\limsup_{x}f(x)\big{/}g(x)<\infty\); and we write \(f=\mathcal{O}^{*}(g)\) if the ratio \(f(x)\big{/}g(x)\) has a positive finite limit, i.e. \(\lim_{x}f(x)\big{/}g(x)=C<\infty\). Moreover, \(f\sim g\) means that \(\lim_{x}f(x)\big{/}g(x)=1\).
Our main results are analogues of Central Limit Theorem and Law of Large Numbers for \(S_{n}\). Let \(\mathcal{N}\left(0,\sigma^{2}\right)\) be a normal distributed random variable with the zero mean and the finite variance \(\sigma^{2}\) and \(\Phi_{0,\sigma^{2}}(x)\) is its distribution function.
**Theorem 1**.: _Let \(\beta<1\) and \(\alpha<\infty\). Then there exists a positive real-valued sequence \(\mathcal{K}_{n}\) such that \(\mathcal{K}_{n}=\mathcal{O}^{*}(\sqrt{n})\) and_
\[\frac{S_{n}-\mathsf{E}S_{n}}{\mathcal{K}_{n}}\stackrel{{ d}}{{ \longrightarrow}}\mathcal{N}\left(0,\sigma^{2}\right)\qquad\text{as}\quad n \rightarrow\infty,\]

_where the symbol "\(\stackrel{{ d}}{{\longrightarrow}}\)" means convergence in distribution._
**Theorem 2**.: _Let \(\beta<1\) and \(\alpha<\infty\). Then there exists a slowly varying function at infinity \(\mathcal{L}(*)\) such that_
\[\left|\mathsf{P}\left\{\frac{S_{n}-\mathsf{E}S_{n}}{\mathcal{K}_{n}}<x\right\}-\Phi_{0,\sigma^{2}}(x)\right|\leq\frac{\mathcal{L}(n)}{n^{1/4}}\]
_uniformly in \(x\)._
Let \(I_{a}\) be the degenerate distribution concentrated at the point \(a\), i.e.

\[I_{a}(B)=\left\{\begin{array}{ll}1&\quad\text{if}\quad a\in B,\\ \\ 0&\quad\text{if}\quad a\notin B.\end{array}\right.\]
**Theorem 3**.: _Let \(\beta<1\) and \(\alpha<\infty\). Then_
\[\frac{S_{n}}{n}\overset{P}{\longrightarrow}1+\gamma_{q}\qquad\text{as}\quad n \to\infty.\]
_Moreover, there exists a slowly varying function at infinity \(\mathcal{L}_{\gamma}(*)\) such that_
\[\left|\mathsf{P}\left\{\frac{S_{n}}{n}<x\right\}-I_{1+\gamma_{q}}(x)\right|\leq\frac{ \mathcal{L}_{\gamma}(n)}{\sqrt{n}}\]
_uniformly in \(x\), where_
\[I_{1+\gamma_{q}}(x)=\left\{\begin{array}{ll}0&\quad\text{if}\quad x\leq 1+ \gamma_{q},\\ \\ 1&\quad\text{if}\quad x>1+\gamma_{q}.\end{array}\right.\]
The rest of this paper is organized as follows. Section 2 provides auxiliary statements that will be essentially used in the proof of our theorems. Section 3 is devoted to the proof of main results.
## 2. Preliminaries
Further we need the joint GF of the variables \(W(n)\) and \(S_{n}\)
\[J_{n}(s;x)=\sum_{j\in\mathcal{E}}\sum_{l\in\mathbb{N}}\mathsf{P}\left\{W(n)=j, S_{n}=l\right\}s^{j}x^{l}\]
on a two-dimensional domain
\[\mathbb{K}=\left\{(s;x)\in\mathbb{R}^{2}:\;s\in[0,1],\;x\in[0,1],\;\sqrt{(s-1 )^{2}+(x-1)^{2}}>0\right\}.\]
Due to the Markov nature of the Q-process, we see that the two-dimensional one-step joint-transition probabilities
\[\mathsf{P}\left\{W(n+1)=j,S_{n+1}=l\;\Big{|}\;W(n)=i,S_{n}=k\right\}=\mathsf{ P}_{i}\left\{W(1)=j,S_{1}=l\right\}\delta_{l,i+k},\]
where \(\delta_{ij}\) is the Kronecker's delta function:
\[\delta_{ij}=\left\{\begin{array}{ll}1&\quad\text{if}\quad i=j,\\ \\ 0&\quad\text{if}\quad i\neq j.\end{array}\right.\]
Therefore, we have
\[\mathsf{E}_{i}\left[s^{W(n+1)}x^{S_{n+1}}\;\Big{|}\;S_{n}=k\right] = \sum_{j\in\mathcal{E}}\sum_{l\in\mathbb{N}}\mathsf{P}_{i}\left\{W(1)=j,S_{1}=l\right\}\delta_{l,i+k}s^{j}x^{l}\] \[= \sum_{j\in\mathcal{E}}\mathsf{P}_{i}\left\{W(1)=j\right\}s^{j}x^{i+k}=w^{(i)}(s)x^{i+k}.\]
Next, using the formula of total probabilities, we obtain
\[J_{n+1}(s;x) = \mathsf{E}\left[\mathsf{E}\left[s^{W(n+1)}x^{S_{n+1}}\ \Big{|}\ W(n),S_{n}\right]\right]=\mathsf{E}\left[w^{(W(n))}(s)x^{W(n)+S_{n}}\right]\] \[= \mathsf{E}\left[w(s)\big{(}f_{q}(s)\big{)}^{W(n)-1}x^{W(n)+S_{n}}\right]=\frac{w(s)}{f_{q}(s)}\mathsf{E}\left[\big{(}xf_{q}(s)\big{)}^{W(n)}x^{S_{n}}\right].\]
In the last line we used formula (1.3). Thus we have
\[J_{n+1}(s;x)=\frac{w(s)}{f_{q}(s)}J_{n}\big{(}xf_{q}(s);x\big{)} \tag{2.1}\]
for \((s,x)\in\mathbb{K}\) and any \(n\in\mathbb{N}\).
Using relation (2.1), we can now obtain an explicit expression for the GF \(J_{n}(s;x)\). Indeed, applying it successively, taking into account (1.6), and after standard transformations, we have
\[J_{n}(s;x)=\frac{s}{\beta^{n}}\frac{\partial H_{n}(s;x)}{\partial s}, \tag{2.2}\]
where the function \(H_{n}(s;x)\) is defined for any \((s;x)\in\mathbb{K}\) by the following recursive relations:
\[\left\{\begin{array}{l}H_{0}(s;x)=s;\\ \\ H_{n+1}(s;x)=xf_{q}\big{(}H_{n}(s;x)\big{)}.\end{array}\right. \tag{2.3}\]
Since \(\partial J_{n}(s;x)\big{/}\partial x\Big{|}_{(s;x)=(1;1)}=\mathsf{E}S_{n}\), from (2.2) and (2.3), we find that
\[\mathsf{E}S_{n}=(1+\gamma_{q})n-\gamma_{q}\frac{1-\beta^{n}}{1-\beta}. \tag{2.4}\]
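Formula (2.4) can be checked numerically against the simulation sketch from the Introduction (assuming `beta`, `gamma_q` and the simulated `totals` from that snippet are in scope):

```python
import numpy as np

def expected_S(n, beta, gamma_q):
    """Right-hand side of (2.4)."""
    return (1 + gamma_q) * n - gamma_q * (1 - beta ** n) / (1 - beta)

# Compare with the Monte Carlo estimate from the earlier sketch:
# print(expected_S(200, beta, gamma_q), "vs", np.mean(totals))
```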
**Remark 1**.: Needless to say, the GF \(f_{q}(s)=f(qs)\big{/}q\) generates a sub-critical GWB system. Denoting the population size in this system by \(Z_{q}(n)\), we define the sum \(V_{n}=\sum_{k=0}^{n-1}Z_{q}(k)\), which is the total progeny of individuals that participated in the evolution of the system \(\big{\{}Z_{q}(n),n\in\mathbb{N}_{0}\big{\}}\) up to the \(n\)-th generation. It is known that the GF of the joint distribution \(\big{(}Z_{q}(n),V_{n}\big{)}\) satisfies the recursive equation (2.3); see [10, p. 126]. Thus, the function \(H_{n}(s;x)\) is a two-dimensional GF for all \(n\in\mathbb{N}\) and \((s;x)\in\mathbb{K}\) and obeys all the properties of the GF \(\mathsf{E}\left[s^{Z_{q}(n)}x^{V_{n}}\right]\).
By virtue of what was said in Remark 1, in studying \(H_{n}(s;x)\) we use the properties of the GF \(\mathsf{E}\left[s^{Z_{q}(n)}x^{V_{n}}\right]\). Since the system \(\big{\{}Z_{q}(n)\big{\}}\) is sub-critical, it goes extinct with probability 1. Therefore, there exists a proper random variable \(V=\lim_{n\to\infty}V_{n}\), which represents the total number of individuals that participated in the whole evolution of the system. So
\[h(x):=\mathsf{E}x^{V}=\lim_{n\to\infty}\mathsf{E}x^{V_{n}}=\lim_{n\to\infty}H _{n}(1;x)\]
and, according to (2.3) it satisfies the functional equation
\[h(x)=xf_{q}\big{(}h(x)\big{)}. \tag{2.5}\]
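Numerically, for fixed \(x\in(0,1]\), equation (2.5) can be solved by the monotone iteration \(h\leftarrow xf_{q}(h)\) started from 0, which converges to the smallest root, i.e. to \(h(x)\). A sketch, reusing the `f_q` coefficient array from the earlier snippet:

```python
import numpy as np

def h_of(x, f_q_coeffs, iters=5_000):
    """Solve h = x * f_q(h) by fixed-point iteration (coeffs: p_k q^(k-1))."""
    h = 0.0
    for _ in range(iters):
        h = x * np.polyval(f_q_coeffs[::-1], h)
    return h

# Sanity check: V is a proper random variable, so h(1) = 1.
# print(h_of(1.0, f_q))
```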
Further, we note that
\[\mathsf{P}\big{\{}Z_{q}(n)=0,V_{n}=k\big{\}}=\mathsf{P}\big{\{}Z_{q}(n)=0,V=k \big{\}}.\]
Then, due to the monotonicity of the probabilistic GF, we find
\[\mathsf{P}\big{\{}V=k\big{\}}-\sum_{i\in\mathbb{N}}\mathsf{P}\big{\{}Z_{q}(n)= i,V_{n}=k\big{\}}s^{i}\leq\mathsf{P}\big{\{}V=k,Z_{q}(n)>0\big{\}}.\]
Therefore, denoting
\[R_{n}(s;x):=h(x)-H_{n}(s;x)\]
for \((s;x)\in\mathbb{K}\), we have
\[R_{n}(s;x)\leq\sum_{k\in\mathbb{N}}\mathsf{P}\left\{V=k,Z_{q}(n)>0\right\}x^{k}= R_{n}(0;x).\]
It is easy to see \(R_{n}(0;x)\leq R_{n}(0;1)=\mathsf{P}\left\{Z_{q}(n)>0\right\}\). Then
\[\left|R_{n}(s;x)\right|\leq\mathsf{P}\left\{Z_{q}(n)>0\right\}\longrightarrow 0 \qquad as\quad n\to\infty. \tag{2.6}\]
On the other hand, since \(H_{n-1}(s;x)\leq h(x)\leq 1\) and \(f_{q}^{\prime}\) is non-decreasing with \(f_{q}^{\prime}(1-)=\beta\), we have

\[R_{n}(s;x) = x\left[f_{q}\big{(}h(x)\big{)}-f_{q}\big{(}H_{n-1}(s;x)\big{)}\right]\leq xf_{q}^{\prime}(1-)\big{[}h(x)-H_{n-1}(s;x)\big{]}\leq\beta R_{n-1}(s;x)\]
for all \((s;x)\in\mathbb{K}\). This implies that
\[\left|R_{n}(s;x)\right|\leq\beta^{n-k}\big{|}R_{k}(s;x)\right| \tag{2.7}\]
for any \(n\in\mathbb{N}\) and \(k=0,1,\ldots,n\).
In what follows, where the function \(R_{n}(s;x)\) will be used, we deal with the domain \(\mathbb{K}\), where this function does not vanish. By virtue of (2.6), taking into account (2.3), (2.5), we obtain the asymptotic formula
\[R_{n+1}(s;x)=xf_{q}^{\prime}\big{(}h(x)\big{)}R_{n}(s;x)-x\frac{f_{q}^{\prime \prime}\big{(}h(x)\big{)}+\eta_{n}(s;x)}{2}R_{n}^{2}(s;x), \tag{2.8}\]
where \(\left|\eta_{n}(s;x)\right|\to 0\) as \(n\to\infty\) uniformly in \((s;x)\in\mathbb{K}\). Since \(R_{n}(s;x)\to 0\), it follows from (2.8) that
\[R_{n}(s;x)=\frac{R_{n+1}(s;x)}{xf_{q}^{\prime}\big{(}h(x)\big{)}}(1+o(1)) \qquad as\quad n\to\infty.\]
Using last equality, we transform (2.8) to the form
\[R_{n+1}(s;x)=xf_{q}^{\prime}\big{(}h(x)\big{)}R_{n}(s;x)-\left[\frac{f_{q}^{ \prime\prime}\big{(}h(x)\big{)}}{2f_{q}^{\prime}\big{(}h(x)\big{)}}+\varepsilon _{n}(s;x)\right]R_{n}(s;x)R_{n+1}(s;x)\]
and, therefore
\[\frac{u(x)}{R_{n+1}(s;x)}=\frac{1}{R_{n}(s;x)}+\upsilon(x)+\varepsilon_{n}(s; x), \tag{2.9}\]
where
\[u(x)=xf_{q}^{\prime}\big{(}h(x)\big{)}\qquad and\qquad\upsilon(x)=x\frac{f_{q}^ {\prime\prime}\big{(}h(x)\big{)}}{2u(x)}\]
and \(\sup_{(s;x)\in\mathbb{K}}\big{|}\varepsilon_{n}(s;x)\big{|}\leq\varepsilon_{ n}\to 0\) as \(n\to\infty\). By successively applying (2.9), we find the following representation for \(R_{n}(s;x)\):
\[\frac{u^{n}(x)}{R_{n}(s;x)}=\frac{1}{R_{0}(s;x)}+\frac{\upsilon(x)\big{[}1-u^ {n}(x)\big{]}}{1-u(x)}+\sum_{k=1}^{n}\varepsilon_{k}(s;x)u^{k}(x). \tag{2.10}\]
In what follows, our discussions will essentially be based on formula (2.10). Note that in the monograph [10, p. 136] this formula was stated for the critical GWB system.
Now, for convenience, we write
\[J_{n}(s;x)=s\prod_{k=0}^{n-1}\frac{xf_{q}^{\prime}\big{(}H_{k}(s;x)\big{)}}{\beta}\]
which is a direct consequence of formulas (2.2) and (2.3). In our notation, it is almost obvious that \(T_{n}(x):=\mathsf{E}x^{S_{n}}=J_{n}(1;x)\). Then it follows that
\[T_{n}(x)=\prod_{k=0}^{n-1}u_{k}(x), \tag{2.11}\]
where
\[u_{n}(x)=\frac{xf_{q}^{\prime}\big{(}h_{n}(x)\big{)}}{\beta},\]
where \(h_{n}(x)=\mathsf{E}x^{V_{n}}\), which satisfies the recurrence equation \(h_{n+1}(x)=xf_{q}\big{(}h_{n}(x)\big{)}\). Accordingly, the function \(\Delta_{n}(x):=h(x)-h_{n}(x)\) satisfies the inequality
\[\big{|}\Delta_{n}(x)\big{|}\leq\beta^{n-k}\big{|}\Delta_{k}(x)\big{|} \tag{2.12}\]
which is a consequence of (2.7). Successive application of the inequality (2.12) gives
\[\big{|}\Delta_{n}(x)\big{|}=\mathcal{O}\big{(}\beta^{n}\big{)}\to 0\qquad as \quad n\to\infty \tag{2.13}\]
uniformly in \(x\in\mathbb{K}\). Similarly to the case \(R_{n}(s;x)\), taking into account (2.13) we find the following representation:
\[\frac{u^{n}(x)}{\Delta_{n}(x)}=\frac{1}{h(x)-1}+\frac{v(x)\left[1-u^{n}(x) \right]}{1-u(x)}+\sum_{k=1}^{n}\varepsilon_{k}(x)u^{k}(x), \tag{2.14}\]
where \(\sup_{x\in\mathbb{K}}\big{|}\varepsilon_{n}(x)\big{|}\leq\varepsilon_{n}\to 0\) as \(n\to\infty\).
In our further discussion we will also need expansions of the functions \(h(x)\) and \(u(x)\) in the left neighborhood of the point \(x=1\).
**Lemma 1**.: _Let \(\beta<1\) and \(\alpha<\infty\). Then for GF \(h(x)=\mathbb{E}x^{V}\) the following local expansion holds:_
\[1-h(x)\sim\frac{1}{1-\beta}(1-x)-\frac{2\beta(1-\beta)+b_{q}}{2(1-\beta)^{3} }(1-x)^{2}\qquad as\quad x\uparrow 1, \tag{2.15}\]
_where \(b_{q}:=f_{q}^{\prime\prime}(1-)\)._
Proof.: We write the Peano's form Taylor expansion for \(h(x)=\mathbb{E}x^{V}\):
\[h(x)=1+h^{\prime}(1-)(x-1)+\frac{h^{\prime\prime}(1-)}{2}(x-1)^{2}+o(x-1)^{2} \qquad as\quad x\uparrow 1. \tag{2.16}\]
Formula (2.5) and standard calculations produce that
\[h^{\prime}(1-)=\frac{1}{1-\beta}\qquad and\qquad h^{\prime\prime}(1-)=\frac{ 2\beta(1-\beta)+b_{q}}{(1-\beta)^{3}}.\]
Substituting these expressions in the expansion (2.16), entails (2.15).
The lemma is proved.
Similar arguments can be used to verify the validity of the following lemma.
**Lemma 2**.: _Let \(\beta<1\) and \(\alpha<\infty\). Then_
\[u(x)=\beta x\big{[}1-\gamma_{q}(1-x)\big{]}+\rho(x), \tag{2.17}\]
_where_
\[\frac{\rho(x)}{(1-x)^{2}}\to const\qquad as\quad x\uparrow 1.\]
Proof.: Write the Taylor expansion with Lagrange error bound for \(f_{q}^{\prime}(y)\):
\[f_{q}^{\prime}(y)=\beta+f_{q}^{\prime\prime}(1)(y-1)+r(y),\]
where \(r(y)\leq A\cdot(y-1)^{2}\) as \(y\uparrow 1\) and \(A=const\). Since \(u(x)=xf_{q}^{\prime}\big{(}h(x)\big{)}\), taking herein \(y=h(x)\) and using (2.15) leads to (2.17).
The lemma is proved.
The following two results directly follow from Lemma 1 and Lemma 2 respectively.
**Lemma 3**.: _Let \(\beta<1\), \(\alpha<\infty\). Then_
\[h\big{(}e^{\theta}\big{)}-1\sim\frac{\theta}{1-\beta}+\frac{2+\beta\gamma_{q}} {2(1-\beta)^{2}}\theta^{2}\qquad as\quad\theta\to 0. \tag{2.18}\]
**Lemma 4**.: _Let \(\beta<1\), \(\alpha<\infty\). Then_
\[\frac{u\big{(}e^{\theta}\big{)}}{\beta}-1=(1+\gamma_{q})\theta+\rho(\theta), \tag{2.19}\]
_where \(\rho(\theta)=\mathcal{O}^{*}\left(\theta^{2}\right)\) as \(\theta\to 0\)._
Next Lemma follows from combination of (2.14), (2.18) and (2.19).
**Lemma 5**.: _Let \(\beta<1\), \(\alpha<\infty\). Then_
\[\frac{\Delta_{n}\big{(}e^{\theta}\big{)}}{u^{n}(e^{\theta})}=\frac{1}{1-\beta }\theta+\mathcal{O}^{*}\big{(}\theta^{2}\big{)}\qquad as\quad\theta\to 0 \tag{2.20}\]
_for any fixed \(n\in\mathbb{N}\)._
Now we prove the following lemma.
**Lemma 6**.: _Let \(\beta<1\), \(\alpha<\infty\). Then_
\[\ln\prod_{k=0}^{n-1}u_{k}\big{(}e^{\theta}\big{)}\sim-\left(1-\frac{u\big{(}e ^{\theta}\big{)}}{\beta}\right)n-\gamma_{q}\theta\cdot\sum_{k=0}^{n-1}u^{k} \big{(}e^{\theta}\big{)}\qquad as\quad\theta\to 0 \tag{2.21}\]
_for any fixed \(n\in\mathbb{N}\)._
Proof.: Using the inequality \(\ln(1-y)\geq-y-y^{2}\big{/}(1-y)\), which is valid for \(0\leq y<1\), we have
\[\ln\prod_{k=0}^{n-1}u_{k}\big{(}e^{\theta}\big{)} = \sum_{k=0}^{n-1}\ln\Big{\{}1-\Big{[}1-u_{k}\big{(}e^{\theta} \big{)}\Big{]}\Big{\}} \tag{2.22}\] \[= \sum_{k=0}^{n-1}\Big{[}u_{k}\big{(}e^{\theta}\big{)}-1\Big{]}+ \rho_{n}^{(1)}(\theta)=:I_{n}(\theta)+\rho_{n}^{(1)}(\theta),\]
where
\[I_{n}(\theta)=-\sum_{k=0}^{n-1}\Big{[}1-u_{k}\big{(}e^{\theta}\big{)}\Big{]}\,, \tag{2.23}\]
and
\[-\sum_{k=0}^{n-1}\frac{\big{[}1-u_{k}\big{(}e^{\theta}\big{)}\big{]}^{2}}{u_{k }\big{(}e^{\theta}\big{)}}\leq\rho_{n}^{(1)}(\theta)\leq 0.\]
It is easy to see that the sequence of functions \(\{h_{k}(x)\}\) is non-decreasing in \(k\in\mathbb{N}\). Then, by the properties of GFs, the function \(u_{k}\big{(}e^{\theta}\big{)}\) is also non-decreasing in \(k\) for any fixed \(\theta\in\mathbb{R}\). Therefore,
\[\frac{1-u_{0}\big{(}e^{\theta}\big{)}}{u_{0}\big{(}e^{\theta}\big{)}}I_{n}( \theta)\leq\rho_{n}^{(1)}(\theta)\leq 0. \tag{2.24}\]
According to the GF property, one can also verify that under our conditions \(1-u_{0}\big{(}e^{\theta}\big{)}\to 0\) as \(\theta\to 0\). Then, according to (2.24), \(\rho_{n}^{(1)}(\theta)\to 0\), provided \(I_{n}(\theta)\) has a finite limit as \(\theta\to 0\).
Using the Taylor formula, we write
\[f_{q}^{\prime}(t)=f_{q}^{\prime}(t_{0})-f_{q}^{\prime\prime}(t_{0})(t_{0}-t)+ (t_{0}-t)g(t_{0};t),\]
where \(g(t_{0};t)=(t_{0}-t)f_{q}^{\prime\prime\prime}(\tau)\big{/}2\) and \(\tau\) lies between \(t\) and \(t_{0}\). Hence, at \(t_{0}=h(x)\) and \(t=h_{k}(x)\) we have the following relation:
\[u_{k}(x)=\frac{u(x)}{\beta}-\frac{xf_{q}^{\prime\prime}\left(h(x)\right)}{ \beta}\Delta_{k}(x)+\Delta_{k}(x)g_{k}(x),\]
where \(g_{k}(x)=x\Delta_{k}(x)f_{q}^{\prime\prime\prime}(\tau)\big{/}2\beta\) and \(h_{k}(x)<\tau<h(x)\). Therefore,
\[u_{k}\big{(}e^{\theta}\big{)}=\frac{u\big{(}e^{\theta}\big{)}}{\beta}-\frac{ e^{\theta}f_{q}^{\prime\prime}\left(h\big{(}e^{\theta}\big{)}\right)}{\beta} \Delta_{k}\big{(}e^{\theta}\big{)}+\Delta_{k}\big{(}e^{\theta}\big{)}g_{k} \big{(}e^{\theta}\big{)}.\]
Then (2.23) becomes
\[I_{n}(\theta)=-\left[1-\frac{u\big{(}e^{\theta}\big{)}}{\beta} \right]n-\frac{e^{\theta}f_{q}^{\prime\prime}\left(h\big{(}e^{\theta}\big{)} \right)}{\beta}\sum_{k=0}^{n-1}\Delta_{k}\big{(}e^{\theta}\big{)}+\rho_{n}^{( 2)}(\theta), \tag{2.25}\]
where
\[0\leq\rho_{n}^{(2)}(\theta)\leq\Delta_{0}\big{(}e^{\theta}\big{)}\sum_{k=0}^{ n-1}g_{k}\big{(}e^{\theta}\big{)}.\]
In the last step we used the fact that \(\big{|}\Delta_{n}(x)\big{|}\leq\beta^{n}\big{|}\Delta_{0}(x)\big{|}\) which follows from inequality (2.12). It follows from (2.18) that \(\Delta_{0}\big{(}e^{\theta}\big{)}=\mathcal{O}(\theta)\) as \(\theta\to 0\). And also the asymptotic estimation (2.13) implies that \(g_{k}\big{(}e^{\theta}\big{)}=\mathcal{O}\big{(}\beta^{k}\big{)}\) as \(k\to\infty\) and hence the functional series \(\sum_{k=0}^{\infty}g_{k}\big{(}e^{\theta}\big{)}\) converges for all \(\theta\in\mathbb{R}\). Therefore,
\[\Delta_{0}\big{(}e^{\theta}\big{)}\sum_{k=0}^{n-1}g_{k}\big{(}e^{\theta}\big{)} =\mathcal{O}(\theta)\to 0\qquad\text{as}\quad\theta\to 0.\]
Then the remainder term in (2.25)
\[\rho_{n}^{(2)}(\theta)\to 0\qquad\text{as}\quad\theta\to 0. \tag{2.26}\]
Assertion (2.20) implies that
\[\sum_{k=0}^{n-1}\Delta_{k}\big{(}e^{\theta}\big{)}=\frac{\theta}{1-\beta} \sum_{k=0}^{n-1}u^{k}\big{(}e^{\theta}\big{)}\left(1+\mathcal{O}^{*}\big{(} \theta\big{)}\right)\qquad\text{as}\quad\theta\to 0. \tag{2.27}\]
Since \(e^{\theta}f_{q}^{\prime\prime}\left(h\big{(}e^{\theta}\big{)}\right)\to f_{q}^ {\prime\prime}(1)\) as \(\theta\to 0\), combining relations (2.22), (2.25)-(2.27), after some calculations we arrive at (2.21).
The lemma is proved.
## 3. Proof of Theorems
Proof of Theorem 1.: Define a sequence of variables
\[\zeta_{n}:=\frac{S_{n}-\mathsf{E}S_{n}}{\mathcal{K}_{n}}\]
for some positive real-valued sequence \(\mathcal{K}_{n}\) such that \(\mathcal{K}_{n}\to\infty\) as \(n\to\infty\), and consider the corresponding characteristic function
\[\varphi_{\zeta_{n}}\left(\theta\right):=\mathsf{E}\left[\exp\bigl{\{}i\theta \zeta_{n}\bigr{\}}\right]=\mathsf{E}\left[\theta_{n}^{S_{n}}\cdot\exp\left\{ \frac{-i\theta\mathsf{E}S_{n}}{\mathcal{K}_{n}}\right\}\right],\]
where \(\theta_{n}:=\exp\big{\{}i\theta\big{/}\mathcal{K}_{n}\big{\}}\) and \(\theta\in\mathbb{R}\). Using (2.4) we write
\[\ln\varphi_{\zeta_{n}}\left(\theta\right)\sim-\left(1+\gamma_{q}\right)\frac{ i\theta}{\mathcal{K}_{n}}n+\ln T_{n}\left(\theta_{n}\right)\qquad as\quad n\to\infty, \tag{3.1}\]
where \(T_{n}(x)=\mathsf{E}x^{S_{n}}\). Simultaneously according to (2.11) and Lemma 6,
\[\ln T_{n}\left(\theta_{n}\right)\sim-\left(1-\frac{u\left(\theta_{n}\right)} {\beta}\right)n-\frac{i\theta\gamma_{q}}{\mathcal{K}_{n}}\cdot\sum_{k=0}^{n-1 }u^{k}\left(\theta_{n}\right) \tag{3.2}\]
as \(n\to\infty\). In turn, (2.19) implies
\[-\left(1-\frac{u\left(\theta_{n}\right)}{\beta}\right)n=\left(1+\gamma_{q} \right)\frac{i\theta}{\mathcal{K}_{n}}n+n\rho\left(\frac{i\theta}{\mathcal{K}_ {n}}\right), \tag{3.3}\]
where \(0<\lim_{\theta\to 0}\rho\left(\theta\right)\bigr{/}\theta^{2}=:C_{\rho}<\infty\). Now we readily choose
\[\mathcal{K}_{n}=\mathcal{O}^{*}\left(\sqrt{n}\right)\qquad as\quad n\to\infty \tag{3.4}\]
which is equivalent to \(\mathcal{K}_{n}\big{/}\sqrt{n}\to C_{\mathcal{K}}>0\). Hence we see that
\[n\rho\left(\frac{i\theta}{\mathcal{K}_{n}}\right)\to-K\theta^{2}\qquad as \quad n\to\infty, \tag{3.5}\]
where \(K:=C_{\rho}\big{/}C_{\mathcal{K}}^{2}>0\). At the same time, since \(u(x)=xf_{q}^{\prime}\big{(}h(x)\big{)}\), under our assumptions we observe that \(u(x)\leq\beta\) uniformly in \(x\in[0,1]\). Therefore, for arbitrarily small \(\varepsilon>0\) we have
\[\left|u^{k}\left(\theta_{n}\right)-\beta^{k}\right|\leq\varepsilon\]
for large enough \(n\). This entails that \(\lim_{n\to\infty}\sum_{k=0}^{n-1}u^{k}\left(\theta_{n}\right)\) exists, uniformly in \(\theta\in\mathbb{R}\). Eventually, combining the asymptotic estimates (3.2)-(3.5) and denoting \(\sigma^{2}:=2K\), relation (3.1) becomes
\[\ln\varphi_{\zeta_{n}}\left(\theta\right)=-\frac{\sigma^{2}\theta^{2}}{2}+ \mathcal{K}_{n}(\theta), \tag{3.6}\]
where \(\mathcal{K}_{n}(\theta)=\mathcal{O}^{*}\left(i\theta\big{/}\mathcal{K}_{n}\right)\) as \(n\to\infty\). Finally, we conclude that
\[\varphi_{\zeta_{n}}\left(\theta\right)\longrightarrow\exp\left\{-\frac{ \sigma^{2}\theta^{2}}{2}\right\}\qquad as\quad n\to\infty\]
for any fixed \(\theta\in\mathbb{R}\). The assertion follows now from the continuity theorem for characteristic functions.
Theorem 1 is proved.
Proof of Theorem 2.: The relation (3.6) and formal use of the inequalities

\[\left|e^{iy}\right|\leq 1\qquad\text{and}\qquad\left|e^{iy}-1-iy\right|\leq\frac{ \left|y\right|^{2}}{2}\]
imply
\[\left|\varphi_{\zeta_{n}}(\theta)-e^{-\sigma^{2}\theta^{2}/2}\right| \leq \left|e^{-\sigma^{2}\theta^{2}/2}\right|\left|e^{\mathcal{K}_{n} (\theta)}-1\right|\] \[\leq \left|e^{\mathcal{K}_{n}(\theta)}-1-\mathcal{K}_{n}(\theta) \right|+\left|\mathcal{K}_{n}(\theta)\right|\] \[\leq \frac{\left[\mathcal{K}_{n}(\theta)\right]^{2}}{2}+\left| \mathcal{K}_{n}(\theta)\right| \tag{3.7}\]
for all \(n\). By definition we write
\[\mathcal{K}_{n}(\theta)=C(n)\frac{i\theta}{\mathcal{K}_{n}},\]
where \(\lim_{n\to\infty}C(n)=C<\infty\). Then, denoting
\[F_{n}(x):=\mathsf{P}\big{\{}\zeta_{n}<x\big{\}},\]
and using the estimation (3.7), we obtain the Berry-Esseen approximation bound [3, p. 538] as follows:
\[\left|F_{n}(x)-\Phi_{0,\sigma^{2}}(x)\right| \leq \frac{1}{\pi}\int\limits_{-T}^{T}\left|\frac{\varphi_{\zeta_{n}} (\theta)-e^{-\sigma^{2}\theta^{2}/2}}{\theta}\right|d\theta+\frac{24M}{\pi T} \tag{3.8}\] \[\leq \frac{2}{\pi}\frac{C(n)}{\mathcal{K}_{n}}T+\frac{24M}{\pi T}\]
for all \(x\) and \(T>0\), where \(M\) is such that \(\Phi_{0,\sigma^{2}}^{\prime}(x)\leq M\). One can take \(M=1\big{/}\big{(}\sigma\sqrt{2\pi}\big{)}\).
We let \(T\to\infty\), and at the same time it is necessary that \(T=o\left(\sqrt{n}\right)\), since \(\mathcal{K}_{n}=\mathcal{O}^{*}\left(\sqrt{n}\right)\). In general we can choose \(T\) in the form \(T=n^{\delta}\mathcal{L}_{T}(n)\), where \(0<\delta<1/2\) and \(\mathcal{L}_{T}(n)\) is slowly varying at infinity in the sense of Karamata. Then we rewrite (3.8) as follows:
\[\left|F_{n}(x)-\Phi_{0,\sigma^{2}}(x)\right|\leq\frac{\mathcal{L}_{C}(n)}{n^{ 1/2-\delta}}+\frac{\mathcal{L}_{M}(n)}{n^{\delta}}, \tag{3.9}\]
where
\[\mathcal{L}_{C}(n):=\frac{2C(n)}{\pi}\mathcal{L}_{T}(n)\qquad\text{and} \qquad\mathcal{L}_{M}(n):=\frac{24M}{\pi}\frac{1}{\mathcal{L}_{T}(n)}.\]
To obtain the optimal degree of approximation in (3.9), we choose the value of \(\delta\) for which \(\min\{1/2-\delta,\,\delta\}\) attains its maximum over \(\delta\in(0,1/2)\). This happens only in the unique case \(\delta=1/2-\delta\), i.e. \(\delta=1/4\). Thus (3.9) becomes
\[\left|F_{n}(x)-\Phi_{0,\sigma^{2}}(x)\right|\leq\frac{\mathcal{L}(n)}{n^{1/4}},\]
where \(\mathcal{L}(n)=\mathcal{L}_{C}(n)+\mathcal{L}_{M}(n)\) slowly varies at infinity.
The theorem proof is completed.
Proof of Theorem 3.: First we will show that
\[\frac{S_{n}}{n}\stackrel{{\mathsf{P}}}{{\longrightarrow}}1+\gamma _{q}\qquad\text{as}\quad n\to\infty. \tag{3.10}\]
Writing
\[\eta_{n}:=\frac{S_{n}}{n}=\frac{\operatorname{ES}_{n}}{n}+\frac{\mathcal{K}_{n}}{n }\zeta_{n},\]
and considering (2.4), we have
\[\varphi_{\eta_{n}}(\theta) := \operatorname{\mathsf{E}}\left[\exp\bigl{\{}i\theta\eta_{n}\bigr{\}}\right] \tag{3.11}\] \[= e^{i\theta\bigl{(}1+\gamma_{q}\bigr{)}}\left[\varphi_{\zeta_{n}} (\theta)\right]^{\mathcal{K}_{n}/n}\left(1-\frac{i\theta\gamma_{q}}{1-\beta} \frac{1}{n}\bigl{(}1-\beta^{n}\bigr{)}\right),\]
where \(\varphi_{\zeta_{n}}(\theta)=\operatorname{\mathsf{E}}\left[\exp\bigl{\{}i \theta\zeta_{n}\bigr{\}}\right]\). Relation (3.6) implies
\[\varphi_{\zeta_{n}}(\theta)=e^{-\sigma^{2}\theta^{2}/2}\left(1+\mathcal{O}^{*} \left(i\theta\bigl{/}\mathcal{K}_{n}\bigr{)}\right)\qquad\text{as}\quad n \to\infty\]
and hence \(\bigl{[}\varphi_{\zeta_{n}}(\theta)\bigr{]}^{\mathcal{K}_{n}/n}\to 1\) as \(n\to\infty\), since \(\mathcal{K}_{n}/n\to 0\). Thus (3.11) entails
\[\varphi_{\eta_{n}}(\theta)\to e^{i\theta\bigl{(}1+\gamma_{q}\bigr{)}}\qquad \text{as}\quad n\to\infty.\]
According to the continuity theorem, this is sufficient for being of (3.10).
From (3.11) we obtain
\[\left|\varphi_{\eta_{n}}(\theta)-e^{i\theta\bigl{(}1+\gamma_{q} \bigr{)}}\right| \leq \left|\bigl{[}\varphi_{\zeta_{n}}(\theta)\bigr{]}^{\mathcal{K}_{ n}/n}\left(1-\frac{i\theta\gamma_{q}}{1-\beta}\frac{1}{n}\bigl{(}1-\beta^{n} \bigr{)}\right)-1\right|\] \[\leq \left|\frac{i\theta\gamma_{q}}{1-\beta}\frac{1}{n}\bigl{(}1- \beta^{n}\bigr{)}\right|.\]
We used in the last step the fact that \(\left|\varphi_{*}(\theta)\right|\leq 1\) for any characteristic function. Now we can write the Berry-Esseen bound as follows:
\[\left|\operatorname{\mathsf{P}}\bigl{\{}\eta_{n}<x\bigr{\}}-I_{1 +\gamma_{q}}(x)\right| \leq \frac{1}{\pi}\int\limits_{-T}^{T}\left|\frac{\varphi_{\eta_{n}}( \theta)-e^{i\theta\bigl{(}1+\gamma_{q}\bigr{)}}}{\theta}\right|d\theta+\frac {24M_{\eta}}{\pi T}\] \[\leq \frac{\gamma_{q}}{\pi}\frac{\bigl{(}1-\beta^{n}\bigr{)}}{1-\beta }\frac{2T}{n}+\frac{24}{\pi T}\leq\frac{1}{\pi}\frac{2\gamma_{q}}{1-\beta} \frac{T}{n}+\frac{24}{\pi T},\]
where we put \(M_{\eta}=1\) which is suitable for the degenerate distribution function.
In this case we choose \(T=n^{\delta}\mathcal{L}_{T}(n)\), where \(0<\delta<1\) and \(\mathcal{L}_{T}(n)\) slowly varies at infinity. Therefore
\[\left|\operatorname{\mathsf{P}}\bigl{\{}\eta_{n}<x\bigr{\}}-I_{1+\gamma_{q}}( x)\right|\leq\frac{\mathcal{L}_{\beta}(n)}{n^{1-\delta}}+\frac{\mathcal{L}_{1}(n)}{n^{ \delta}}, \tag{3.12}\]
where
\[\mathcal{L}_{\beta}(n):=\frac{1}{\pi}\frac{2\gamma_{q}}{1-\beta}\mathcal{L}_{T }(n)\qquad\text{and}\qquad\mathcal{L}_{1}(n):=\frac{24}{\pi}\frac{1}{\mathcal{ L}_{T}(n)}.\]
Setting \(\delta=1/2\), which balances the two exponents, (3.12) becomes
\[\left|\operatorname{\mathsf{P}}\bigl{\{}\eta_{n}<x\bigr{\}}-I_{1+\gamma_{q}}( x)\right|\leq\frac{\mathcal{L}_{\gamma}(n)}{n^{1/2}},\]
where \(\mathcal{L}_{\gamma}(n)=\mathcal{L}_{\beta}(n)+\mathcal{L}_{1}(n)\) slowly varies at infinity.
The proof is completed. |
2308.01297 | The Star Formation Across Cosmic Time (SFACT) Survey. III. Spectroscopy
of the Initial Catalog of Emission-Line Objects | The Star Formation Across Cosmic Time (SFACT) survey is a new narrowband
survey designed to detect emission-line galaxies (ELGs) and quasi-stellar
objects (QSOs) over a wide range of redshifts in discrete redshift windows. The
survey utilizes the WIYN 3.5m telescope and the Hydra multifiber positioner to
perform efficient follow-up spectroscopy on galaxies identified in the imaging
part of the survey. Since the objects in the SFACT survey are selected by their
strong emission lines, it is possible to obtain useful spectra for even the
faintest of our sources (r ~ 25). Here we present the 453 objects that have
spectroscopic data from the three SFACT pilot-study fields, 415 of which are
confirmed ELGs. The methodology for processing and measuring these data is
outlined in this paper and example spectra are displayed for each of the three
primary emission lines used to detect objects in the survey (H-alpha, [O
III]5007, and [O II]3727). Spectra of additional QSOs and non-primary
emission-line detections are also shown as examples. The redshift distribution
of the pilot-study sample is examined and the ELGs are placed in different
emission-line diagnostic diagrams in order to distinguish the star-forming
galaxies from the active galactic nuclei. | David J. Carr, Jennifer Sieben, John J. Salzer, Samantha W. Brunker, Bryce Cousins | 2023-08-02T17:33:43Z | http://arxiv.org/abs/2308.01297v1 | The Star Formation Across Cosmic Time (SFACT) Survey. III. Spectroscopy of the Initial Catalog of Emission-Line Objects
###### Abstract
The Star Formation Across Cosmic Time (SFACT) survey is a new narrow-band survey designed to detect emission-line galaxies (ELGs) and quasi-stellar objects (QSOs) over a wide range of redshifts in discrete redshift windows. The survey utilizes the WIYN 3.5m telescope and the Hydra multi-fiber positioner to perform efficient follow-up spectroscopy on galaxies identified in the imaging part of the survey. Since the objects in the SFACT survey are selected by their strong emission lines, it is possible to obtain useful spectra for even the faintest of our sources (r \(\sim\) 25). Here we present the 453 objects that have spectroscopic data from the three SFACT pilot-study fields, 415 of which are confirmed ELGs. The methodology for processing and measuring these data is outlined in this paper and example spectra are displayed for each of the three primary emission lines used to detect objects in the survey (H\(\alpha\), [O iii]\(\lambda\)5007, and [O ii]\(\lambda\)3727). Spectra of additional QSOs and non-primary emission-line detections are also shown as examples. The redshift distribution of the pilot-study sample is examined and the ELGs are placed in different emission-line diagnostic diagrams in order to distinguish the star-forming galaxies from the active galactic nuclei.
galaxies: high-redshift -- galaxies: star formation -- galaxies: abundances -- galaxies: evolution -- techniques: spectroscopic -- astronomical databases: surveys
## 1 Introduction
The Star Formation Across Cosmic Time (SFACT) survey is a new imaging and spectroscopic survey which uses narrow-band (NB) filters to detect large numbers of emission-line sources at a wide range of redshifts. The survey utilizes three primary emission lines to detect objects in its target fields: H\(\alpha\), [O iii]\(\lambda\)5007, and [O ii]\(\lambda\)3727. For these primary lines, the survey collects a diverse sample of galaxies that spans the redshift range from the local universe out to z = 1. Quasi-stellar object (QSO) detections using UV emission lines push that redshift range to z = 5 and beyond.
The survey's methodology draws from the various emission-line galaxy (ELG) surveys that have come before it. Early ELG surveys utilized objective-prism spectroscopy to select candidates (e.g., Smith, 1975; MacAlpine et al., 1977; MacAlpine and Williams, 1981; Sanduleak and Pesch, 1982; Pesch and Sanduleak, 1983; Wasilewski, 1983; Markarian et al., 1983; Zamorano et al., 1994, 1996; Ugryumov et al., 1999; Hopp et al., 2000; Salzer et al., 2000, 2001, 2002) and more recent surveys utilize narrow-band imaging data (e.g., Boroson et al., 1993; Ryan-Weber et al., 2004; Kakazu et al., 2007; Werk et al., 2010; Ly et al., 2011; Kellar et al., 2012; Sobral et al., 2012, 2013; Stroe and Sobral, 2015; Cook et al., 2019; Salzer et al., 2020; Khostovan et al., 2020; Watkins et al., 2021). SFACT expands and complements these existing surveys by using three custom narrow-band filters that enable the detection of ELGs out to cosmologically interesting redshifts. The goal of the survey is to discover a statistically complete sample of ELGs that is useful for a broad range of science applications (see Salzer et al., 2023, hereafter SFACT1).
SFACT is being introduced in a series of three papers that focus on the initial release of the survey data from three pilot-study fields. The first of the three introductory papers is SFACT1, which presents an overview of SFACT's goals, motivations, and the planned scope of the overall survey. It also discusses the different types
of ELGs that SFACT is designed to discover. SFACT1 presents early results from the survey, describing the properties of the sample of ELGs detected in the pilot-study fields. In addition, SFACT1 details some of the planned science applications the survey data can be used for. Imaging and spectroscopic data for several newly discovered objects are presented to illustrate the nature of the survey constituents.
The second SFACT paper is Sieben et al. (2023) (hereafter SFACT2). It presents the imaging portion of the survey, discussing the methodology for acquiring and processing the imaging data and the details of target selection. It presents catalogs of ELGs detected in the pilot-study fields, as well as images for a set of example objects. It analyzes the photometric properties of the pilot-study sample as well as how the survey's selection parameters relate to narrow-band flux.
In the current paper we present the spectroscopic portion of the survey for the pilot-study fields. SFACT was conceived from the start as a dual NB imaging plus spectroscopic survey (see SFACT1). While the detection of the ELG candidates comes entirely from the NB imaging portion of the survey, the spectroscopic follow-up provides the information necessary for carrying out many of the proposed science applications described in SFACT1. Fundamentally, the spectra allow us to confirm the ELG nature of our sources, identify the emission line that the survey has detected, and provide an accurate redshift for determining distance-dependent quantities such as luminosities and star-formation rates. The nebular spectra also allow us to determine accurate absorption corrections for each galaxy based on their observed Balmer decrements. For galaxies detected via the H\(\alpha\) and [O iii]\(\lambda\)5007 lines, we are also able to measure metal abundances for many sources. Finally, the spectra obtained for our pilot-study candidates have been extremely valuable for evaluating the survey selection method, allowing us to modify our procedures to improve the accuracy and efficiency of the survey.
The contents of SFACT3 are presented as follows. First, we provide a brief overview of the imaging portion of the survey in Section 2, in order to help place the rest of this paper into context. In Section 3, the instrumentation and procedures used to complete the spectroscopic observations are described. The details of the spectral reduction and line-measurement software are explained in Section 4. Finally, Section 5 presents the tabulated spectroscopic pilot-study data as well as example spectra from the survey to illustrate the variety of objects detected. The properties of the ELG sample are displayed in a redshift histogram and emission-line diagnostic diagrams.
For all of the SFACT papers, a standard \(\Lambda\)CDM cosmology with \(\Omega_{m}\) = 0.27, \(\Omega_{\Lambda}\) = 0.73, and H\({}_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\) is assumed.
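As a concrete reference, this cosmology can be set up with `astropy` to convert the redshifts quoted throughout the paper into distances; the sketch below is illustrative only and is not the survey's own code.

```python
# Illustrative only: the adopted cosmology in astropy, used to turn
# redshifts into luminosity distances and distance moduli.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.27)  # Omega_Lambda = 0.73 follows from flatness

for z in (0.0034, 0.79, 2.48):          # representative SFACT redshifts from Section 5
    d_L = cosmo.luminosity_distance(z)  # astropy Quantity, in Mpc
    mu = cosmo.distmod(z)               # distance modulus, in mag
    print(f"z = {z:6.4f}  d_L = {d_L:.0f}  mu = {mu:.2f}")
```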
## 2 Overview of SFACT Imaging Survey
The narrowband imaging portion of SFACT is summarized in SFACT1 and described in great detail in SFACT2. Here we provide an overview of the observations and object selection method, to help frame the contents of the current paper.
All imaging observations for SFACT are obtained using the One-Degree Imager (ODI; Harbeck et al., 2010) camera on the WIYN 3.5m telescope1. ODI has a field of view of 40 by 48 arcmin, with a native pixel scale of 0.11 arcsec pixel\({}^{-1}\). Observations are obtained for each survey field through three broadband filters (SDSS _gri_) and three narrowband filters. In order to eliminate the many chip gaps present in the ODI focal plane, all observations are acquired using a 9-point dither pattern. The exposure times used for each image in the dither sequence are 120 s for the broadband images and 600 s for the narrowband images. For more information concerning the data acquisition and image processing, see SFACT2.
Footnote 1: The WIYN Observatory is a joint facility of the University of Wisconsin–Madison, Indiana University, NSF’s NOIRLab, the Pennsylvania State University, Purdue University, and the University of California, Irvine.
The three SFACT narrow-band filters used to detect potential ELGs are custom filters designed and fabricated for the survey. The filters all have a bandwidth of \(\sim\)90 A, and have central wavelengths of 6950 A (referred to as NB1 throughout the remainder of this paper), 6590 A (NB2), and 7460 A (NB3). The redshift ranges of the detected ELGs depend both on which filter the signal is seen in and on the specific emission line present in the filter (e.g., H\(\alpha\), [O iii]\(\lambda\)5007, [O ii]\(\lambda\)3727, etc.): SFACT1 tabulates these ranges for the primary emission lines detected in our survey. Future expansion of the survey will add additional NB filters at \(\sim\)8120 A and \(\sim\)9120 A.
The ELG candidates are selected from the images by comparing the fluxes measured in each NB filter with a corresponding measurement made in suitably scaled broadband images (the sum of the \(r\) and \(i\) filters for NB1 and NB2, and \(i\) for NB3). Objects are selected as SFACT candidates if they possess an excess of flux in the NB image amounting to a magnitude difference \(\Delta\)m = 0.4 mag, as long as the flux excess is statistically significant (greater than 5\(\sigma\)).
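The selection logic can be summarized in a few lines. The sketch below is a simplified stand-in for the actual pipeline described in SFACT2 (the function name and the way the errors are combined are our assumptions), treating the criterion as a NB excess of \(\Delta\)m \(\geq\) 0.4 mag at a significance of at least 5\(\sigma\).

```python
# A minimal sketch of the NB-excess selection described above; the real
# SFACT pipeline (see SFACT2) is more involved. Inputs are magnitudes and
# uncertainties in the NB filter and in the scaled continuum image.
import numpy as np

def is_elg_candidate(m_cont, m_nb, sig_cont, sig_nb,
                     dm_cut=0.4, nsigma=5.0):
    """Flag a source whose NB flux exceeds the continuum flux by
    Delta-m >= dm_cut at a significance of at least nsigma."""
    dm = m_cont - m_nb                   # positive for a NB excess
    sig_dm = np.hypot(sig_cont, sig_nb)  # uncertainty on Delta-m
    return (dm >= dm_cut) and (dm / sig_dm >= nsigma)

print(is_elg_candidate(22.9, 22.3, 0.05, 0.05))  # True: 0.6 mag excess at ~8.5 sigma
```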
The survey is quite deep, with a median \(g\)-band magnitude for candidate ELGs of 23.2. The limiting emission-line
flux level of the resulting ELG catalog is \(\sim\)1.0 \(\times\) 10\({}^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\).
## 3 Sfact Spectroscopic Observations
All spectroscopic data obtained for SFACT were acquired using the WIYN 3.5m telescope located at Kitt Peak National Observatory. This section will cover the instrumentation and observational procedures associated with the spectroscopic part of the survey. The spectroscopic data for the SFACT pilot-study fields were obtained during observing runs in November 2017, October 2018, August 2019, October 2019, and October 2021.
### Instrumentation
The spectroscopic data for SFACT are taken using Hydra and the Bench Spectrograph. Hydra is a multi-fiber positioner with a field of view of 1.0 degree in diameter. SFACT was developed with the intention of using Hydra on the WIYN 3.5m telescope to carry out follow-up spectroscopy. Hydra is able to place fibers on \(60-65\) targets per configuration, which allows SFACT to efficiently gather follow-up spectra for all of its potential targets. Its field of view is a good match to the footprint of the ODI camera. This makes it an excellent tool for acquiring follow-up spectra for the survey.
We use Hydra's red cables for our observations. Each red fiber subtends 2 arcseconds on the sky meaning that most of the light from our higher redshift sources should be within the diameter of the fiber. Light from each source flows down the fibers and is collected by the Bench Spectrograph which is isolated in a separate room in the lower level of the WIYN facility. For the SFACT survey, we chose to use the 600 @ 10.1 bench grating because it has the highest efficiency across our desired wavelength range. It has 600 grooves per mm, a spectral resolution of 3.35 A with the red cables, and a dispersion of 1.41 A/pixel after binning. Pixels are binned 2 x 2 during readout to increase signal-to-noise without losing any resolution.
With our chosen spectroscopic setup the observed wavelength range for our follow-up spectra is roughly 4760\(-\)7580 A. A primary criterion for our required wavelength coverage was that it includes the spectral ranges covered by our three NB filters. This guarantees that the emission line seen in our NB images will be present in our follow-up spectra.
### Observational Procedure
The set-up of the Hydra fiber positioner is controlled by the use of "pointing files", which contain the coordinates and instructions used by the mechanical gripper to configure each field. Every SFACT field presented in this paper was observed on multiple nights, each time with different pointing files. Each pointing generally contained between \(20-60\) SFACT targets, \(12-20\) sky fibers, and \(3-7\) Field Orientation Probes (FOPs). FOPs are used to accurately align the telescope to each field and are also used as guide stars. All FOP stars were selected to have \(g\)-band magnitudes between 10.5 and 14.0 in the Sloan Digital Sky Survey (SDSS; York et al., 2000; Aguado et al., 2019), with preference given to stars between 12 and 13 magnitudes.
Each pointing was generally observed with three 30-minute exposures. Multiple exposures are taken so that cosmic rays and other artifacts are removed from the data when the images are combined. On nights of reduced transparency, pointings would be observed with four or five exposures in order to achieve as much depth as possible in the final combined images. Due to the faintness of the objects in the SFACT catalog, observations were rarely carried out while the moon was above the horizon.
A series of bias, dome flat, comparison lamp, and dark-current images were acquired during each night of observing. Standard stars were generally observed in the beginning, middle, and end of each night of observing and are used to create a nightly sensitivity function as discussed in Section 4.1.
In ideal circumstances, the earliest that spectra can be gathered for our fields is one year after the full set of imaging data are obtained. However, it generally takes longer than one year to get _complete_ spectroscopic follow-up of every ELG in these fields. This has been the case for a variety of reasons including, but not limited to: the loss of telescope time due to the COVID-19 pandemic, bad weather rendering scheduled observing time useless, and fiber-placement limitations in crowded regions of the field. Hence, not all SFACT candidates in the final catalog lists presented in SFACT2 have been observed spectroscopically. Despite the lack of completeness, there are enough spectra to give a clear picture of the nature of the SFACT survey and of the various objects discovered through our efforts.
## 4 Spectral Processing
This section will discuss the details of how the spectroscopic data are processed and how we measure the lines in the spectra of our newly discovered ELG sample.
### Spectral Reductions
The first phase in the processing is carried out using the Image Reduction and Analysis Facility (IRAF). First, the overscan level is measured and subtracted
from each image. Then the biases, darks, and flats are averaged, and biases and darks are subtracted from the data. Next, the three or more science images for each field are median combined to remove cosmic rays and artifacts. The images are median scaled before being combined in order to place the continuum on a similar level between the images. This scaling is based on a region free of sky lines in a brighter object's spectrum. The second phase of the processing uses IRAF's HYDRA package and DOHYDRA task (Valdes, 1995). DOHYDRA carries out the following tasks on each multi-fiber spectral image: (1) identifies each spectrum and performs a spectral trace for each fiber; (2) corrects for scattered light; (3) extracts the flat-field spectra, using the same extraction aperture as used on each source, and applies them to the science spectra; (4) extracts the comparison-lamp spectra and derives a wavelength solution for each fiber; (5) creates a composite sky spectrum and subtracts it from each science spectrum.
Flux calibrations are performed using the standard stars observed as part of our program. A sensitivity function is generated for each night and applied to the galaxy spectra. While absolute spectrophotometry using fibers is notoriously unreliable, our final calibrated spectra should have accurate _relative_ fluxes, meaning that we can extract reliable emission-line ratios. Telluric absorption corrections are applied using a well-exposed standard star spectrum as a template. Finally, regions around strong night sky lines are masked to prevent sky residuals from dominating the faint emission lines in our galaxy spectra. These masked sky lines include [O i]\(\lambda\)5577, \(\lambda\)6300, and \(\lambda\)6363, and NaD among other lines.
Figure 1 displays portions of one of the final spectral images from a single pointing of field SFF15, processed as described above. The top panel shows the red half of the full image, where only SFACT objects are displayed (fibers dedicated to extra targets or sky measurements have been removed). Wavelength in the figure increases from left to right, and the spectral range shown covers observed wavelengths of \(\sim\)6350-7530 A. Each row shows the processed spectrum of a single SFACT galaxy. The short, dark horizontal lines represent the emission lines in the spectra. Features repeated across multiple rows (vertical) indicate residual flux from night sky lines. The red boxes in the top portion of the figure denote the wavelength ranges of the three SFACT narrow-band filters. NB2 is centered at 6590 A, NB1 is at 6950 A, and NB3 is at 7460 A.
The bottom panels of Figure 1 zoom in on the spectral regions covered by the three filters to display the emission-line detected galaxies in this pointing in more detail. The majority of the targets in this pointing happened to be detected in the NB2 filter (leftmost lower image), while NB1 (center) and NB3 (right) contain several detections each. Close inspection of the NB2 sub-image reveals a number of spectra with two emission lines present in the same row. These are cases where both [O iii] \(\lambda\)5007 and \(\lambda\)4959 are included within the filter.
Figure 1: Top: The redward half of the final processed multi-object spectral image from a pointing of SFF15. Each row in the image is a one-dimensional spectrum of an individual SFACT target. The red boxes show the wavelength coverage of the three narrow-band filters. The leftmost box is NB2, the middle box shows NB1, and the right box shows NB3. In this pointing many NB2 candidates were observed. Bottom: The spectral regions covered by SFACT’s filters are enlarged and presented in the same order from left to right as mentioned above.
### Automatic Line Measurement with WRALF
Identification and measurement of the emission lines in the spectra is carried out using WRALF (WRapped Automated Line Fitting; Cousins, 2019). WRALF is a python wrapper for a customized version of ALFA (Automated Line Fitting Algorithm; Wesson, 2016) and operates on the type of multi-spec format files illustrated in Figure 1. WRALF displays each spectrum from the image in turn, and the user is responsible for identifying and marking a single strong emission line. The code then calls ALFA, passing a redshift estimate based on the identified line. ALFA fits the continuum of the spectrum by assigning the central pixel in a moving 100 pixel window to be the 25th percentile of flux values within the window. The code then uses a lookup table of possible emission lines, and attempts to identify and measure all lines that are present in the wavelength range of the spectrum, fitting a Gaussian at the locations of potential lines.
ALFA estimates the uncertainty by subtracting the best-fitting solution from the continuum-subtracted observed spectrum. These residuals are used to calculate the signal-to-noise of each line. If the signal-to-noise ratio is less than three, ALFA does not consider that line to be a real measurement.
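The continuum and signal-to-noise steps described above can be mimicked in a few lines of Python. This is a schematic re-implementation for illustration only, not ALFA itself (see Wesson 2016 for the real code); in particular, the S/N definition below is one plausible reading of the residual-based estimate.

```python
# Schematic stand-in for the ALFA continuum and line-fitting steps.
import numpy as np
from scipy.ndimage import percentile_filter
from scipy.optimize import curve_fit

def estimate_continuum(flux, window=100):
    # 25th percentile of the flux values in a moving 100-pixel window
    return percentile_filter(flux, percentile=25, size=window)

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_line(wave, flux_sub, guess_center):
    # fit a Gaussian to a continuum-subtracted line near guess_center
    p0 = (flux_sub.max(), guess_center, 3.0)
    popt, _ = curve_fit(gaussian, wave, flux_sub, p0=p0)
    residuals = flux_sub - gaussian(wave, *popt)
    snr = popt[0] / np.std(residuals)  # peak amplitude over residual scatter
    return popt, snr                   # reject the line if snr < 3
```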
The output from ALFA is shown to the user, who determines whether the identified lines are real; if they are, the spectrum is saved and the identified lines and their properties are recorded. Measured lines with a signal-to-noise greater than or equal to five are used to derive a series of redshift measurements, and these measurements are averaged to give the final redshift for the object. The standard deviation of the individual redshift estimates serves as the redshift uncertainty. Equivalent width (EW) is calculated for each line by taking the flux returned by ALFA and dividing by the continuum measured over a small range determined by the full width at half maximum (FWHM).
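A minimal sketch of this redshift and EW bookkeeping is given below; the specific wavelengths and fluxes are illustrative inputs, not survey measurements.

```python
# Sketch of the redshift averaging and EW calculation described above.
import numpy as np

def redshift_from_lines(rest, observed):
    # per-line redshifts from lines with S/N >= 5
    z_each = np.asarray(observed) / np.asarray(rest) - 1.0
    return z_each.mean(), z_each.std()   # mean z, RMS as the uncertainty

def equivalent_width(line_flux, local_continuum):
    # EW = line flux divided by the continuum level near the line
    return line_flux / local_continuum

z, dz = redshift_from_lines([4861.3, 4958.9, 5006.8],
                            [6309.0, 6435.6, 6497.8])
print(f"z = {z:.5f} +/- {dz:.5f}")
```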
All spectral information is merged back into the SFACT table databases for each field. At the end of the process, WRALF has measured the redshift of each source and the observed wavelength, flux, error in the flux, EW, and FWHM of each emission line in the spectra that it detects.
While ALFA is able to measure narrow emission lines accurately, it struggles to automatically identify lines in two cases. First, ALFA has trouble identifying broad emission lines in objects like quasars and Seyfert 1 AGN. These lines are broad enough that ALFA mistakes them for part of the continuum, and the software often does not identify them as emission lines. Second, ALFA misses lines in spectra that are of lower quality or that contain low signal-to-noise emission lines.
To remedy this, some objects had to be re-examined using an auxiliary code and their information had to be entered into the data tables separately. The code operates on the two cases differently. In the first case, broad-line objects are flagged during the data processing. These objects are displayed for the user who then measures the missing emission lines manually.
In the second case, objects with low signal-to-noise emission lines are identified by searching through the data tables and displaying the spectra of objects missing their [N ii], H\(\beta\), or [O ii] lines. These three emission lines are specifically targeted so that as many sources as possible can be included in emission-line diagnostic diagrams to separate them based on their activity class (see Section 5.3.2). Stronger lines like H\(\alpha\) and [O iii]\(\lambda\)5007 are rarely missed by WRALF. The user then decides for each spectrum whether any of these lines are present.
If a line is found in the expected location then it is categorized in one of two ways. In many cases, the re-examined line is clearly present at the right wavelength but its signal-to-noise is slightly below WRALF's retention limit. These emission lines are measured and the re-examination measurement is labeled as a Category 1 measurement. If a feature is present at the expected wavelength but is weaker and similar in strength to the surrounding noise, it is measured and then flagged as a Category 2 re-examination measurement. In some cases, Category 2 measurements are essentially an upper limit on the line measurement. In other cases, the Category 2 lines are simply low signal-to-noise lines whose measurements are more dubious than the Category 1 measurements.
### Limitations of the SFACT Spectra
We point out two important limitations of our follow-up spectra. The first is the well-known issue that the measurement of accurate absolute fluxes is notoriously difficult when using a fiber-fed instrument like Hydra. This is due to a number of reasons, which include (1) the fact that each fiber will have a slightly different throughput, (2) observations of flux calibration standard stars are typically only done through one fiber rather than all fibers, and (3) the positioning of the fibers on the focal plane of the telescope is not perfect. The latter issue can result in imprecise alignment between the fiber and the astronomical source, resulting in the loss of measured flux through the fiber.
For the current project the situation is exacerbated by the faint nature of the sources and their small angular extent on the sky (often point-like in nature). A comparison of the emission-line fluxes measured with Hydra
with those derived from our NB photometry shows a large scatter, even when the comparison is limited to unresolved objects. Hence, we do not trust the line fluxes measured in our follow-up spectra. Our relative fluxes (i.e., emission-line ratios) should be robust, however.
A second problem is associated with our measured EWs for objects with faint underlying continua. As detailed in SFACT1 and SFACT2, many of our sources have r-band magnitudes fainter than 24. The observed variation between the continuum fluxes in the individual sky spectra used to create the composite sky spectrum employed in our sky subtraction has a typical value of \(\pm\)5-8%, probably due to fiber-to-fiber throughput variations. This translates into a characteristic uncertainty in our sky subtraction of around 5 \(\times\) 10\({}^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\) in the continuum. For some of our sources (5-10%) this sky-subtraction uncertainty results in negative continua and negative EWs.
The problem is more insidious than the presence of negative EWs. For many of our faint ELGs, the measured continuum levels after sky subtraction will be positive but have very small values (less than 1 \(\times\) 10\({}^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\)). An object with a measured continuum of 5 \(\times\) 10\({}^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\) will have an uncertainty in the continuum of a factor of 10, and an uncertainty of the same size in the measured EW of any line. Hence, even for sources with positive EWs the measured values can be totally unreliable. One object located in the pilot-study fields has a measured EW for [O iii]\(\lambda\)5007 of 6800 A, which is unphysically large.
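The scale of the problem is easy to reproduce numerically. In the sketch below, the line flux is placed near the survey's limiting line flux quoted in Section 2, and the continuum uncertainty is the characteristic sky-subtraction error given above.

```python
# Numerical illustration of the EW reliability problem described above.
line_flux = 1.0e-16   # erg/s/cm^2; near the survey's limiting line flux
sig_cont = 5.0e-18    # erg/s/cm^2/A; characteristic sky-subtraction error

for cont in (5.0e-18, 1.0e-18, 5.0e-19):   # measured continuum levels
    ew = line_flux / cont                   # Angstroms
    rel_err = sig_cont / cont               # fractional uncertainty on the EW
    print(f"continuum = {cont:.0e}  EW = {ew:6.0f} A  EW uncertain by {rel_err:.0f}x")
```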
The problems with our measured EWs are, unfortunately, inherent in the process of trying to observe very faint objects with a multi-fiber spectrograph. We are exploring methods to help mitigate the impact on our data. We stress that this negative continuum issue does not impact our line flux measurements (and hence our line ratios), but it does render at least some of our EWs unreliable.
## 5 Results: Presentation of Spectral Data from the Pilot Study
To date, 453 of the 533 objects in the three pilot-study fields have been observed spectroscopically, and 415 of the 453 observed are confirmed emission-line objects. That is, 91.6% of the observed sources have an emission line within the narrow-band filter in which they were detected and are determined to be true emission-line sources.
In this section, we present the results from the spectroscopic observations of the SFACT pilot-study fields. In Section 5.1 we tabulate the relevant spectral data for all three pilot-study fields. Next, Section 5.2 provides illustrative examples of the spectra for a number of SFACT objects. Section 5.3 presents an overview of the different emission lines detected in the pilot study as well as some of the properties of the ELGs. This includes a discussion of the redshift distribution of the sample and a presentation of the emission-line diagnostic diagrams derived from the spectral data.
### SFACT Spectral Data Tables
Tables 1, 2, and 3 present the spectral data for the SFACT objects in the SFF01, SFF10, and SFF15 fields, respectively.
Column (1) in each table lists the SFACT object identifier. This is a unique ID for each object and contains three parts. The first part represents the field where the object is located (e.g., SFF01), while the second designates the filter within which the source was detected. The third contains information on which quadrant of the field the object is located in (A, B, C, or D) followed by a number that represents the order in which the image processing software finds the object in the field. We use the SFACT object ID to refer to specific sources throughout the remainder of this paper (e.g., spectral plots in Section 5.2.1). Column (2) provides an alternate coordinate-based designation, using IAU-approved nomenclature.
Columns (3) and (4) list the RA and Dec of each source (J2000). Each table is sorted in ascending RA order. The SFACT coordinates are derived from an astrometric solution applied to the survey imaging data based on the Gaia database (Gaia Collaboration et al., 2016, 2021). Comparison of the coordinates of stars found in the SFACT images with those cataloged in the SDSS shows that there is little or no systematic offset between the two sets of coordinates (mean \(\Delta\alpha\) and \(\Delta\delta\)\(\leq\) 0.05 arcsec for each field), and that the RMS scatter for individual stars is \(\sim\)0.15-0.20 arcsec.
Column (5) indicates the activity type of each source detected by the survey, where we adopt standard notations for the various classes of emission-line objects. The activity type is derived by visual inspection of the spectra, supplemented by use of line diagnostic diagrams (see Section 5.3.2). The vast majority of objects detected by SFACT are star-forming galaxies, which are labeled SFG in the tables. We make no attempt to differentiate between different classes of SFGs here (e.g., Starburst Nucleus galaxies, blue compact dwarfs, Green Pea galaxies) since that would typically require additional information not available from the spectral data alone. However, objects identified as H ii regions in nearby disk galaxies (see SFACT1) are labeled as HII in column (5). Sy1 and Sy2 labels indicate that the listed object is either a Seyfert
1 or Seyfert 2 active galactic nucleus (AGN). LIN indicates the object is a low-ionization nuclear emission region (LINER; Heckman, 1980), and QSO indicates the object is a quasar. Objects that are clear detections of ELGs with an emission line in the appropriate filter, but have an uncertain classification at the time the lines are measured, are marked with ELG as the default designation. The latter classification is currently applied to the vast majority of the [O ii]-detected SFACT objects, since we lack spectral information from other diagnostic lines that would allow us to make a more definitive designation.
It is inevitable that false detections will creep into a survey like SFACT. These false detections have their type labeled in two ways. Objects that have no obvious emission lines in their spectra, or that have lines that are not located in the relevant survey filter, are simply labeled as FD (for false detection). Objects found to be stars based on their spectra are labeled with Star in this column. Some of these are stars with an emission line in the relevant filter (always NB2); these are specified in the table notes. An example of such an object is shown in Figure 11 of SFACT1. Finally, objects without any data in column (5) are objects that have yet to be observed spectroscopically.
Column (6) displays the redshift of each object. The characteristic uncertainties in our redshift measurements are 0.00003 to 0.00005, based on the RMS scatter of the redshifts measured from individual lines in a given spectrum (see Section 4.2). The emission line responsible for the detection of each SFACT object is listed in column (7). That is, the emission line indicated is the one responsible for most or all of the excess emission present in the relevant NB survey filter. The majority of the SFACT galaxies are detected via either H\(\alpha\)\(\lambda\)6563, [O iii]\(\lambda\)5007, or [O ii]\(\lambda\)3727, but small numbers of objects are detected via other lines, such as [O iii]\(\lambda\)4959, [S ii]\(\lambda\lambda\)6717,6731, [Ne iii]\(\lambda\)3869, and H\(\beta\)\(\lambda\)4861. SFACT QSOs are typically detected via one of the stronger UV lines: Mg ii\(\lambda\)2798, C iii] \(\lambda\)1908, C iv\(\lambda\)1549, or Ly\(\alpha\)\(\lambda\)1215.
[Table 1: Spectral data for the SFACT objects in field SFF01, one row per object. Columns: (1) SFACT Object ID; (2) SFACT Coordinate ID; (3) \(\alpha\)(J2000), degrees; (4) \(\delta\)(J2000), degrees; (5) Type; (6) z; (7) Line; (8) log([N ii]/H\(\alpha\)); (9) log([O iii]/H\(\beta\)); (10) log([O ii]/H\(\beta\)).]
Finally, columns (8) through (10) show the log\({}_{10}\) values for the [N ii]/H\(\alpha\), [O iii]/H\(\beta\), and [O ii]/H\(\beta\) line ratios and their formal errors. For line ratios where one or both lines are Category 2 re-examination measurements (described in Section 4.2), the uncertainties in the line ratios will be larger. We denote these cases in the data tables by enclosing the line ratios in parentheses. The line ratios are corrected for reddening using the Balmer decrement method (e.g., Osterbrock & Ferland, 2006) when the relevant Balmer lines are available in our spectra. 100% of the H\(\alpha\)-detected SFACT objects have the necessary H\(\alpha\) and H\(\beta\) lines for this correction. The [O iii]-detected sources are corrected using the H\(\beta\) and H\(\gamma\) lines, both of which are present in only \(\sim\)25% of the spectra.
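For the H\(\alpha\)-detected objects, the correction takes the usual Balmer-decrement form. The sketch below assumes a Case B intrinsic H\(\alpha\)/H\(\beta\) ratio of 2.86 and representative Cardelli et al. (1989)-type extinction coefficients; the k(\(\lambda\)) values are our assumptions, not necessarily the ones used by the SFACT pipeline.

```python
# Hedged sketch of the Balmer-decrement reddening correction.
import numpy as np

K = {'Halpha': 2.53, 'Hbeta': 3.61, 'OIII5007': 3.47}  # assumed k(lambda), R_V = 3.1

def ebv_from_balmer(f_ha, f_hb, intrinsic=2.86):
    # E(B-V) from the observed Halpha/Hbeta ratio (Case B intrinsic 2.86)
    ratio = f_ha / f_hb
    return max(0.0, 2.5 / (K['Hbeta'] - K['Halpha']) * np.log10(ratio / intrinsic))

def deredden_ratio(obs_ratio, line, ref, ebv):
    # correct an observed line ratio (line / ref) for reddening
    return obs_ratio * 10 ** (0.4 * ebv * (K[line] - K[ref]))

ebv = ebv_from_balmer(3.5, 1.0)                       # observed decrement of 3.5
print(ebv, deredden_ratio(3.0, 'OIII5007', 'Hbeta', ebv))
```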
A given line ratio will only be measured if both of the relevant emission lines are within the observed spectral wavelength range (4760\(-\)7580 A) _and_ are measured by our software. Because SFACT discovers galaxies located within discrete redshift windows and we have used a fixed spectral coverage for our follow-up spectra, objects detected via a given emission line (column (7)) will always have the same set of emission-line ratios available. For example, H\(\alpha\)-detected galaxies will always have the [N ii]/H\(\alpha\) and [O iii]/H\(\beta\) ratios but never [O ii]/H\(\beta\). Similarly, [O iii]-detected galaxies will possess the [O iii]/H\(\beta\) and [O ii]/H\(\beta\) ratios in our tables, but never [N ii]/H\(\alpha\). Galaxies detected via their [O ii] emission will have no line ratios listed.
For the reasons specified in Section 4.3 our survey data tables do not include the individual line EWs or fluxes.
[Table 2: Spectral data for the SFACT objects in field SFF10; columns as in Table 1.]
While the majority of our EWs are reliable, the nature of the sky-subtraction uncertainty is such that it is impossible to know with confidence which objects possess less-robust values that should be ignored. For the purposes of calculating relevant physical quantities such as star-formation rates (SFRs), the preferred methodology would be to use the emission-line flux measured from the NB images, since the latter will typically be a more robust measurement and will include all of the flux from each source. Furthermore, the fluxes of lines other than the one detected in the NB imaging survey can, in most cases, be scaled up using the relative line ratios from the spectra.
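As an illustration of that preferred route, an H\(\alpha\) star-formation rate can be computed from the NB-imaging line flux and the spectroscopic redshift. The Kennicutt (1998) calibration used here is a common choice shown only as an example; the flux is placed at the survey limit, and the redshift corresponds to an H\(\alpha\) detection in NB3.

```python
# Sketch: NB-imaging line flux + spectroscopic redshift -> SFR.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.27)

def halpha_sfr(f_ha_cgs, z):
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    l_ha = 4.0 * np.pi * d_l**2 * f_ha_cgs   # erg/s
    return 7.9e-42 * l_ha                     # Msun/yr (Kennicutt 1998)

print(halpha_sfr(1.0e-16, 0.137))  # survey-limit flux at an NB3 Halpha redshift
```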
### Example SFACT Spectra
#### 5.2.1 Spectra of example objects illustrated in SFACT2
SFACT2 presents example images taken with both the broadband and narrowband filters used in our survey (Figures 3-6 in that paper). These images include objects detected in each of the three NB filters and with each of the three primary emission lines (H\(\alpha\), [O iii], and [O ii]) used to detect galaxies in the survey. In addition, one SFACT QSO is also displayed. The objects selected to illustrate the survey detections were also picked to show the range of emission-line fluxes detected in the NB filters. Figures 2 through 5 in the current paper present spectra of these same objects. The object's SFACT identifier, redshift, and activity class are labeled on each plot. The red-dashed vertical lines denote the wavelength range covered by the narrow-band filter in which the source was detected.
Figure 2 shows some of the H\(\alpha\)-detected sources in the pilot study. The object in panel (a) is an H ii region in a dwarf irregular galaxy with a redshift of z = 0.0034. It is the only H\(\alpha\)-detected galaxy in the NB2 filter in the pilot-study fields and is the lowest redshift source in the pilot study.
[Table 3: Spectral data for the SFACT objects in field SFF15; columns as in Table 1.]
The NB2 filter probes a redshift range of \(-0.002\) to 0.011 for the H\(\alpha\) line, resulting in a very small volume being surveyed. Hence, it is no surprise that the pilot study contains only one NB2 H\(\alpha\) detection (see SFACT1). Panel (b) shows the spectrum of an H ii region located in a spiral galaxy that was detected in NB1. It belongs to a fairly metal-rich system, based on the observed ratios of [N ii]/H\(\alpha\) and [O iii]/H\(\beta\) (see Figure 9). This spectrum gives a good example of the additional lines we can detect when an object is selected by its H\(\alpha\) line: the [S ii] doublet, [N ii]\(\lambda\lambda\)6583, 6548, [O iii]\(\lambda\lambda\)5007, 4959, and H\(\beta\) are all
clearly visible. Finally, panel (c) shows a low-metallicity star-forming system detected by the NB3 filter. It has a \(g\)-band magnitude of 22.4, from which its absolute magnitude is derived to be M\({}_{g}=-16.7\) (i.e., comparable to the luminosity of the SMC, but at a distance of \(\sim\)650 Mpc).
In Figure 3, we present three example spectra of [O iii]-detected sources. Panel (a) is a Seyfert 2 galaxy detected in the NB2 filter. Its emission lines are clearly broader than the lines seen in the two star-forming galaxies shown in panels (b) and (c), and its line ratios are indicative of a non-stellar ionizing source (e.g., [O iii]/H\(\beta>10\)). It has an absolute magnitude of M\({}_{g}=-19.9\). The second and third spectra displayed in this figure are NB1 and NB3 detections. They each show star-forming systems and have M\({}_{g}=-19.1\) and \(-19.2\), respectively. Note that our [O iii]-detected spectra usually contain additional lines like [O iii]\(\lambda\)4959, H\(\beta\), H\(\gamma\), and the [O ii] doublet.
Example spectra of [O ii]-detected sources are shown in Figure 4. As is the case with most of the [O ii]-detected galaxies, it is difficult to say much about the nature of these sources due to the limited number of lines detected in the survey's wavelength range. However, the [O ii] doublet is usually sufficiently resolved in our spectra so that the two lines that make up the doublet can be distinguished from each other. Additional lines, such as [Ne iii]\(\lambda\)3869, are also sometimes present in the [O ii]-detected spectra. [Ne iii] is visible in both panels (a) and (b) of the figure, though it is redshifted out of the survey's wavelength range in panel (c), the displayed NB3 detection. If we wish to gain additional information about the spectral characteristics of these objects, such as their [N ii]/H\(\alpha\) and [O iii]/H\(\beta\) ratios, follow-up observations redward of our current wavelength coverage will need to be carried out.
Figure 4: Spectra of three objects detected by their [O ii]\(\lambda\)3727 doublet. The red-dashed vertical lines denote the wavelength range covered by the narrow-band filter in which the source was detected. Key emission lines are labelled in the middle panel. The flux scales on the y-axes are in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). (a): An NB2-detected galaxy, included to show the faint nature of some of our sources. (b): A galaxy detected by the NB1 filter. [Ne iii]\(\lambda\)3869 is visible in both this spectrum and in panel (a). (c): An NB3-detected source, showing the limited information present in the spectra of the NB3 [O ii] detections.
Figure 5: Spectrum of an SFACT-detected QSO. The red-dashed vertical lines denote the wavelength range covered by the narrow-band filter in which the source was detected. Key emission lines are labelled. The flux scale on the y-axis is in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). This QSO was detected by its C iii]\(\lambda\)1908 line.
The [O ii]-detected sources are among the faintest in apparent magnitude in the SFACT survey due to their distance. The spectrum in panel (a) of Figure 4 comes from a source with a _g_-band magnitude of 23.77 which is fainter than the 23.15 median of the sources in the pilot study (see SFACT1). This is not the faintest object in the SFACT survey but it is the faintest object presented in this series of examples. It has an absolute magnitude of M\({}_{g}=-19.7\) while the objects presented in (b) and (c) have absolute magnitudes M\({}_{g}\) of \(-20.8\) and \(-21.1\). All three of these sources are intrinsically luminous, which makes sense given that the survey was able to detect them at such large redshifts.
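These absolute magnitudes follow directly from the adopted cosmology. As a sanity check (ignoring k-corrections, and taking z \(\approx\) 0.77 as appropriate for an [O ii] detection in the NB2 filter, i.e., 6590/3727 \(-\) 1):

```python
# Sketch: apparent-to-absolute magnitude conversion for the values above
# (no k-correction applied; z for panel (a) approximated from the NB2 window).
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.27)

def abs_mag(m_app, z):
    return m_app - cosmo.distmod(z).value

print(abs_mag(23.77, 0.77))   # ~ -19.7, matching the text
```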
Figure 5 shows the spectrum of a QSO detected in SFACT. The survey image data for this object is shown in Figure 6 of SFACT2. It is detected in the NB2 filter by its C iii]\(\lambda\)1908 line, and C iv\(\lambda\)1549 is also seen in the spectrum. It has a redshift of z = 2.46 and an absolute magnitude of M\({}_{g}=-25.7\). This is a very luminous object with very broad emission lines, which is consistent with expectations for QSOs.
#### 5.2.2 Spectra of ELGs detected via non-standard lines
In addition to detecting galaxies with the three primary lines that the SFACT survey was expected to detect, objects are also detected via a variety of other emission lines. Though these objects are not the norm for the SFACT survey (\(\sim\)5% of non-QSO detections), it is worth highlighting a few examples to demonstrate the types of objects detected with these lines. Figure 6 presents spectra of some SFACT objects detected by a line other than H\(\alpha\), [O iii], or [O ii].
Panel (a) of Figure 6 shows an object detected due to its [S ii] emission. Since there are no [S ii] detections in the pilot-study fields we have included this object from the SFF08 field to illustrate this type of detection. This particular object is an H ii region in a metal-rich, luminous star-forming spiral galaxy. Note that [S ii] detections are only possible in the NB1 and NB3 filters, since an [S ii] detection in the NB2 filter would require the object to possess a large blueshift. Panel (b) of the figure shows an object detected by its [O iii]\(\lambda\)4959 line. There are 16 objects in the pilot study fields that are detected by this line. As is evident in the figure, the 5007 line has been redshifted out of the filter's wavelength range. Since the 4959 line is significantly weaker than the 5007 line, it makes sense that this is somewhat of an unusual occurrence in the survey fields. Most of the 4959 detections are strong ELGs with high EW lines, such as SFF15-NB2-A6234 shown here. Panel (c) shows one of the three H\(\beta\) detections in the pilot-study fields, this one detected in the NB1 filter. It has an absolute magnitude of M\({}_{g}=-19.8\) and is likely a Green Pea-like star-forming galaxy. Finally, panel (d) presents the spectrum of the only [Ne iii]\(\lambda\)3869 detection in the pilot-study fields. It has a redshift of z = 0.79 and an absolute magnitude of M\({}_{g}=-20.4\).
The fact that SFACT can detect objects via these additional emission lines speaks to the sensitivity of the survey method. However, a secondary reason for presenting these spectra of ELGs detected via non-standard lines is to inject a note of caution regarding NB surveys. Most such surveys with depths comparable to or greater than that of SFACT will typically not possess follow-up spectra for most of their ELG candidates. It is clear that a sensitive NB survey can easily detect galaxies via their [S ii] lines that might well be mistaken for H\(\alpha\)-detections. Alternatively, such surveys will likely detect numerous galaxies via their H\(\beta\) or [O iii]\(\lambda\)4959 lines, and mistake them for [O iii]\(\lambda\)5007 detections. While the results presented here suggest that the level of such contamination should be small, it is likely to increase with the depth of the NB survey.
#### 5.2.3 Spectra of SFACT QSOs
While most of the emission-line detections in SFACT are of star-forming galaxies, there are also thirteen QSOs found in the pilot-study fields to date. From the outset of the project, we anticipated finding modest numbers of line-selected QSOs in our NB images. The expectation was that we would detect QSOs via one of the common UV emission lines, such as Mg ii \(\lambda\)2798, C iii] \(\lambda\)1908, C iv\(\lambda\)1549, or Ly\(\alpha\)\(\lambda\)1215. In addition to the quasar presented in Figure 5, we present four additional example spectra of some of these objects in Figure 7, illustrating detections in each of the first three lines listed above. There are no Ly\(\alpha\)-detected QSOs in the pilot-study fields, but several have been cataloged in other SFACT fields and will be presented in subsequent catalog papers.
Panel (a) of Figure 7 shows a quasar with a Mg ii \(\lambda\)2798 emission line that was redshifted into the NB3 filter. It has a redshift of z = 1.66 and an absolute magnitude of M\({}_{g}=-24.4\). Panel (b) shows a C iii] \(\lambda\)1908-detected QSO discovered with the NB2 filter. It has an absolute magnitude of M\({}_{g}=-23.2\) and a redshift of z = 2.48. It is one of the faintest of the SFACT-detected QSOs, with an r magnitude of 22.85; this helps to explain the poor quality of its spectrum. Panel (c) of the figure shows a quasar that was detected by its C iv \(\lambda\)1549 line in the NB1 filter. Its redshift is z = 3.49 and its absolute magnitude is M\({}_{g}=-26.2\). This object is the second highest-redshift object in the pilot study, surpassed only by another C iv\(\lambda\)1549-detected quasar at
z = 3.51. In all of the spectra shown in Figure 7, at least one additional UV emission line is present, making the line identifications and redshift measurements secure.
Panel (d) of the figure shows a quasar that strongly departs from the properties of the others. While it was "detected" in the NB2 filter, our spectrum reveals that no line is present in the wavelength range covered by that filter. This happens occasionally when QSOs with strong, steep blue continua get selected by the automated software even though they have no line in the narrow-band filter. We call these objects color-selected QSOs and they mostly occur in the NB2 filter. There are only two such objects detected among the pilot-study fields, but many other examples exist in the other SFACT fields. In the example shown here, the measured redshift is z = 0.844, the r magnitude is 19.50 (the brightest of the 13 SFACT QSOs in the current sample), and it has an absolute magnitude of M\({}_{g}=-23.8\).
The reason these objects are detected in the survey has to do with our selection methodology. For the NB2 filter, the \(r\) and \(i\) filters are summed together to form the broadband continuum image. These filters are roughly equivalent to the SDSS filters and span the 5600\(-\)8200 A range. The NB2 filter is located at 6590 A, closer to the blue end of this wavelength range. For a QSO like the one in Figure 7(d), the strongly blue-sloping continuum is much higher at the location of the NB2 filter than it is at the middle of the continuum bandpass (around 7000 A). Therefore, for an almost purely continuum source, more flux is measured at NB2 than from the scaled continuum image at 7000 A. When the NB and continuum fluxes are compared, there is excess flux in the NB filter that is detected by the software. For more information on the imaging data and the specifics of the survey's selection methods see SFACT2.
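A toy calculation shows the direction of this effect. The power-law slopes below are purely illustrative, and the real selection also depends on the image-scaling details described in SFACT2.

```python
# Toy illustration of the color-selection effect: for a blue power-law
# continuum f_lambda ~ lambda**beta (beta < 0), the flux at the NB2
# wavelength (6590 A) exceeds the flux at the effective continuum
# wavelength (~7000 A), mimicking a narrow-band excess.
import numpy as np

for beta in (-1.5, -3.0, -5.0):             # assumed continuum slopes
    ratio = (6590.0 / 7000.0) ** beta       # f(6590) / f(7000)
    dm = 2.5 * np.log10(ratio)              # apparent NB "excess" in mag
    print(f"beta = {beta:4.1f}  Delta-m = {dm:+.2f} mag")
```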
For the purposes of the SFACT survey, we will catalog the color-selected QSOs that are selected by our software. Strictly speaking, the color-selected QSOs are NOT emission-line objects, but since they represent _bona fide_ NB detections, it seems appropriate to retain them in our catalogs. They will be treated differently, however, from the line-selected QSOs (which are substantially more common), and not used in any statistical studies of the SFACT QSOs.
Figure 6: Spectra of four galaxies detected by a non-primary emission line (i.e., not their H\(\alpha\), [O iii], or [O ii] lines). The red-dashed vertical lines denote the wavelength range covered by the narrow-band filter in which the source was detected. Key emission lines are labelled. The flux scales on the y-axes are in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). (a): An object detected by the [S ii] doublet in the NB1 filter. This object is not from the pilot study. (b): A star-forming galaxy detected via [O iii]\(\lambda\)4959 in the NB2 filter. (c): An H\(\beta\) detection discovered in NB1. (d): An object detected by its [Ne iii]\(\lambda\)3869 line in the NB1 filter.
### Properties of the SFACT Galaxies
A preliminary evaluation of the properties of the SFACT galaxies detected in the pilot-study fields is presented in SFACT1. Here we provide an updated view of some of these properties, based on the results of the spectroscopic analysis presented in the current paper. In particular, we present updated redshift histograms in Section 5.3.1 and substantially enhanced emission-line diagnostic diagrams that make use of our re-examination measurements in Section 5.3.2.
Table 4 provides a summary of the specific emission lines used to detect each of the SFACT galaxies in the current study. Only spectroscopically-verified objects are included. The first column of the table lists the emission lines used to detect our sources, while the number of detections for each of these lines is indicated for each of the NB filters individually (columns 2-4) and as a total (column 5). The last column specifies the type of objects being detected and the redshift ranges relevant for each line.
As mentioned in the previous section, SFACT is capable of detecting objects via multiple emission lines. Objects are detected via a total of nine different lines in these three survey fields. Our broader sample also includes additional lines that do not happen to be represented in the current sample (e.g., [S ii] \(\lambda\lambda\)6717,6731, H\(\gamma\)\(\lambda\)4340, and Ly\(\alpha\)\(\lambda\)1215).
Not surprisingly, the frequency with which SFACT detects a given line varies substantially. Among the z \(\lesssim\) 1.0 ELGs there is an obvious tendency for the survey to detect objects via the stronger lines: H\(\alpha\) (n = 125), [O iii] \(\lambda\)5007 (n = 162), and [O ii] \(\lambda\)3727 (n = 95), consistent with our expectations. These three lines are the primary lines detected by SFACT. The other optical lines like [Ne iii] \(\lambda\)3869, H\(\beta\)\(\lambda\)4861, and [O iii] \(\lambda\)4959 occur much less frequently. It is worth noting that for some redshifts, _both_ of the [O iii] lines are located within the NB filter, which will enhance the probability of detection. These cases are all classified as [O iii] \(\lambda\)5007 detections, since this line is the stronger of the two.
Figure 7: Spectra of four QSOs detected in the pilot study. The red-dashed vertical lines denote the wavelength range covered by the narrow-band filter in which the source was detected. Key emission lines are labelled. The flux scales on the y-axes are in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). (a): A QSO detected via Mg ii\(\lambda\)2798 in NB3. C iii]\(\lambda\)1908 is also visible. (b): A C iii]\(\lambda\)1908, NB2 detected QSO, with C iv\(\lambda\)1549 also visible. (c): A QSO detected by C iv\(\lambda\)1549 in the NB1 filter. Ly\(\alpha\) is also visible. (d): A color-selected quasar detected in NB2 without any emission line present in the filter. See text for explanation on how this object was detected.
Future survey papers employing larger samples will explore the impact of this selection-function enhancement.
In Table 4 we break out H\(\alpha\) detections that are flagged as H ii regions and as host galaxies of H ii regions. SFACT1 details the reasons for distinguishing between these two categories of objects. In short, any object that is detected in SFACT as containing an H ii region outside of the nucleus is labeled as a host galaxy. In some, but not all, cases one or more bright H ii regions may also be cataloged (appropriately linked to their host galaxy). This is to allow abundance measurements, particularly in cases where there is little or no emission associated with the nucleus. In the pilot-study fields there are 36 galaxies identified as hosting H ii regions, all detected via the H\(\alpha\) line, and 17 additional H ii regions. The remaining H\(\alpha\) detections are stand-alone ELGs, which are predominantly dwarf star-forming galaxies (see SFACT1).
A total of eleven line-selected QSOs are also included in Table 4. Here there is less extreme variation among the numbers of objects detected by the three UV lines. There is an indication that the Mg ii \(\lambda 2798\) line is favored over the others, which is perhaps no surprise since objects detected via this line are located at smaller distances and hence would appear brighter for a fixed QSO line luminosity. The small number of QSOs present in the current sample precludes a detailed assessment of the QSO population at this time.
As the imaging and spectroscopic observations of additional SFACT survey fields are completed, we will have access to thousands of ELGs with data similar in nature to those presented in the pilot-study papers. Future SFACT papers will explore the properties of the survey constituents, as well as their completeness limits and selection function, in substantial detail.
#### 5.3.1 Redshift Distributions
A histogram of the redshifts of all the objects in the pilot-study fields is shown in Figure 8. The figure is broken into two subplots. The left subplot shows the SFACT ELGs that are detected at redshifts below 1.05, and the right shows the SFACT line-detected QSOs at redshifts of 1.25 and beyond. The binning of the two subplots changes, with the left plot having bins of width 0.05 and the right having bins of width 0.25.
There are four groups of objects in Figure 8, each coded by a different color. Each group represents a portion of the pilot-study sample detected by one or more specific emission lines. The SFACT survey discovers galaxies within discrete redshift windows, where each window is probed through a different narrow-band filter and each filter can detect different lines at different redshifts. Each "peak" in the figure represents a different redshift window of the SFACT survey where one of the primary emission lines falls within one of the NB filters.
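As an illustration of how these windows arise, the short sketch below computes z = \(\lambda_{\rm obs}/\lambda_{\rm rest}-1\) at the filter edges for each line-filter combination. The ~90 A full filter width is an assumption adopted here for illustration; see SFACT2 for the actual filter properties:

```python
# Each NB filter maps a rest-frame line to a narrow redshift slice.
filters = {"NB1": 6950.0, "NB2": 6590.0, "NB3": 7450.0}  # central wavelengths (A)
lines = {"Halpha": 6563.0, "[OIII]5007": 5007.0, "[OII]3727": 3727.0}
width = 90.0  # assumed full filter width in Angstroms (illustrative)

for fname, lam_c in filters.items():
    for lname, lam_rest in lines.items():
        z_lo = (lam_c - width / 2) / lam_rest - 1
        z_hi = (lam_c + width / 2) / lam_rest - 1
        print(f"{lname:>11} in {fname}: z = {z_lo:.3f} - {z_hi:.3f}")
```

For example, H\(\alpha\) through NB1 gives z \(\approx\) 0.052\(-\)0.066 and [O iii]\(\lambda\)5007 through NB2 gives z \(\approx\) 0.307\(-\)0.325, matching the windows quoted in the text to within the assumed filter width.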
The first group contains the lowest-redshift objects: the H\(\alpha\)-detected galaxies. Unlike the redshift histogram in SFACT1, Figure 8 includes the H ii regions as well as the host galaxies and ELGs. The first redshift bin in this group contains the objects discovered by the NB2, 6590 A filter, which spans a redshift range of \(-0.002-0.011\); the second bin includes the objects discovered by the NB1, 6950 A filter, which spans redshifts \(0.052-0.066\); and the third bin represents galaxies discovered by the NB3, 7450 A filter, across redshifts \(0.129-0.144\). All H\(\alpha\)-detected histogram bins are shown in red.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Line & \# of NB1 & \# of NB2 & \# of NB3 & Total, All & Object Description \\ & Detections & Detections & Detections & Filters & \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline C iv\(\lambda\)1549 & 3 & 0 & 0 & 3 & QSOs - z range 3.22 to 3.85 \\ C iii] \(\lambda\)1908 & 0 & 2 & 0 & 2 & QSOs - z range 2.43 to 2.94 \\ Mg ii\(\lambda\)2798 & 1 & 3 & 2 & 6 & QSOs - z range 1.34 to 1.69 \\ \([\)O ii] \(\lambda\)3727 & 40 & 26 & 29 & 95 & ELGs - z range 0.75 to 1.02 \\ \([\)Ne iii] \(\lambda\)3869 & 1 & 0 & 0 & 1 & ELGs - z range 0.69 to 0.95 \\ H\(\beta\)\(\lambda\)4861 & 2 & 0 & 1 & 3 & ELGs - z range 0.34 to 0.55 \\ \([\)O iii] \(\lambda\)4959 & 3 & 9 & 4 & 16 & ELGs - z range 0.32 to 0.52 \\ \([\)O iii] \(\lambda\)5007 & 47 & 62 & 53 & 162 & ELGs - z range 0.30 to 0.50 \\ H\(\alpha\)\(\lambda\)6563 – All & 53 & 2 & 70 & 125 & ELGs + H ii regions - z range 0.00 to 0.15 \\ H\(\alpha\)-detected H ii regions & 12 & 1 & 4 & 17 & H ii regions - z range 0.00 to 0.15 \\ Host Galaxies w/ H ii regions & 21 & 1 & 14 & 36 & Host Galaxies - z range 0.00 to 0.15 \\ \hline \end{tabular}
\end{table}
Table 4: Summary of Detected Emission Lines in Pilot-Study Fields
The population of the first bin of H\(\alpha\)-detected objects is small, as expected due to the small volume of space searched in the NB2 filter. The other two narrow-band filters yielded higher numbers of H\(\alpha\) detections.
The second grouping of bins in Figure 8 shows primarily [O iii]\(\lambda\)5007-detected galaxies, shown in green. For the [O iii]\(\lambda\)5007 line, the NB2 filter probes a redshift range of \(0.308-0.325\), which places these sources within the first green bin. The second bin lines up with the NB1 redshift range of \(0.378-0.397\) and the fourth bin includes the NB3 redshift range of \(0.480-0.500\). In total, the [O iii]\(\lambda\)5007-selected sample contains 162 objects and represents the largest sample in the SFACT pilot study. This section of the redshift histogram also contains [O iii]\(\lambda\)4959 and H\(\beta\)-detected ELGs. The [O iii]\(\lambda\)4959-detected objects have their \(\lambda\)5007 line redshifted out of the filter so that only the \(\lambda\)4959 line is captured in the filter's wavelength range. Since the \(\lambda\)4959 line is significantly weaker than the \(\lambda\)5007 line, we expect to detect fewer galaxies this way. This holds true: only 16 ELGs are detected via the [O iii]\(\lambda\)4959 line. There are also three H\(\beta\)-detected ELGs in this portion of the histogram. The final group of objects in the left panel of Figure 8 consists primarily of sources detected via the [O ii]\(\lambda\)3727 doublet. These sources are shown in blue. The [O ii] line is detected in the redshift range of 0.757\(-\)0.780 in the NB2 filter, in the range 0.852\(-\)0.877 in NB1, and 0.988\(-\)1.015 in NB3. The latter group of detections is split between two histogram bins in Figure 8. At these distances all of the detected objects are unresolved in our imaging data and appear as point sources. They are also some of the faintest objects in the survey. A single source is detected via its [Ne iii]\(\lambda\)3869 line in the NB1 filter at z = 0.79.
Finally, there are a total of 11 line-selected QSOs with spectroscopic follow-up in the SFACT pilot-study fields. These objects are shown in the right panel of Figure 8. Six were detected by Mg ii\(\lambda\)2798 with redshifts between 1.34 and 1.67, two were detected by C iii]\(\lambda\)1908 with redshifts 2.47 and 2.48, and three were detected by the C iv\(\lambda\)1549 line with redshifts between 3.48 and 3.51. This is a small number of detections compared to the number of ELGs discovered by SFACT. However, given the relatively small volumes of the three pilot-study fields, combined with the relative rarity of QSOs, the small number of QSOs is not unexpected. For the completed SFACT survey (\(\sim 60\) total fields), we expect to detect about 200\(-\)300 QSOs located in the survey redshift windows.
#### 5.3.2 Emission-Line Diagnostic Diagrams
Emission-line diagnostic diagrams are used to separate galaxies based on their ionization sources (star-forming or AGN) and are often used to reveal some of the physical conditions present in these galaxies (Baldwin et al., 1981; Veilleux and Osterbrock, 1987). The most famous of these emission-line diagnostic diagrams is the Baldwin, Phillips & Terlevich (BPT) diagram, which uses the [O iii]/H\(\beta\) ratio vs. the [N ii]/H\(\alpha\) ratio to separate star-forming galaxies from their AGN counterparts. In addition to the BPT diagram, Baldwin et al. (1981) also present an [O iii]/H\(\beta\) vs. [O ii]/[O iii] diagram. The redshift ranges and spectral coverage of the survey mean that the SFACT galaxies only have certain emission lines present in their spectra, and different lines will fall outside the wavelength range of the follow-up spectra depending on which emission line was detected in the SFACT narrow-band filters. As a result, we will display the SFACT pilot-study sample on both the classic BPT diagram and the [O iii]/H\(\beta\) vs. [O ii]/[O iii] diagram.
Before these objects can be added to any emission-line diagnostic diagram, their emission-line ratios need to be corrected for underlying Balmer absorption and reddening when possible. In the case of the underlying absorption, we adopt a statistical correction of 2 A of underlying absorption for each Balmer line, which is consistent with the correction in Skillman and Kennicutt (1993) and Hirschauer et al. (2022). The corrected Balmer lines are then used to determine the reddening correction, c\({}_{\rm H\beta}\), following the standard procedure (Osterbrock and Ferland, 2006).
Figure 8: Histogram of SFACT ELGs binned by redshift. SFACT samples the universe in discrete redshift windows with each line-filter combination probing a different redshift range. The bin width changes between the left and right sections of the plot. Bins in the left panel have widths of 0.05 and bins on the right have widths of 0.25.
We use H\(\alpha\) and H\(\beta\) for H\(\alpha\)-detected sources and H\(\beta\) and H\(\gamma\) for [O iii]-detected sources. In cases where Balmer lines are not detected or the computed value of c\({}_{\rm H\beta}\) comes out negative, c\({}_{\rm H\beta}\) is set to 0. The derived values of c\({}_{\rm H\beta}\) range between 0.00 and 1.77 for the current sample, with a mean value of 0.25. The corrected emission-line ratios shown in the plots in this section are the same ratios presented in the last columns of Tables 1 through 3.
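A minimal sketch of this correction sequence is given below. The Case B intrinsic ratio (2.86 for H\(\alpha\)/H\(\beta\)), the approximate reddening-curve difference, and the example fluxes are standard illustrative values, not numbers taken from the survey pipeline:

```python
import math

def correct_absorption(f_obs, f_cont, ew_abs=2.0):
    """Add back the flux removed by underlying stellar absorption:
    delta_F = EW_abs * (continuum level at the line). EW_abs = 2 A is the
    statistical correction adopted in the text."""
    return f_obs + ew_abs * f_cont

def c_hbeta(ratio_obs, ratio_int, delta_f):
    """Solve R_obs = R_int * 10**(-c * delta_f) for the reddening
    coefficient c_Hbeta, where delta_f = f(line) - f(Hbeta) on the adopted
    reddening curve; negative solutions are clamped to 0 as in the text."""
    c = -math.log10(ratio_obs / ratio_int) / delta_f
    return max(c, 0.0)

# Illustrative example: f(Halpha) - f(Hbeta) ~ -0.33 on a standard
# reddening curve; the fluxes and continuum levels below are invented.
f_ha = correct_absorption(4.0e-15, 2.0e-18)   # erg/s/cm^2, made-up values
f_hb = correct_absorption(1.2e-15, 2.0e-18)
print(f"c_Hbeta = {c_hbeta(f_ha / f_hb, 2.86, -0.33):.2f}")   # ~0.20
```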
For the H\(\alpha\)-detected sample, our spectral wavelength coverage does not extend to the [O ii] doublet and [S ii]\(\lambda\lambda\)6717,6731 falls outside the survey's redshift range for NB3-detected objects. Therefore, the BPT diagram is the best option for plotting these objects on an emission-line diagnostic diagram. However, only 56 of the 125 H\(\alpha\)-detected sources have the four required lines automatically detected by WRALF. Many of these objects have [N ii] or H\(\beta\) lines that fall below the signal-to-noise ratio required by the software to automatically detect them and must be re-examined. As described in Section 4.2, we look at the H\(\alpha\)-detected sources that are missing these lines and determine if there is a line present at the expected location for [N ii] or H\(\beta\). These emission lines are re-examined and are labeled as Category 1 or Category 2 measurements, depending on the SNR of the line. Based on these additional measurements, we have added 52 objects to the BPT diagram for a total of 108 of the pilot study's 125 H\(\alpha\)-detected objects. Note that objects that possess lower-quality Category 2 measurements for both the [N ii] and H\(\beta\) lines have been excluded from the plot.
The H\(\alpha\)-detected galaxies are plotted in Figure 9. Here blue circles are objects that were measured by WRALF or have a Category 1 measurement of either [N ii] or H\(\beta\) (or both). Category 2 objects with the [N ii] or H\(\beta\) line re-examined are shown as left-pointing orange triangles or upward-pointing red triangles, respectively. In cases where the re-examined line measurement is an upper limit, the direction the triangle points indicates the direction these objects could move on the BPT diagram if the emission line were measured more precisely with a higher signal-to-noise observation. In total, there are 26 objects with Category 2 [N ii] emission lines and 4 objects with Category 2 H\(\beta\) emission lines.
The empirical Kauffmann et al. (2003) line is indicated by the dashed line and separates the SF and AGN components of the diagram. The dashed-dotted curve that goes through the star-forming section of the graph is from Dopita & Evans (1986) and represents the high-excitation stellar photo-ionization models from their work. Most of the sources discovered by SFACT are star-forming galaxies and fall below the Kauffmann et al. (2003) line. Additionally, many of the objects on the left side of the diagram have Category 2 [N ii] emission line measurements. This is because [N ii] is typically very weak in metal-poor galaxies. The objects on the far left of the diagram have weakly determined [N ii] lines and, in some cases, the re-examination measurement is essentially an upper limit of this line's flux. There is one potential Seyfert 2 galaxy present in the upper right portion of Figure 9, although this is a system with Category 2 line measurements and hence its line ratios are highly uncertain. There is also a potential LINER that falls close to the Kauffmann et al. (2003) line in the bottom right.
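For reference, the Kauffmann et al. (2003) demarcation is commonly written as log([O iii]/H\(\beta\)) = 0.61/(log([N ii]/H\(\alpha\)) \(-\) 0.05) + 1.3, defined for log([N ii]/H\(\alpha\)) < 0.05. A minimal classification sketch using this published form follows; the two example points are illustrative, not SFACT measurements:

```python
def kauffmann_2003(log_n2ha):
    """Empirical SF/AGN boundary from Kauffmann et al. (2003):
    log([OIII]/Hbeta) = 0.61 / (log([NII]/Halpha) - 0.05) + 1.3,
    valid only for log([NII]/Halpha) < 0.05."""
    return 0.61 / (log_n2ha - 0.05) + 1.3

def bpt_class(log_n2ha, log_o3hb):
    """Classify a BPT-diagram point relative to the Kauffmann line."""
    if log_n2ha >= 0.05 or log_o3hb > kauffmann_2003(log_n2ha):
        return "AGN"
    return "star-forming"

print(bpt_class(-1.2, 0.6))   # metal-poor SF galaxy -> "star-forming"
print(bpt_class(0.2, 0.9))    # upper-right region   -> "AGN"
```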
For the [O iii]-detected sample, none of the lines around H\(\alpha\) are present in our spectral wavelength coverage. However, the [O ii] line is within the survey's wavelength range for these objects. Thus, the log([O iii]/H\(\beta\)) vs. log([O ii]/[O iii]) diagram from Baldwin et al. (1981) was chosen to display the [O iii]-detected galaxies and is shown in Figure 10. The symbols in this figure are the same as in Figure 9, except that here the orange left-pointing triangles denote [O ii] Category 2 re-examinations.
Figure 9: We present the BPT diagram for the H\(\alpha\)-detected pilot-study sample. Objects measured by WRALF or re-examined and labeled as a Category 1 measurement are displayed as blue circles. Objects with the [N ii] or H\(\beta\) line re-examined and labeled as a Category 2 measurement are shown as left-pointing orange triangles or upward-pointing red triangles, respectively. Objects that have a Category 2 measurement for both the [N ii] and H\(\beta\) re-examination have been excluded. Characteristic error bars for the two groups of objects (WRALF+Category 1 or Category 2) are shown. The dashed-dotted line is from Dopita & Evans (1986) and is derived from stellar photo-ionization models. The dashed line is from Kauffmann et al. (2003) and is an empirical dividing line between the star-forming galaxies and AGN.
Only 95 of the 179 objects have all the lines automatically detected by the software, with 27 added by the re-examination process, for a total of 122 objects on this plot. Of those 27, 13 are Category 1 re-examinations. Of the 14 remaining Category 2 measurements, 3 are [O ii] measurements and 11 are H\(\beta\) measurements.
The dashed line in this figure shows the trend line from Figure 1 in Baldwin et al. (1981) and fits emission-line ratios of approximately solar metallicity H ii regions and planetary nebulae. The fact that many of the galaxies are above this line implies they have lower abundances. This, taken together with the fact that they also have lower luminosities (see SFACT1), implies that they would occupy the upper-left portion of the star-forming sequence in Figure 9. A single confirmed Seyfert 2 galaxy is plotted in this figure as well. This object, whose spectrum is shown in Figure 3a, falls above the trend line at log([O iii]/H\(\beta\)) of about 1.0. It is possible that some of the other galaxies plotted here are also Seyfert 2 AGN, but without their [N ii]/H\(\alpha\) ratios they cannot be easily separated from their star-forming counterparts.
Finally, the [O ii]-detected sample has an extremely limited number of lines in the survey's spectral coverage. For this reason, these objects have not been plotted on any emission-line diagnostic diagram. In the future, we plan to reobserve the [O ii]-detected galaxies with spectral coverage that probes deeper into the red part of the spectrum, allowing additional lines to be measured so that these objects, too, can be placed on such diagrams.
## 6 Summary and Conclusions
The current paper, the third of a series, presents the spectroscopic data from the SFACT Survey pilot-study fields SFF01, SFF10, and SFF15 and describes how the spectroscopic portion of the survey is being carried out. Previous papers in this series present an overview of the overall survey (SFACT1) and describe the imaging portion of SFACT (SFACT2). The primary goals of the spectroscopic component of the SFACT program include the verification of objects cataloged in our narrowband images, including providing feedback on the candidate selection process, and the measurement of key properties such as activity class (e.g., star forming vs. AGN), accurate redshifts, and physical characteristics such as internal absorption and metal abundances.
Using the WIYN 3.5m telescope, as well as the multi-fiber positioner Hydra and the Bench Spectrograph, the survey is able to carry out follow-up spectroscopy on a large number of potential ELG candidates. Our chosen wavelength range of \(4760-7580\) A allows spectra to be simultaneously obtained for all objects detected in the three narrow-band filters of the SFACT survey. Each multi-spectral image is run through various processing steps, after which emission lines within the spectra are identified and measured semi-automatically using dedicated software. Any lines missed by the software are re-examined and added into the survey data tables to extract as much useful information from the spectra as possible.
Example objects are shown for each of the primary emission lines used to detect objects in the survey (H\(\alpha\), [O iii]\(\lambda\)5007, and [O ii]\(\lambda\)3727) in each of the narrow-band filters. Spectra of additional objects detected by non-primary emission lines are also shown, as are spectra of several quasars detected in the pilot study. These objects demonstrate the wide range of objects in the SFACT catalog as well as the power and versatility of the SFACT survey to detect emission lines in even the faintest of sources. The redshift distribution of the pilot study is also examined, showing the range of distances at which these objects are detected. Finally, the sample is placed on two different emission-line diagnostic diagrams to distinguish the star-forming galaxies from the AGN.
The 533 SFACT sources detected in the three pilot-study fields are tabulated and spectral information for the 453 objects that have follow-up spectra is presented.
Figure 10: We present the [O iii]-detected sample plotted on a log([O iii]/H\(\beta\)) vs. log([O ii]/[O iii]) emission-line diagnostic diagram. The symbols are as in Figure 9 except this time orange left pointing triangles are [O ii] Category 2 re-examination measurements. Characteristic error bars for the two groups of objects (WRALF+Category 1 or Category 2) are shown. The dashed-dotted line is from Baldwin et al. (1981) Figure 1 and fits emission-line ratios of approximately solar metallicity H ii regions.
415 of these 453 are confirmed emission-line objects, giving the pilot study a 91.6% success rate for ELG detection. This rate is expected to increase for the later SFACT fields, as the survey's methodology has improved greatly over time.
As of the writing of this paper, spectra have been obtained for many additional SFACT survey fields over the course of several observing runs (November 2017 through March 2022). In total we have followed up on approximately 3800 potential ELGs. There are currently 35 additional SFACT fields with complete imaging data, most of which have some amount of spectroscopic follow-up. Furthermore, these fields have the benefit of improvements made in our survey methods based on lessons learned during the analysis of the pilot-study fields. With hundreds of additional SFACT targets awaiting processing and measurement, future papers will have a significantly improved sample size with which to conduct interesting analyses.
We would like to thank the College of Arts and Sciences at Indiana University for their long term support of the WIYN Observatory. The Department of Astronomy and the Office of the Vice Provost for Research at Indiana University have also helped to support this project with additional funds.
|
2307.07063 | Bootstrapping Vision-Language Learning with Decoupled Language
Pre-training | We present a novel methodology aimed at optimizing the application of frozen
large language models (LLMs) for resource-intensive vision-language (VL)
pre-training. The current paradigm uses visual features as prompts to guide
language models, with a focus on determining the most relevant visual features
for corresponding text. Our approach diverges by concentrating on the language
component, specifically identifying the optimal prompts to align with visual
features. We introduce the Prompt-Transformer (P-Former), a model that predicts
these ideal prompts, which is trained exclusively on linguistic data, bypassing
the need for image-text pairings. This strategy subtly bifurcates the
end-to-end VL training process into an additional, separate stage. Our
experiments reveal that our framework significantly enhances the performance of
a robust image-to-text baseline (BLIP-2), and effectively narrows the
performance gap between models trained with either 4M or 129M image-text pairs.
Importantly, our framework is modality-agnostic and flexible in terms of
architectural design, as validated by its successful application in a video
learning task using varied base modules. The code will be made available at
https://github.com/yiren-jian/BLIText. | Yiren Jian, Chongyang Gao, Soroush Vosoughi | 2023-07-13T21:08:15Z | http://arxiv.org/abs/2307.07063v4 | # Bootstrapping Vision-Language Learning with Decoupled Language Pre-training
###### Abstract
We present a novel methodology aimed at optimizing the application of frozen large language models (LLMs) for resource-intensive vision-language (VL) pre-training. The current paradigm uses visual features as prompts to guide language models, with a focus on determining the most relevant visual features for corresponding text. Our approach diverges by concentrating on the language component, specifically identifying the optimal prompts to align with visual features. We introduce the Prompt-Transformer (P-Former), a model that predicts these ideal prompts, which is trained exclusively on linguistic data, bypassing the need for image-text pairings. This strategy subtly bifurcates the end-to-end VL training process into an additional, separate stage. Our experiments reveal that our framework significantly enhances the performance of a robust image-to-text baseline (BLIP-2), and effectively narrows the performance gap between models trained with either 4M or 129M image-text pairs. Importantly, our framework is modality-agnostic and flexible in terms of architectural design, as validated by its successful application in a video learning task using varied base modules.
## 1 Introduction
The field of vision-language (VL) learning seeks to create AI systems that mimic human cognition, processing the world through multi-modal inputs. Core research areas in VL include visual question answering (VQA), image captioning, image-text retrieval, and visual reasoning. VL learning began with task-specific learning [3; 62] and has since progressed to large-scale image-text pre-training paired with task-specific fine-tuning [48]. Furthermore, contemporary studies have begun exploring the use of off-the-shelf frozen pre-trained large language models (LLMs) in VL models [2; 34; 56], which have delivered impressive results in language generation tasks such as VQA and image captioning.
Present VL models utilizing frozen LLMs are characterized by shared design elements: visual encoders, visual-to-language modules, and frozen LLMs. Except for Flamingo [2], which employs a visual signal at each layer of the frozen LLM via gated cross-attention, the majority of works [6; 34; 40; 44; 56] feed aligned visual features as soft language prompts [29] into the frozen LLMs (see Figure 1_left_). The models are then trained end-to-end with an image-conditioned language generation loss using large-scale image-text pairs. This conceptually simple and implementation-wise straightforward design has proven effective. BLIP-2 [34] demonstrates that decoupling the end-to-end training into two stages is crucial for state-of-the-art results. The second stage of training involves standard end-to-end learning, while the first stage of training of BLIP-2 utilizes a learnable module (called Query-Transformer/Q-Former) to selectively choose/query visual features relevant to the corresponding text. This reduces 196 features of an entire image to the 32 most relevant visual features that will be sent into the following parts of the model. Stage 1 of BLIP-2 can be viewed as a refined learnable version of early VL works [3; 38; 69] that use object detectors like Faster-RCNN [17] to select features from regions of objects (objects in images are likely to be mentioned and thus
relevant to the accompanying text). We refer to this strategy as "forward-decoupling" since it uses a heuristic to learn/select which useful features are forward-passed into the subsequent model to mitigate challenges in the end-to-end optimization (shown in Figure 1_middle_).
We provide a novel insight to mitigate the challenges in end-to-end optimization by introducing "backward-decoupling" during back-propagation. For a caption \(t\) (e.g., "_a cat wearing sunglasses"_) from VL pre-training dataset \(\mathcal{D}_{\text{VL}}\), the optimizer first finds the optimal continuous prompt \(p\) for a fixed decoder LLM \(D_{\text{language}}\): \(p=\operatorname*{argmin}_{p}\mathcal{L}(D_{\text{language}}(p),t)\), before further back-propagating into the vision-to-language module (e.g., Q-Former in BLIP-2, or MLP in ClipCap) and the vision encoder (shown in Figure 1_right_). We realize that the first stage, optimization of \(p\) given \(D_{\text{language}}\) and \(t\), is purely linguistic and does not restrict the learning text examples from \(\mathcal{D}_{\text{VL}}\). Thus, we propose to learn this part independently with the available sentence dataset.
While it's not feasible to learn individual prompts \(p\) for each sentence \(t\) due to the infinite number of possible sentences, we propose to parameterize prompt \(p\) by a Prompting-Transformer (P-Former): \(p=E_{\text{P-Former}}(t)\). This effectively transforms the learning of \(p\) given \(D_{\text{language}}\) and \(t\) into learning \(E_{\text{P-Former}}\) by \(\operatorname*{argmin}_{E_{\text{P-Former}}}\mathcal{L}(D_{\text{language}}( E_{\text{P-Former}}(t)),t)\). Essentially, this is an autoencoder with the causal LLM \(D_{\text{language}}\) as the decoder. As for P-Former, we use a bidirectional Transformer and the [CLS] representation as the bottleneck. Besides the reconstruction loss, we add a contrastive loss to discriminate each sample. Such a design makes \(E_{\text{P-Former}}\) a semantic sentence embedding model like SimCSE [16] (i.e., semantically similar sentences have similar representations). Once \(E_{\text{P-Former}}\) is learned, \(p=E_{\text{P-Former}}(t)\) will be the "reference prompt" for LLM \(D_{\text{language}}\) to generate \(t\) auto-regressively. The training overview and P-Former details are shown in Figure 2.
Returning to the VL pre-training, we add a complementary loss to minimize the distance between aligned visual features (being used as language prompts) and the "reference prompt" given by P-Former. We expect this to improve the VL pre-training in two ways: (1) We further decouple the VL learning into another stage, as Li et al. [34] suggest that multi-stage training is important to mitigate alignment challenges. (2) A semantically rich space is learned for aligned visual features/prompts by a SimCSE design for our P-Former trained with the unimodal sentence dataset (i.e., semantically similar images are encouraged to align to "reference prompts" with close representations).
Our proposed framework only adds a learning objective on tensors feeding into LLMs as prompts (a.k.a images/multi-modalities as foreign languages [6; 59]). Therefore, our method is agnostic to the input modalities, X encoders, and X-to-language modules (where X can be images, videos, and audio). This could be especially salient for videos, which have much less high-quality paired data [15] compared to image-text pairs. And because P-Former is only trained with the LLM, there is no need to re-train the P-Former for different modalities.
In our experiments, we take BLIP-2 as an example and show that our proposed framework improves this latest VL method by great margins in various benchmarks of VQA and image captioning.
Figure 1: _left:_ End-to-end training of X-to-language models (where X can be images, videos, or audio), in which aligned input features are provided as prompts to LLMs. Examples include Frozen [56] and ClipCap [44]. _middle:_ “Forward-decoupled training” as demonstrated in BLIP-2 [34] and X-LLM [6]. For instance, in BLIP-2, the Q-Former is first trained to extract relevant features from the image encoder, and then the selected features are used as prompts for the LLM for end-to-end learning. _right:_ We propose “backward-decoupled training”, which initially identifies the “reference prompt” for the LLM to generate the target text, followed by mapping input features to the “reference prompt”.
In Section 4.5, we demonstrate its effectiveness in other modalities (i.e., video) using different vision-to-language modules (i.e., plain Transformer over Q-Former).
We anticipate a growing body of future work within the paradigm of "images/multi-modalities as language prompts with frozen LLMs" due to its simplicity and effectiveness, as demonstrated by BLIP-2. For example, a concurrent work X-LLM [6] extends BLIP-2 from images to videos/speech with more advanced LLMs, augmenting BLIP-2's vision-to-language module Q-Former with Adapters. Because our proposed method is agnostic to input modalities, encoders, and X-to-language modules, it should seamlessly apply to future work within this paradigm of "images/multi-modalities as language prompts with frozen LLMs".
## 2 Related Work
**End-to-End Vision-Language Learning** Most end-to-end VL pre-training models can be broadly classified into two categories: dual-encoder and fusion-encoder models. Dual-encoder models employ two separate networks for vision and language, with the modality interaction computed via dot-product between visual and linguistic features (e.g., ALIGN [22] and CLIP [48]). Due to the efficient computation of vector dot-product through feature caching, dual-encoder models are effective and highly efficient for image-text retrieval tasks. However, their performance in VQA, captioning, and visual reasoning tasks is limited due to the lack of fine-grained alignment between the two modalities.
Fusion-encoder models, such as ALBEF [32], VLMo [4], and CoCa [67], introduce new fusion-Transformer layers to model deep interactions between the two modalities in addition to vision and language encoders. Common designs include concatenating visual and linguistic features before feeding them into a self-attentive Transformer [4; 7; 8; 14; 19; 20; 25; 27; 35; 37; 38; 52; 54; 57; 58; 59; 61; 64; 66; 69] or cross-attending vision and language encoders to compute fused features [2; 11; 12; 30; 32; 33; 42; 55; 63; 67]. The vision encoder can range from simple linear embeddings [27] and ConvNets [19; 20; 25; 52; 58; 61; 66] to Transformers [4; 11; 12; 32; 33; 57; 59; 64; 67], an offline pre-trained object detector like Faster-RCNN [7; 8; 14; 35; 37; 38; 54; 69], or an ensemble of models [41]. The language encoder can be initialized with a BERT-based [26] model or as part of a fusion-Transformer [4; 11; 12; 59; 67; 68]. Most methods utilize three types of losses during pre-training: image-text contrastive (ITC) loss, image-text matching (ITM) loss, and mask language modeling (MLM) loss or language generation (ITG) loss. Fusion-encoder models have shown superior performance in VQA and captioning tasks, though they are less efficient in retrieval tasks. A thorough review of the recent advancements in VL pre-training can be found in Gan et al. [15].
**Vision-Language Learning with Frozen Language Models** Large language models, pre-trained on large text corpora, show exceptional performance in language generation tasks.
Figure 2: Overview of P-Former. _left:_ The P-Former training resembles an autoencoder, with the bidirectional P-Former as the encoder and a causal LLM (frozen) as the decoder. The objective is to reconstruct input text auto-regressively. The [CLS] representation serves as sentence embeddings, which are projected back to the length of prompts. The contrastive loss at [CLS] mirrors the training of SimCSE [16]. A regularization vocabulary loss is utilized to encourage the prompts to be close to the vocabulary embeddings. _right:_ Overview of bootstrapping VL pre-training with the trained P-Former. The alignment loss introduced by P-Former is agnostic to input modalities, encoders, and X-to-language modules (i.e., modules within the dashed box can be flexible). P-Former is only used during training and not during inference.
Therefore, incorporating these large frozen language models into VL models can be particularly beneficial for vision-language generation tasks, such as VQA and captioning. Flamingo [2] incorporates visual signals into each layer of a large frozen LLM using cross-attention. In contrast, Frozen [56] fine-tunes the image encoder to align visual features as soft prompts, which are input into the frozen language model. Recently, BLIP-2 [34] introduced an additional vision-to-language adaptation module Q-former (in conjunction with the frozen ViT [10] and an LLM), proposing a two-stage training process to mitigate the challenges in learning visual-language alignment. The first stage of BLIP-2 training optimizes the Q-former to extract beneficial visual features using ITC, ITM, and ITG losses. In the second stage of BLIP-2 training, all three modules (ViT, Q-former, and LLM) are trained end-to-end with only the parameters in Q-former updated. Despite being trained on 129M image-text pairs and with affordable computational resources, BLIP-2 demonstrates competitive results across multiple benchmarks. Finally, a concurrent work on visual chat-bot X-LLM [6] also adopts a similar architectural design philosophy to BLIP-2. _Our proposed framework with P-Former can be applied to models under this paradigm that use soft prompts as the visual-language interface (e.g., Frozen, BLIP-2, X-LLM, etc.)._
**Multi-Modal Auxiliary Data Learning** Besides using off-the-shelf pre-trained vision encoders (ViT and Faster-RCNN [17; 49]) and language models, it is also interesting to explore how unimodal training can enhance multi-modal models. VLMo [4] demonstrated the benefits of conducting stage-wise pre-training with image-only and text-only data for their proposed model architecture. Li et al. [36] proposed using object tags from detectors as anchor points to bridge unpaired images and text, while Zhou et al. [72] formed pseudo-image-text pairs using an image-text retrieval alignment. Video-language models also leverage image-text pairs by repeating images to create static videos, constructing auxiliary paired datasets for pre-training. Jian et al. [23] showed that contrastive visual learning could also enhance contrastive sentence embeddings, a purely linguistic task. _We also show how pure language training can enhance a multi-modal model._
## 3 Methodology
**Problem Formulation** Given an image-text dataset \(\{I,t\}\in\mathcal{D}_{\text{VL}}\) and a unimodal language dataset composed purely of sentences \(\{t\}\in\mathcal{D}_{\text{L}}\), our objective is to optimize the pre-training of a vision-language (VL) model. This model consists of a pre-trained vision encoder \(E_{\text{vision}}\), a vision-to-language adaptation module \(\Theta_{\text{V}\rightarrow\text{L}}\), and a frozen pre-trained language decoder \(D_{\text{language}}\). The goal is to minimize the image-conditioned language generation loss, given that the vision encoder \(E_{\text{vision}}\) is also frozen:
\[\operatorname*{argmin}_{\Theta_{\text{V}\rightarrow\text{L}}}\,\mathcal{L}_{\text{CrossEntropy}}(D_{\text{language}}(\Theta_{\text{V}\rightarrow\text{L}}(E_{\text{vision}}(I))),t) \tag{1}\]
As Li et al. [34] have noted, end-to-end optimization of Equation 1, visualized in Figure 1 _left_, can sometimes lead to catastrophic forgetting in LLMs.
### Backward-Decoupling and Soft Prompt Pre-training (Training P-Former)
Let us denote the adapted visual features as \(p=\Theta_{\text{V}\rightarrow\text{L}}(E_{\text{vision}}(I))\), which serve as soft prompts for the LLM \(D_{\text{language}}\). During optimization, Equation 1 can be decomposed into two parts, visualized in Figure 1 _right_:
\[\underset{p}{\operatorname{argmin}}\,\mathcal{L}_{\text{CrossEntropy}}(D_{ \text{language}}(p),t) \tag{2}\]
\[\operatorname*{argmin}_{\Theta_{\text{V}\rightarrow\text{L}}}\,\mathcal{L}_{\text{MSE}}(\Theta_{\text{V}\rightarrow\text{L}}(E_{\text{vision}}(I)),p) \tag{3}\]
Equation 2 essentially asks _"What is the optimal soft prompt \(p\) that enables the auto-regressive language model \(D_{\text{language}}\) to generate the sentence \(t\)?"_ Like all gradient-based deep learning models, depending on the training dataset, learning \(p\) given \(\{D_{\text{language}},t\}\) could lead to different sub-optimal points (a conventional deep learning problem is usually learning \(D_{\text{language}}\) given \(\{p,t\}\)). End-to-end
learning of Equation 1 can only use text \(t\) from image-text dataset \(\mathcal{D}_{\text{VL}}\) to update its intermediate variable \(p\). However, we observe that the learning of Equation 2 involves no image, thus allowing us to leverage abundantly available unimodal sentences in \(\mathcal{D}_{\text{L}}\).
Learning \(p\) for each \(t\) in \(\mathcal{D}_{\text{L}}\) without constraint is intractable. Thus, we model \(p\) by a bidirectional Transformer \(E_{\text{P-Former}}\) (named Prompt-Former, or P-Former) \(p=E_{\text{P-Former}}(t)\). Specifically, we use the output [CLS] hidden state of BERT as a compact representation for \(t\) and project it back to the token length of \(p\). Equation 2 can thus be reformulated as:
\[\operatorname*{argmin}_{E_{\text{P-Former}}}\mathcal{L}_{\text{CrossEntropy}}(D _{\text{language}}(E_{\text{P-Former}}(t)),t) \tag{4}\]
In essence, Equation 4 describes the training of an autoencoder with the bidirectional P-Former \(E_{\text{P-Former}}\) serving as the encoder, and the auto-regressive LLM \(D_{\text{language}}\) as the decoder. To enhance our model, we include an unsupervised contrastive loss \(\mathcal{L}_{\text{contrast}}\), acting on the [CLS] representations of sentences to differentiate distinct instances. This loss, combined with our P-Former design, emulates the training of SimCSE [16], a semantic sentence embedding model (i.e., for semantically similar image-text pairs, the predicted prompts by P-Former should also be close). Furthermore, we introduce a regularization loss \(\mathcal{L}_{\text{vocab}}\) to minimize the distance between each token in \(p\) and the closest embedding of the LLM's (\(D_{\text{language}}\)) vocabularies. The final objective becomes:
\[\operatorname*{argmin}_{E_{\text{P-Former}}}(\mathcal{L}_{\text{CrossEntropy}}( D_{\text{language}}(E_{\text{P-Former}}(t)),t)+\mathcal{L}_{\text{contrast}}+ \mathcal{L}_{\text{vocab}}) \tag{5}\]
A comprehensive view of the P-Former's architecture and learning losses is presented in Figure 2 _left_. We emphasize that the optimization of Equation 5 and P-Former training rely only on the text. Upon training the P-Former, Equation 3 can be reformulated as:
\[\operatorname*{argmin}_{\Theta_{\text{V}\rightarrow\text{L}}}\,\mathcal{L}_{\text{MSE}}(\Theta_{\text{V}\rightarrow\text{L}}(E_{\text{vision}}(I)),E_{\text{P-Former}}(t))\equiv\operatorname*{argmin}_{\Theta_{\text{V}\rightarrow\text{L}}}\,\mathcal{L}_{\text{alignment}} \tag{6}\]
This new form, depicted in Figure 2 _right_, minimizes the distance between the aligned visual features and the prompts predicted by the trained P-Former, effectively aligning visual-linguistic representations.
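To make Section 3.1 concrete, the following PyTorch sketch shows a P-Former module and the three terms of Equation 5. The module sizes (32 prompts, a 2560-dimensional LLM embedding space matching OPT\({}_{\text{2.7B}}\)) and all helper names are assumptions for illustration, not the authors' released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PFormer(nn.Module):
    """Bidirectional encoder (e.g., BERT-base) whose [CLS] state is
    projected back to a sequence of soft prompts."""
    def __init__(self, encoder, hidden=768, n_prompts=32, llm_dim=2560):
        super().__init__()
        self.encoder = encoder
        self.proj = nn.Linear(hidden, n_prompts * llm_dim)
        self.shape = (n_prompts, llm_dim)

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids, attention_mask=attention_mask
                           ).last_hidden_state[:, 0]          # [B, hidden]
        return self.proj(cls).view(-1, *self.shape), cls

def reconstruction_loss(prompts, frozen_llm, target_ids):
    """First term of Eq. 5: the frozen causal LLM must regenerate the
    sentence auto-regressively given only the predicted prompts."""
    tok_emb = frozen_llm.get_input_embeddings()(target_ids)
    inputs = torch.cat([prompts, tok_emb], dim=1)
    ignore = torch.full(prompts.shape[:2], -100, dtype=torch.long,
                        device=target_ids.device)             # mask prompts
    labels = torch.cat([ignore, target_ids], dim=1)
    return frozen_llm(inputs_embeds=inputs, labels=labels).loss

def simcse_loss(z1, z2, tau=0.05):
    """SimCSE-style contrastive term: z1, z2 are [CLS] embeddings from two
    dropout-perturbed forward passes over the same sentences."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0), device=z1.device))

def vocab_loss(prompts, vocab_emb):
    """Regularizer pulling each prompt token toward its nearest entry in
    the LLM's vocabulary embedding table."""
    d = torch.cdist(prompts.flatten(0, 1), vocab_emb)         # [B*n, |V|]
    return d.min(dim=-1).values.mean()
```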
### Preliminary: BLIP-2 Forward-Decoupled Training
While our proposed framework is flexible with regard to the specific architecture of \(\Theta_{\text{V}\rightarrow\text{L}}\) or the learning strategy deployed, for illustrative purposes, we employ BLIP-2 as a case study to demonstrate the applicability of our approach with state-of-the-art learning methods, owing to the strong performance and reproducibility of BLIP-2. In the context of BLIP-2, \(E_{\text{vision}}\) is a ViT-g, \(\Theta_{\text{V}\rightarrow\text{L}}\) is referred to as Q-Former, and \(D_{\text{language}}\) is an OPT\({}_{2.7\text{B}}\). BLIP-2 proposes a two-stage pre-training process, with the initial stage involving the pre-training of \(\Theta_{\text{V}\rightarrow\text{L}}\) by:
\[\operatorname*{argmin}_{\Theta_{\text{V}\rightarrow\text{L}}}(\mathcal{L}_{\text{ITC}}+\mathcal{L}_{\text{ITM}}+\mathcal{L}_{\text{ITG}}) \tag{7}\]
The second stage then trains the full pipeline end-to-end with the image-grounded text generation loss \(\mathcal{L}_{\text{ITG}}\) alone, with only the Q-Former parameters updated. With the trained P-Former, we augment both stages with the alignment loss of Equation 6, weighted by \(\omega_{1}\) and \(\omega_{2}\), respectively:
\[\operatorname*{argmin}_{\Theta_{\text{V}\rightarrow\text{L}}}(\mathcal{L}_{\text{ITC}}+\mathcal{L}_{\text{ITM}}+\mathcal{L}_{\text{ITG}}+\omega_{1}\mathcal{L}_{\text{alignment}}) \tag{8}\]
\[\operatorname*{argmin}_{\Theta_{\text{V}\rightarrow\text{L}}}(\mathcal{L}_{\text{ITG}}+\omega_{2}\mathcal{L}_{\text{alignment}}) \tag{9}\]
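A sketch of how the alignment term plugs into these stage objectives is given below. The function signature is hypothetical, and it assumes the adapted visual features and the P-Former's reference prompts share the same shape:

```python
import torch
import torch.nn.functional as F

def alignment_loss(visual_prompts, p_former, input_ids, attention_mask):
    """Sketch of L_alignment (Eq. 6): an MSE between the adapted visual
    features (e.g., projected Q-Former outputs) and the reference prompts
    predicted by the frozen, pre-trained P-Former for the paired caption."""
    with torch.no_grad():                  # P-Former supplies fixed targets
        ref_prompts, _ = p_former(input_ids, attention_mask)
    return F.mse_loss(visual_prompts, ref_prompts)

# Stage objectives with the weights reported below (omega_1=10, omega_2=100):
# stage1_loss = l_itc + l_itm + l_itg + 10.0 * align
# stage2_loss = l_itg + 100.0 * align
```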
### Model Pre-training
**Training Dataset** We employ a 12M subset of the pseudo-labeled [33] LAION dataset [50], using only the sentences, for pre-training the P-Former. For VL pre-training, we use the widely adopted academic setting (since academic institutions lack the resources available to industry researchers to use very large datasets) with approximately 4M image-text pairs. This set comprises the MSCOCO [39], VG [28], CC3M [51], and SBU [45] datasets.
**Pre-training Models** Our method is universally applicable to any vision-to-text models that utilize prompts as the interface. Owing to its impressive performance and reproducibility, we chose BLIP-2 as the base model for our primary experiments. Thus, for VL pre-training, the image encoder \(E_{\text{vision}}\) is a ViT-g/14 from EVA-CLIP [13], the LLM decoder \(D_{\text{language}}\) is an OPT\({}_{\text{2.7B}}\)[70], and the vision-to-language adaptation module is a Q-Former [34]. The Q-Former is initialized by BERT-base with 32 learnable queries. Our newly proposed P-Former is a base Transformer initialized by BERT-base.
**Pre-training Details** The P-Former is trained on a system with 3 \(\times\) RTX-A6000 (48GB) GPUs, using PyTorch [46]. We trained for five epochs with a linear warm-up and cosine scheduling, using a batch size of 384 (\(3\times 128\)), and AdamW as the optimizer. The initial learning rate is set to \(1e^{-4}\), with a minimum learning rate of \(1e^{-5}\), a warm-up learning rate of \(1e^{-6}\), and \(2000\) warm-up steps. The VL pre-training is performed on a server equipped with 8 \(\times\) RTX-A6000 (48GB) GPUs, using PyTorch. We developed the code based on the LAVIS project [31]. Predominantly, we employed the default configuration files provided by BLIP-2 of LAVIS. Both the stage 1 and stage 2 training ran for 10 epochs with linear warm-up and cosine scheduling, using a batch size of 1024 (\(8\times 128\)), and AdamW as the optimizer. The weight decay is set to \(0.05\), the initial learning rate is \(1e^{-4}\), the minimum learning rate is \(1e^{-5}\), and the warm-up learning rate is \(1e^{-6}\). The key distinction is that stage 1 and stage 2 incorporate \(5000\) and \(2000\) warm-up steps, respectively. We set \(\omega_{1}=10\) and \(\omega_{2}=100\) while training BLIP-2 OPT\({}_{\text{2.7B}}\) with our P-Former.
**Computational Overhead Considerations** Incorporating \(\mathcal{L}_{\text{alignment}}\) from Equation 8 and 9 introduces only a minimal computational overhead, attributable to an additional forward pass of the P-Former (Transformer-base) at each iteration. To illustrate, in our experimental settings using BLIP-2 OPT\({}_{\text{2.7B}}\), the training time for stage 1 saw a modest increase from 2,669 minutes to 2,743 minutes. Similarly, for stage 2, the training time increased marginally from 1,862 minutes to 1,880 minutes. Thus, our methodology's overall computational burden remains manageable despite its enhancements.
Figure 3: An overview of our framework with BLIP-2, which employs a two-stage training process. The green components represent the alignment loss and modules added by us, which do not require gradients. The blue components are part of the original BLIP-2 structure. **P-Former is solely utilized during training and is not required during the inference phase.** Our proposed framework, with P-Former, can be seamlessly applied to any models that leverage prompts as the interface for multi-modal-language communications.
## 4 Experiments
Given the impressive performance and accessibility of the BLIP-2 model, coupled with its open-source nature, we primarily employ it as our base model. We aim to demonstrate how our proposed "backward-decoupling" strategy, along with the learned P-Former, can enhance the baselines across various image-to-text generation benchmarks. In Section 4.5, we further extend the applicability of our framework to other modalities, utilizing different base models.
### Zero-shot Image-to-Text Generation
We assess the performance of our pre-trained models on zero-shot VQA, encompassing GQA [21], OKVQA [43], and VQAv2 [18], without any task-specific fine-tuning. As per BLIP-2, we append text prompts to visual prompts prior to their processing by the frozen LLM. Both for the baseline BLIP-2 and our model, the text prompt used is "Question: Short answer:". The results, as detailed in Table 1, suggest that our proposed framework significantly enhances the zero-shot VQA performance of BLIP-2 trained with 4M image-text pairs. Remarkably, the gap between the BLIP-2 trained with 4M and 129M image-text pairs is largely bridged by our method.
### Fine-tuned Image Captioning
We further fine-tune our pre-trained model for MSCOCO [39] image captioning, employing the text prompt "a photo of ". Following BLIP-2, we fine-tune the model for 5 epochs using a batch size of 1024 (\(8\times 128\)) and AdamW with an initial learning rate of \(1e^{-5}\), a minimum learning rate of \(0\), a warm-up learning rate of \(1e^{-8}\), and \(1000\) warm-up steps, with linear warm-up and cosine scheduling. We evaluate our fine-tuned model on the Karpathy test split of MSCOCO. Zero-shot transfer results on the NoCaps dataset [1] are also reported. As shown in Table 2, our framework improves BLIP-2 in all metrics, with greater improvements in CIDEr compared to SPICE.
### Zero-shot Image-Text Retrieval
While our proposed method primarily focuses on refining visual prompts for a frozen LLM to generate corresponding text, it may not prove as beneficial for image-text retrieval tasks (the ITC and ITM losses are principally responsible for these tasks). Nevertheless, we present results on zero-shot
\begin{table}
\begin{tabular}{l l|c c c c|c c} \hline \hline \multirow{2}{*}{Models} & \#Pretrain & \multicolumn{3}{c|}{NoCaps Zero-shot (validation set)} & \multicolumn{2}{c}{COCO Fine-tuned} \\ & Image-Text & in-domain & near-domain & out-domain & overall & \multicolumn{2}{c}{Karpathy test} \\ & C & S & C & S & C & S & C & S & B@4 & C \\ \hline OSCAR [38] & 4M & - & - & - & - & 80.9 & 11.3 & 37.4 & 127.8 \\ VinVL [69] & 5.7M & 103.1 & 14.2 & 96.1 & 13.8 & 88.3 & 12.1 & 95.5 & 13.5 & 38.2 & 129.3 \\ BLIP [33] & 129M & 114.9 & 15.2 & 112.1 & 14.9 & 115.3 & 14.4 & 113.2 & 14.8 & 40.4 & 136.7 \\ OFA [58] & 20M & - & - & - & - & - & - & - & 43.9 & 145.3 \\ Flamingo [2] & 1.8B & - & - & - & - & - & - & - & 138.1 \\ SimVLM [61] & 1.8B & 113.7 & - & 110.9 & - & 115.2 & - & 112.2 & - & 40.6 & 143.3 \\ \hline OPT\({}_{2.7\text{B}}\) BLIP-2 [34] & 4M & 115.3 & 15.0 & 111.0 & 14.6 & 112.5 & 14.0 & 111.9 & 14.5 & 41.8 & 140.4 \\ OPT\({}_{2.7\text{B}}\) Ours & 4M & 118.3 & 15.3 & 114.7 & 14.9 & 114.1 & 14.1 & 115.1 & 14.8 & 42.3 & 141.8 \\ OPT\({}_{2.7\text{B}}\) BLIP-2\({}^{\dagger}\)[34] & 129M & **123.0** & **15.8** & **117.8** & **15.4** & **123.4** & **15.1** & **119.7** & **15.4** & **43.7** & **145.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison with different captioning methods on NoCaps and COCO. All methods optimize the cross-entropy loss during fine-tuning. C: CIDEr, S: SPICE, B: BLEU. \({}^{\dagger}\): numbers taken from Li et al. [34].
\begin{table}
\begin{tabular}{l l|c c c c} \hline \hline \multirow{2}{*}{Models} & \#Pretrain & \multicolumn{2}{c}{VQAv2} & OK-VQA & GQA \\ & Image-Text & val & test-dev & test & test-dev \\ \hline FewVLM [24] & 9.2M & 47.7 & - & 16.5 & 29.3 \\ Frozen [56] & 3M & 29.6 & - & 5.9 & - \\ VLKD [9] & 3M & 42.6 & 44.5 & 13.3 & - \\ Flamingo3B [2] & 1.8B & - & 49.2 & 41.2 & - \\ \hline OPT\({}_{2.7\text{B}}\) BLIP-2 [34] & 4M & 46.8 & 45.6 & 25.9 & 30.5 \\ OPT\({}_{2.7\text{B}}\) Ours & 4M & 52.6 & 52.2 & 30.0 & 34.0 \\ OPT\({}_{2.7\text{B}}\) BLIP-2\({}^{\dagger}\)[34] & 129M & **53.5** & **52.3** & **31.7** & **34.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with different methods on zero-shot VQA. \({}^{\dagger}\): numbers taken from Li et al. [34].
MSCOCO, and zero-shot Flickr30K [47] image-to-text and text-to-image retrievals. We compare two models trained with \(\mathcal{L}_{\text{BLIP2-stage1}}\) (ITC, ITM and ITG) and \(\mathcal{L}_{\text{BLIP2-stage1}}+\mathcal{L}_{\text{alignment}}\), without any further task-specific fine-tuning. As expected, Table 3 reveals that the newly introduced \(\mathcal{L}_{\text{alignment}}\) offers limited benefits for retrieval tasks. However, it does not negatively impact the performance.
### Ablation Studies
**Impact of Alignment Loss Weights** We investigate the influence of \(\omega_{1}\) and \(\omega_{2}\) in Equations 8 and 9. \(\omega_{1}=0\) and \(\omega_{2}=0\) corresponds to BLIP-2, and \(\omega_{1}=10\) and \(\omega_{2}=100\) to our default configuration of BLIP-2 + P-Former. The alignment loss introduced by the P-Former proves beneficial in both stages of VL pre-training, as shown in Table 4.
**Alternate Language Model** In this section, we substitute the decoder-based \(\text{OPT}_{\text{2.7B}}\) model with an encoder-decoder-based FLAN-T5XL as the new LLM. The experiments are conducted with a limited computational budget on 3 \(\times\) RTX-A6000 GPUs, for 5 epochs in both stage 1 and stage 2. The results, displayed in Table 5, verify the effectiveness of our framework with another LLM.
**Effect of P-Former's Pre-training Sentence Datasets** In our primary experiments, we utilize a dataset containing 12M sentences for P-Former training. We investigate the impact of the pre-training sentence dataset by re-training the P-Former with 4M sentences drawn from our VL pre-training datasets. We then train BLIP-2 + P-Former and report zero-shot VQA results in Table 6. This examination underscores that both the implicit decoupling of BLIP-2's two-stage training into a three-stage training (the pre-training of the P-Former) and the use of additional unimodal sentences contribute to the improved outcomes.
### Video Captioning
Our framework is modality-agnostic with respect to the visual encoder and vision-to-language adaptor, making it applicable to other modalities, such as video. Consequently, we establish a video learning
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline \multirow{2}{*}{P-Former} & \#Pretrain & VQAv2 & OK-VQA & GQA \\ & Sentences & val & test & test-dev \\ \hline \(\times\) & - & 46.8 & 25.9 & 30.5 \\ ✓ & 4M & 51.7 & 28.2 & 32.3 \\ ✓ & 12M & **52.6** & **30.0** & **34.0** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablations on the sentence datasets used to train the P-Former (using \(\text{OPT}_{\text{2.7B}}\) as the LLM). The first row, without the P-Former, is the BLIP-2 baseline.
\begin{table}
\begin{tabular}{l l|c c|c c} \hline \hline \multirow{2}{*}{Task} & Pre-training & \multicolumn{2}{c|}{Image \(\rightarrow\) Text} & \multicolumn{2}{c}{Text \(\rightarrow\) Image} \\ & objectives & R@1 & R@5 & R@1 & R@5 \\ \hline Flickr30K & \(\mathcal{L}_{\text{BLIP2-stage1}}\) & **94.3** & **99.8** & 82.9 & 95.5 \\ & \(\mathcal{L}_{\text{BLIP2-stage1}}\) + \(\mathcal{L}_{\text{alignment}}\) & 93.7 & 99.7 & **83.0** & **95.8** \\ \hline MSCOCO & \(\mathcal{L}_{\text{BLIP2-stage1}}\) & 78.4 & 93.8 & **60.5** & **83.0** \\ & \(\mathcal{L}_{\text{BLIP2-stage1}}\) + \(\mathcal{L}_{\text{alignment}}\) & **78.7** & **94.5** & 60.4 & 82.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Image-to-text and text-to-image retrieval on MSCOCO and (zero-shot) Flickr30K, with and without the alignment loss \(\mathcal{L}_{\text{alignment}}\).
pipeline, with the vision encoder set as a frozen I3D [5] video encoder, the vision-to-language adaptor as a Transformer-base, and the LLM decoder as the OPT\({}_{2.7\text{B}}\) (also frozen). We then train this model on the VATEX [60] English training set and evaluate it on the validation set. This dataset contains 26K videos for training. The experiments are conducted on an RTX-A6000. Initially, we train the model solely using \(\mathcal{L}_{\text{alignment}}\) for 10 epochs with the P-Former, followed by end-to-end learning with \(\mathcal{L}_{\text{ITG}}\) for an additional 10 epochs.
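A schematic of this two-stage video recipe is sketched below in PyTorch-style pseudocode. The module and function names are ours; the frozen modules (I3D, P-Former, OPT) are assumed to be available as callables, a HuggingFace-style `inputs_embeds`/`labels` signature for the language model is an assumption, and batching and caption tokenization details are omitted.

```python
import torch

def train_video_captioner(adaptor, i3d, pformer, opt_lm, loader, epochs=10):
    """Stage A: fit the adaptor to the P-Former prompts (L_alignment only).
    Stage B: end-to-end captioning with the language-modeling loss (L_ITG).
    i3d, pformer and opt_lm stay frozen; only the Transformer adaptor trains."""
    optim = torch.optim.AdamW(adaptor.parameters(), lr=1e-4)
    for stage in ("alignment", "itg"):
        for _ in range(epochs):
            for video, caption_ids in loader:
                with torch.no_grad():
                    feats = i3d(video)               # frozen video features
                    target = pformer(caption_ids)    # "ideal" prompt for the caption
                prompts = adaptor(feats)             # trainable adaptor output
                if stage == "alignment":
                    loss = (prompts - target).pow(2).mean()
                else:
                    # prompts act as a soft prefix; the frozen LM scores the caption
                    loss = opt_lm(inputs_embeds=prompts, labels=caption_ids).loss
                optim.zero_grad()
                loss.backward()
                optim.step()
```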
Our baseline, represented in Table 7, is competitive with two well-established video captioning models: MITS-VC [53] and ORG-TRL [71]. It is noteworthy that the current state-of-the-art on this benchmark, VideoCoCa [65], is trained on 10M videos, in contrast to our model, which is trained on merely 26K videos. Furthermore, the integration of P-Former and \(\mathcal{L}_{\text{alignment}}\) enhances the CIDEr score by \(4.3\) (from \(56.6\to 60.9\)).
Despite being a smaller-scale experiment without large-scale pre-training, we demonstrate that our learning framework can be generalized to another modality (i.e., video-learning), employing a different vision-language adaptor (i.e., a plain Transformer as opposed to a Q-Former).
## 5 Limitations
Despite the modality-agnostic nature of P-Former and its ability to adapt to various encoders and vision-to-language adaptors, the unimodal language pre-training remains contingent on the choice of the frozen LLM. This necessitates re-training of the P-Former for different language decoders such as OPT\({}_{2.7\text{B}}\) and FLAN-T5XL. Moreover, incorporating P-Former primarily enhances image-to-text generation tasks such as VQA and image captioning, while it falls short in improving image-text retrieval tasks. Finally, our methodology primarily assists in bootstrapping prompt-based VL pre-training, i.e., providing aligned visual features as soft prompts to LLMs. Its application to Flamingo remains unclear due to its cross-attention basis and non-open-source status. Nevertheless, given the simplicity of sequential modules of prompt-based models (as demonstrated by recent works such as Frozen, BLIP-2, X-LLM, etc.), we anticipate that our framework will be broadly applicable to most future work in the academic setting.
## 6 Conclusion and Discussion
This paper introduces a novel optimization framework for enhancing vision-language models based on large, frozen LLMs. We observe that the end-to-end image-to-text pre-training can be backwardly decoupled: first determining the "ideal prompt" that triggers the LLM to generate the target text (a step that can be trained in an unsupervised fashion), and then aligning the visual features to that prompt. To this end, we train a P-Former, which functions similarly to a semantic sentence embedding model, to predict the prompts to which visual features should align. Experimental results demonstrate that including the alignment loss (via the P-Former) in BLIP-2's framework significantly narrows the performance gap between models trained with 4M and 129M image-text pairs.
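As a schematic of the first, unimodal step, the P-Former pre-training can be sketched as follows (our naming; a HuggingFace-style `inputs_embeds`/`labels` call for the frozen LLM, with matching sequence handling, is assumed):

```python
import torch

# Stage 0 (text only): learn prompts p = PFormer(s) such that the frozen LLM
# regenerates the sentence s from p alone -- no images are involved.
optimizer = torch.optim.AdamW(pformer.parameters(), lr=1e-4)
for sentence_ids in sentence_corpus:              # tokenized sentences
    prompts = pformer(sentence_ids)               # predicted "ideal" prompts
    loss = frozen_llm(inputs_embeds=prompts, labels=sentence_ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
# Stages 1-2 then align visual features to these prompts, as described above.
```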
The key contributions of this paper are as follows:
* Contrary to most prior studies, which decouple VL pre-training into (1) learning/selecting which visual features to forward into language modules and (2) conducting end-to-end learning with the selected visual features (dubbed "forward-decoupling"), we propose an innovative perspective of VL decoupled-training from a backward viewpoint. Our approach bifurcates the training into (1) determining the "ideal prompt" for the LLM to generate the text and (2) aligning visual features to that prompt.
* We introduce the P-Former, designed to predict the "ideal prompt," which is trained using a unimodal sentence dataset. This exhibits a novel application of unimodal training in enhancing multi-modal learning.
* Our proposed training framework substantially enhances a robust and recent baseline (BLIP-2), bridging the gap between models trained with 4M and 129M image-text pairs using accessible hardware (8 \(\times\) RTX-A6000 in less than 4 days). This considerably lowers the entry barriers to VL pre-training research and is expected to attract interest from groups with limited resources.
* The proposed framework generally applies to different modalities (images, videos, audio, etc.), vision encoders, and vision-to-language modules. |
2306.10636 | Gap solitons and nonlinear Bloch states in Bose-Einstein condensates
with current-dependent interactions | We show how the chiral properties of Bose Einstein condensates subject to
current-density interactions and loaded in optical lattices can be observed in
the realization of nonlinear Bloch states, whose spectrum lacks the usual
periodic structure. Chirality is also manifested by spatially localized states,
or gap solitons, which are found for positive rotation rates of the lattice at
the energy gaps between the linear energy bands, whereas for negative rotations
they appear in the semi-infinite gap of the linear spectrum. The stability of
extended and localized states is checked through the spectrum of linear
excitations and nonlinear time evolution of perturbed states, and the
phenomenon of Bloch oscillations is explored. Our results are obtained in quasi
1D ring geometries with feasible experimental parameters. | Jintao Xu, Qian Jia, Haibo Qiu, Antonio Muñoz Mateo | 2023-06-18T20:22:53Z | http://arxiv.org/abs/2306.10636v2 | Gap solitons and nonlinear Bloch states in Bose-Einstein condensates with current-dependent interactions
###### Abstract
We show how the chiral properties of Bose Einstein condensates subject to current-density interactions and loaded in optical lattices can be observed in the realization of nonlinear Bloch states, whose spectrum lacks the usual periodic structure. Chirality is also manifested by spatially localized states, or gap solitons, which are found for positive rotation rates of the lattice at the energy gaps between the linear energy bands, whereas for negative rotations they appear in the semi-infinite gap of the linear spectrum. The stability of extended and localized states is checked through the spectrum of linear excitations and nonlinear time evolution of perturbed states, and the phenomenon of Bloch oscillations is explored. Our results are obtained in quasi 1D ring geometries with feasible experimental parameters.
## I Introduction
Synthetic gauge fields that depend locally on the density of matter have been recently realized in ultracold-atom settings [1; 2; 3; 4]. The unusual properties of these systems were theoretically predicted by means of a non-local unitary transformation that maps the density-dependent gauge into a current-dependent interaction [5; 6]. Bose-Einstein condensates (BECs) endowed with such inter-particle interactions were shown to exhibit chiral properties in a free expansion, the onset of persistent currents, or the center of mass oscillations [7; 8]; additionally, in the absence of external potential, it was demonstrated that chiral bright solitons can exist only if they move along one (but not the opposite) direction [4; 5; 9; 10], and that collisions between them differ significantly from those between regular solitons [11; 12].
Many aspects of this chiral theory remain unexplored, and very recent experimental realizations [4] open new prospects for testing the theoretical predictions. A significant subject that was previously restricted to solid-state systems, the dynamics of matter in periodic potentials, also became accessible to the field of ultracold gases with the realization of optical lattices [13; 14]. As far as we know, this subject has not yet been addressed within the chiral theory.
In this work, we focus on BECs that are loaded in optical lattices and subject to interactions that depend locally on the current density. The lattice is assumed to be imprinted on a quasi-1D ring, as generated by a tight transverse confinement of the atoms, and to be able to rotate. In particular, within the framework of a generalized Gross-Pitaevskii equation, we study the properties of nonlinear Bloch waves and gap solitons, and demonstrate their unusual properties. While the energy dispersion of the former states loses the usual periodic structure, and new non-regular Bloch states emerge, the situation of gap solitons within the energy gaps changes drastically with the direction of the rotation rate. We analyze the spectrum of linear excitations and show that the stability of both extended and localized states is also conditioned by the chiral properties. Finally, we perform numerical simulations of the equation of motion to explore the stability of stationary states against small perturbations, and also the existence of Bloch oscillations in the presence of current-density interactions.
## II Model
We assume that the condensate wave function \(\psi(x,t)\) follows a generalized Gross-Pitaevskii equation in a ring of radius \(R\):
\[i\hbar\frac{\partial\psi}{\partial t}=\left[\frac{(-i\hbar\partial_{x}-m \Omega R)^{2}}{2m}+U_{\rm latt}+\hbar\kappa J\right]\psi, \tag{1}\]
where \(U_{\rm latt}(x)=U_{0}\,\sin^{2}(\pi x/d)\) is the lattice potential, with amplitude \(U_{0}\) and lattice spacing \(d\), which can rotate with angular velocity \(\Omega\). The strength of the current-dependent mean field is measured by the dimensionless parameter \(\kappa\), and \(J(x,t)=\hbar(\psi^{*}\partial_{x}\psi-\psi\partial_{x}\psi^{*})/(i2m)\) is the current density in the lab frame. The total number of particles \(N=\int dx|\psi|^{2}\), and the total energy (which does not include an explicit dependency on the current density)
\[E=\int dx\,\psi^{*}\left[\frac{(-i\hbar\partial_{x}-m\Omega R)^{2}}{2m}+U_{ \rm latt}\right]\psi, \tag{2}\]
are conserved quantities [5]. We particularize our analysis to an \(M\)-site lattice over the ring, so that \(2\pi R=M\,d\). As an energy reference, we will make use of the lattice recoil energy \(E_{L}=\hbar^{2}(\pi/d)^{2}/(2m)\)[13].
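For a concrete numerical handle on Eq. (1), a standard split-step Fourier integrator on the periodic ring can be used. The following is a minimal NumPy sketch in units \(\hbar=m=d=1\) (so \(E_{L}=\pi^{2}/2\)); it is our illustration rather than the code behind the results presented below, and ground-state preparation (e.g., imaginary-time evolution) and normalization bookkeeping are omitted.

```python
import numpy as np

M, Npts = 10, 512                       # lattice sites, grid points
L = M                                   # ring length in units of d
x = np.linspace(0.0, L, Npts, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(Npts, d=L / Npts)

E_L = np.pi**2 / 2.0                    # recoil energy for hbar = m = d = 1
s, kappa, mOmegaR = 2.0, 1.0, 0.1       # depth U0 = s*E_L; current coupling; m*Omega*R
U_latt = s * E_L * np.sin(np.pi * x)**2

def current(psi):
    """J = Im(psi* d_x psi) in hbar = m = 1 units (spectral derivative)."""
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))
    return np.imag(np.conj(psi) * dpsi)

def step(psi, dt):
    """One second-order split step of Eq. (1)."""
    half_kin = np.exp(-0.25j * dt * (k - mOmegaR)**2)   # (p - m*Omega*R)^2 / (2m)
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))
    psi = psi * np.exp(-1j * dt * (U_latt + kappa * current(psi)))
    return np.fft.ifft(half_kin * np.fft.fft(psi))
```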
The stationary states take the form \(\psi(x,t)=\psi(x)\,\exp(-i\mu t/\hbar)\), with \(\mu\) as the energy eigenvalue. In
the search for stationary states, it is worth noticing that, from the continuity equation \(\partial_{t}|\psi|^{2}+\partial_{x}(J-|\psi|^{2}\,\Omega\,R)=0\), the current density fulfills \(J-|\psi|^{2}\,\Omega\,R=J_{0}\), where \(J_{0}\) is a constant; this transforms the equation of motion (1) into the regular time-independent Gross-Pitaevskii equation
\[(\mu-\hbar\kappa\,J_{0})\psi=\left[\frac{(-i\hbar\partial_{x}-m \Omega R)^{2}}{2m}+U_{\rm latt}+g_{\Omega}\,|\psi|^{2}\right]\psi, \tag{3}\]
where the effective constant-interaction strength is \(g_{\Omega}=\hbar\kappa\Omega R\). Furthermore, if the lattice does not rotate, the initial nonlinear equation (1) is transformed into the time-independent Schrödinger equation
\[(\mu-\hbar\kappa\,J_{0})\psi=\left[-\frac{\hbar^{2}}{2m}\partial_{x}^{2}+U_{ \rm latt}\right]\psi, \tag{4}\]
with the current constraint \(J_{0}=|\psi(x)|^{2}\,\hbar\partial_{x}\theta(x)/m\), where \(\theta=\arg\,\psi(x)\) is the phase.
The linear excitations \(\delta\psi_{j}=[u_{j},\,v_{j}]^{T}\) of stationary states \(\psi(x,t)\rightarrow\exp(-i\mu t/\hbar)\left\{\psi(x)+\sum_{j}[u_{j}(x)\, \exp(-i\omega_{j}t)\,+\,v_{j}(x)^{*}\,\exp(i\omega_{j}^{*}t)]\right\}\), with \(j\) being a mode index, can be obtained through the Bogoliubov's equations \(B\delta\psi_{j}=\hbar\omega_{j}\,\delta\psi_{j}\). The Bogoliubov matrix is given by
\[B=\begin{pmatrix}H_{\rm GP}+i\kappa B_{uu}-\mu&i\kappa B_{uv}\\ i\kappa B_{uv}^{*}&-H_{\rm GP}^{*}+i\kappa B_{uu}^{*}+\mu\end{pmatrix}, \tag{5}\]
where \(H_{\rm GP}=(-i\hbar\partial_{x}-m\Omega R)^{2}/2m+U_{\rm latt}+\hbar\kappa J\) is the Hamiltonian operator in Eq. (1), \(B_{uu}=\hbar^{2}(\psi\partial_{x}\psi^{*}-|\psi|^{2}\partial_{x})/2m\), and \(B_{uv}=-\hbar^{2}(\psi\partial_{x}\psi-\psi^{2}\partial_{x})/2m\). Linear excitations with complex frequencies, \(\Im(\omega_{j})\neq 0\), lead to the exponential growth (in the linear regime) of small perturbations on the stationary state that can produce its decay during time evolution.
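Numerically, the spectrum of Eq. (5) can be obtained by assembling dense matrix representations of the operators on the ring grid and diagonalizing the resulting non-Hermitian \(2N\times 2N\) matrix. A minimal sketch in the same \(\hbar=m=1\) units follows (our discretization choices, not the authors' code, and practical only for moderate \(N\)):

```python
import numpy as np

def bogoliubov_spectrum(psi, dx, U, mu, kappa=1.0, mOmegaR=0.0):
    """Eigenfrequencies hbar*omega_j of Eq. (5) for a stationary state psi."""
    N = psi.size
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    # dense spectral derivative matrix: (D f) = ifft(i k fft(f))
    D = np.fft.ifft(1j * k[:, None] * np.fft.fft(np.eye(N), axis=0), axis=0)
    I = np.eye(N)
    dpsi = D @ psi
    J = np.imag(np.conj(psi) * dpsi)
    P = -1j * D - mOmegaR * I                       # momentum minus m*Omega*R
    Hgp = 0.5 * (P @ P) + np.diag(U + kappa * J)
    Buu = 0.5 * (np.diag(psi * np.conj(dpsi)) - np.diag(np.abs(psi)**2) @ D)
    Buv = -0.5 * (np.diag(psi * dpsi) - np.diag(psi**2) @ D)
    B = np.block([
        [Hgp + 1j * kappa * Buu - mu * I, 1j * kappa * Buv],
        [1j * kappa * np.conj(Buv), -np.conj(Hgp) + 1j * kappa * np.conj(Buu) + mu * I],
    ])
    # eigenvalues with nonzero imaginary parts flag dynamical instability
    return np.linalg.eigvals(B)
```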
## III Extended and Localized Eigenstates
In a non-rotating linear system (\(\Omega=0\) and \(\kappa=0\)), the dispersion relations of quantum states in a ring lattice consist of energy bands separated by energy gaps [15]. The corresponding spectrum of eigenstates can be described in terms of Bloch waves \(\psi_{n,k}(x,t)=\exp[i(kx-\epsilon_{n,k}t/\hbar)]\,u_{n,k}(x)\), with eigenenergies \(\epsilon_{n,k}\), where \(n=1,\,2,\dots\) identifies the band number, and \(k=q\,k_{0}\), with \(q=0,\,\pm 1,\,\pm 2,\dots\) and \(k_{0}=1/R\), is the wave number associated with the quasimomentum \(\hbar k\). The functions \(u_{n,k}(x)\) share spatial period with the lattice \(u_{n,k}(x+d)=u_{n,k}(x)\), so that the probability density profile is homogeneous over the lattice sites. If the lattice is finite, and contains \(M\) sites, there are just \(M\) values of quasimomentum [15].
In a system with varying contact interactions, the linear Bloch waves have been shown to find continuation as nonlinear Bloch waves when the interactions are switched on [13; 16]. We will show that this continuation also exists in the presence of current-density interactions. In addition, in contrast to the case of contact interactions, there exist new extended states with a non-homogeneous density profile over the lattice sites.
Figure 1: (a) Linear (dotted line) and nonlinear (symbols) lowest energy bands for a ring lattice at rest, \(\Omega=0\), with ten sites, \(M=10\), and fixed average number density \(N/(2\pi R)=0.47/d\). Two types of interparticle interactions with equal strength are represented: contact interaction parameterized by \(|g|=1\) (see text), both repulsive \(g=1\) and attractive \(g=-1\), and current-dependent interaction \(\kappa=1\). (b) Density (top) and phase (bottom) profiles of nonlinear Bloch states \(\psi_{n,k}\) in the presence of current-density interactions, \(\kappa=1\) and \(g=0\). (c) Three instances of non-regular Bloch states with quasimomentum \(k=k_{0}\), from almost homogeneous, \(\psi_{1a}\), to large variations, \(\psi_{1c}\), in the density peaks.
### Nonlinear Bloch waves
First, we focus on the dispersion of the system at \(\Omega=0\) for varying quasimomentum. Insight can be obtained from the comparison with a system subject to contact interactions (hence following the usual GP equation); in this case, the linear energy bands are shifted to higher energies when the interaction is repulsive, whereas the opposite happens for attractive interaction. Therefore, in the presence of current-density interaction, where the effective interaction changes from repulsive to attractive according to the sign of the particle current, the resulting dispersion curves are expected to be asymmetric with respect to the value of quasimomentum, with energies higher than the linear bands for states with positive currents, and lower than the linear bands for states with negative currents.
As can be seen in Fig. 1(a), this is indeed the scenario shown by our numerical results for a ring lattice with \(M=10\) sites and shallow depth \(s=U_{0}/E_{L}=2\). The number of particles has been fixed for the nonlinear states considered to produce the average number density \(N/(2\pi R)=0.47/d\). For comparison, contact interaction cases are represented, and have been parameterized by the non-dimensional quantity \(g=m\,d\,g_{\rm 1D}/\hbar^{2}\), where \(g_{\rm 1D}\) is the one-dimensional contact interaction strength, so that \(|g|=\kappa=1\). The lowest energy band of the linear system (dotted line) lies in between the lowest chemical potential bands of nonlinear systems with positive (open symbols joined by dashed lines to guide the eye) and negative (open symbols joined by dot-dashed lines) contact interactions, whereas the energy eigenvalues of the system with current-density interaction (filled symbols) vary as predicted. The states with minimum (zero) and maximum (\(k=5\,k_{0}\)) values of quasimomentum have no particle currents, thus they follow a linear (Schrödinger) equation of motion and match the energy of the linear bands.
Four instances of Bloch states, with quasimomentum wave number \(k/k_{0}=1\), \(-1\), \(5\), represented by their density and phase profiles, are shown in Fig. 1(b). While states with opposite currents but equal absolute value of quasimomentum \(|k|\) do not show appreciable differences in the density profile, states with different \(|k|\) do, reflecting the increasing interaction associated to higher \(|k|\).
### Non-regular Bloch states
An interesting novelty of systems with current-density interactions, contrary to the case of contact interactions, is the existence of extended stationary states that do not conform to the usual picture of Bloch states, since they present a non-homogeneous density profile over the lattice sites. Neither do they conform to the features of alternative states hosting dark solitons in the lattice [17]. We will refer to them as non-regular Bloch states, since the quasimomentum, associated with the phase winding number \(q=k/k_{0}\), is still a well-defined quantity. As a general trend, the larger the variation between the density peaks of the lattice sites, the lower the constant current density becomes. Our numerical results for characteristic quantities suggest a continuum of non-regular Bloch states, as we were able to find close states with very small differences, of the order of \(1\%\), in their energies.
Figure 1(c) shows the density (top) and phase (bottom) profiles of three non-regular Bloch states with winding number \(q=k/k_{0}=1\). They range from almost homogeneous, \(\psi_{1a}\), to intermediate, \(\psi_{1b}\), and up to large variation, \(\psi_{1c}\), in the density peaks. The density modulation over the whole lattice has the form \(n_{k}\left[1+\beta\,\sin(k_{0}\,x)\right]\), where \(n_{k}\) is the density of the homogeneous Bloch state, and \(\beta\) varies from almost zero for \(\psi_{1a}\) to \(\beta=0.6\) for \(\psi_{1c}\). Despite the large differences in the density profile, their energy eigenvalues and energies differ by less than \(1\%\). The phase profiles also show small differences and follow a monotonic increase in the range \([0,\,2\pi]\).
### Lattice rotation and gap solitons
Gap solitons are localized states in systems loaded in optical lattices; they usually occupy a few sites, a small region of the whole lattice. Although their existence can be easily understood in systems with attractive interactions, similarly to translation-invariant settings, their emergence in the presence of repulsive interactions is a priori not that evident [18], and can be explained through the sign change induced by the lattice in the effective
Figure 2: Trajectories of nonlinear Bloch waves (solid and dashed lines) and gap solitons (thin lines with symbols) in a ring lattice moving with angular rotation \(\Omega\). All states contain the same number of particles, so that the varying rotation translates into a varying interaction. Nonlinear Bloch waves are characterized by the wave number \(k\) that indexes their quasimomentum, whereas the gap-soliton trajectories differ for positive (open circles) and negative (open triangles) currents. Left: Energy eigenvalues measured in units of the lattice recoil energy \(E_{L}\); the underlying energy bands of the linear problem (shaded regions) are represented for comparison. Right: Average current density in units of \(J_{0}=N\Omega_{0}\).
mass of the particles (see for instance Ref. [19]). In this work, the current-density interaction provides both possibilities for the emergence of gap solitons, which become distinct for positive and negative current densities. Since the interaction (or nonlinearity) is necessary for the solitons to exist, the ring lattice has to rotate in order for the gap solitons to emerge.
Although the finite lattice considered here, having \(M\) sites, allows for only \(M\) Bloch waves, the introduction of rotation gives access to the continuous spectrum of the infinite lattice [16]. In systems with Galilean symmetry (as happens for contact interactions), for varying rotation rate \(\Omega\in(-M/2,M/2]\times\Omega_{0}\), where \(\Omega_{0}=\hbar/(mR^{2})\), the energy of each Bloch wave \(\epsilon_{n,k}(\Omega)\) in the finite lattice [as obtained from Eq.(2)] reproduces the energy band profile against quasimomentum in the first Brillouin zone \(k\in(-\pi/d,\pi/d]\) of the infinite lattice with \(\Omega=0\). The dispersion graph, \(\epsilon_{n,k}\) versus \(\Omega\), is also useful in understanding the emergence of gap solitons; the energy degeneracies found in this graph for the linear system, which correspond to crossings of Bloch-wave trajectories, provide the origin of gap solitons when the interparticle interactions are switched on. Thus, gap solitons are the nonlinear continuation of linear states made of Bloch-wave superpositions [16].
Figure 2 shows our numerical results for stationary states in a moving ring lattice with current-density interactions and the same parameters as in Fig. 1. The eigenenergy (left) and current density (right) of both nonlinear Bloch states and gap solitons are represented for a fixed number of particles. Gap solitons spread when approaching the energy bands (light-gray shaded regions in the graph), and then become dynamically unstable [18]. Eventually, as happens in the present case at the bottom of the second energy band, they extend to the whole system (or cease to exist in an infinite lattice) when entering a band [16]. Overall, the chirality of the system manifests itself as asymmetric trajectories for positive and negative rotation rates of the lattice. In addition, as we demonstrate next, apparent differences arise in the states belonging to these families, which show distinct density profiles and stability properties.
Figure 3(a) shows the density and phase profiles of two typical gap solitons with the same number of particles and opposite lattice rotation. For positive rotation rate (top panel) the soliton is situated between the first and second energy bands, corresponding to the family represented by lines with open circles in Fig. 2. For negative rotation (bottom panel) the soliton belongs to the family lying in the semi-infinite gap, indicated by lines with open triangles in Fig. 2. The latter soliton, having negative current and then effective attractive interparticle interaction, is comparatively more compact than the former, and occupies just one lattice site. On the contrary, as can be seen in Fig. 3(b), the density profiles of two nonlinear Bloch waves with equal quasimomentum \(k=5\,k_{0}\) but opposite lattice rotations, hence opposite current densities, are almost indistinguishable (notwithstanding, the differences become clearer for increasing number of particles).
### Linear stability analysis
We have studied the linear stability of the stationary states reported in Fig. 2 by numerically solving the corresponding Bogoliubov equations (5). Before analyzing our results, it is instructive to recall the scenario of equivalent states with contact interparticle interactions; we particularize it for otherwise equal parameters as in Fig. 2. In such a case, while there is no difference regarding the sign of the quasimomentum, the dynamical stability depends strongly on the character of the contact interactions, either repulsive or attractive. For the former case,
Figure 3: Gap solitons (a) and nonlinear Bloch waves with quasimomentum \(k=5\,k_{0}\) (b), for positive \(\Omega=\Omega_{0}\) and negative \(\Omega=-\Omega_{0}\) lattice rotations, and otherwise equal parameters to those used in Fig. 2.
it is known that Bloch states close to the edge of the Brillouin zone become unstable; how close depends in turn on the strength of the interactions and the lattice depth [20]. The opposite happens for attractive contact interactions, where Bloch states close to zero quasimomentum become unstable; in this latter situation, one can understand the source of instability as associated with smoother, slowly varying density profiles, on which modulation instability can operate due to the existence of lower energy states with localized density profiles. Regarding fundamental (one main peak) gap solitons, although instabilities can be found when their chemical potential approaches a linear energy band, they are usually stable states.
Our results in current-interacting systems show, in general, a trend similar to the scenario of contact interactions. The main difference resides in the asymmetry between positive and negative quasimomenta, which can be mapped into systems with effective positive and negative contact interactions, respectively. Another particular difference is observed in the stability of gap solitons with positive rotation rates for the lattice depth considered in Fig. 2, \(s=2\), for which we have not found stable cases, despite the fact that the energy of the unstable modes (having complex frequencies) can be very small in comparison with the corresponding energy eigenvalue \(\mu\). Still, we did find stability for these solitons at higher values of the lattice depth.
Several examples of these general features on linear stability are presented in Fig. 4; both the real and imaginary part of the excitation modes are shown for each state considered. The panels (a) show the excitation energies of two nonlinear Bloch waves with the same absolute value of quasimomentum \(|k|=4k_{0}\) (close to the edge of the Brillouin zone) in a lattice at rest \(\Omega=0\). While the state with negative quasimomentum is dynamically stable, the positive-quasimomentum state is not. The scenario is analogous to systems with contact interparticle interactions, hence our results are the opposite (stable for positive rotation and unstable for negative rotation) for states with quasimomentum \(|k|=k_{0}\) (not shown).
The panels in Fig. 4(b) represent the energy excitations of the two gap solitons shown in Fig. 3. As anticipated, the soliton with negative rotation is dynamically stable, while the soliton with positive rotation, \(\Omega=\Omega_{0}\), presents complex frequencies that can cause the dynamical decay of this stationary state in a real-time evolution (see next section). However, for the same parameters but in a deeper lattice, \(s=3\), we have found that the corresponding soliton becomes stable.
## IV Dynamics
Beyond the linear analysis performed in the previous section, the stability of stationary states can only be ascertained through the nonlinear time evolution of the system. To this end, we have solved the time-dependent Eq. (1) for initial stationary states on which perturbative noise has been added. Although our results for the subsequent time evolution are consistent with the predictions of the linear stability analysis, we have also found some interesting cases whose dynamics show features of structural stability (small variations of the stationary-state profile that do not qualitatively break its structure) despite the presence of unstable linear modes.
Figure 5(a) depicts three snapshots at selected times \((t=0,\,T/2,\,T)\) of a time evolution, with total time \(T=25\,md^{2}/\hbar\), of the linearly unstable soliton of Fig. 3(a) with positive rotation rate. As can be seen, it confirms the linear prediction of instability shown in Fig. 4(b), since it displays the tunneling of particles into nearby lattice sites as time passes.
Figure 4: Linear excitation modes of stationary states for fixed number of particles. (a) Nonlinear Bloch states with \(k=4\,k_{0}\) in a lattice at rest. (b) Gap solitons with opposite rotation rates shown in Fig. 3.
The linear analysis also predicts the instability of the nonlinear Bloch states with \(k=5k_{0}\) and opposite rotation rates shown in Fig. 3(b), with unstable linear modes of higher energy in the case of positive rotation. However, the time evolution, Fig. 5(b), shows distinct nonlinear dynamics for them. While the positive-rotation Bloch state evolves into a non-symmetric structure that is highly variable in time, the negative-rotation one exhibits a small breathing dynamics that does not alter the geometric shape of the initial state. We have checked that different types of small perturbations lead to the same conclusions, and that no different dynamics appears for evolution times much longer (up to ten times) than the case shown in Fig. 5(b).
### Bloch oscillations
One of the most interesting features of the dynamics in optical lattices is the emergence of Bloch oscillations. This phenomenon has been shown to appear in BECs with contact interparticle interactions when the lattice velocity is slowly ramped up [21; 22]. It manifests the periodic nature of the system through the transit of the ground state from positive to negative quasimomentum states over the first energy band. As a consequence, the system's average velocity oscillates with respect to the lattice velocity. But in contrast to linear systems, the nonlinearity introduced by the interactions in BECs can lead to the breakdown of Bloch oscillations, which is associated with the instability of Bloch states close to the edge of the Brillouin zone [20]. In addition, the energy band structure changes with increasing interactions, so that, precisely at the edge of the Brillouin zone, it develops first a cusp and later a swallow-tail configuration, which prevents the adiabatic transit between different quasimomentum states [20].
In what follows, we explore Bloch oscillations in systems with current-dependent interactions. On examination of the dispersion of nonlinear Bloch states for varying rotation, Fig. 2, and despite the lack of symmetry with respect to the rotation direction, one can expect the transit from positive to negative velocities to take place if the states passed across for varying rotation are dynamically stable, or, if these states are unstable, if their instability modes grow at a slower rate than the transit speed.
The expected period of Bloch oscillations is
\[T_{B}=\frac{2\pi\hbar}{m\alpha Rd}, \tag{6}\]
where \(\alpha R\) is a constant acceleration in the ring, since this is the time taken to cross the first Brillouin zone from quasi-momentum \(k=-\pi/d\) to \(k=\pi/d\). The influence of the nonlinearity on the oscillations in systems with contact interactions can be captured, at least for smooth density profiles, by effectively modifying the lattice depth as \(U_{0}^{\text{(eff)}}=U_{0}/(1+4g_{1\text{D}}\bar{n}/E_{L})\)[21], where \(\bar{n}\) is the average number density. This effective potential provides us with a way to account for the current-density interactions on Bloch oscillations by means of an equivalent effective lattice depth
\[U_{0}^{\text{(eff)}}=U_{0}\,\left(1+4\frac{\hbar\kappa\bar{J}}{E_{L}}\right)^{ -1}, \tag{7}\]
where \(\bar{J}\) is the average current density.
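As a quick numerical check of Eqs. (6) and (7) (our own evaluation, in the units \(\hbar=m=d=1\) with \(\Omega_{0}=\hbar/(mR^{2})\) and \(R=Md/2\pi\)):

```python
import numpy as np

M = 10
R = M / (2 * np.pi)             # ring radius in units of d
Omega0 = 1.0 / R**2             # hbar / (m R^2) with hbar = m = 1
alpha = 2.0 * Omega0**2         # |angular acceleration| used in Fig. 6

T_B = 2 * np.pi / (alpha * R)   # Eq. (6), with d = 1
print(f"T_B = {T_B:.1f} m d^2 / hbar")   # ~12.7 for these parameters

E_L = np.pi**2 / 2.0
def U_eff(U0, kappa, J_bar):
    """Eq. (7): effective lattice depth for an average current density J_bar."""
    return U0 / (1.0 + 4.0 * kappa * J_bar / E_L)
```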
To test Eqs. (6) and (7), we have chosen the initial ground state in Fig. 1, with \(k=0\), and have imparted a constant angular acceleration \(\alpha\) to the ring lattice. The results are shown in Fig. 6 for the absolute value of the angular acceleration \(|\alpha|/\Omega_{0}^{2}=2\). Figure 6(a) represents the relative rotation of the evolved state with respect to the lattice for negative and positive signs of the acceleration. The observed period of the Bloch oscillations is consistent with Eq. (6), with the zero crossing (or zero relative velocity) reached at \(T_{B}/2\). At this point, the system transits through the edge of the Brillouin zone and
Figure 5: Selected snapshots of the real time evolution of stationary states shown in Fig. 3 after perturbative noise has been added on the initial states \(\psi_{0}\). In all the cases, the duration of the total time evolution is \(T=25\;md^{2}/\hbar\). (a) Gap soliton. (b) Nonlinear Bloch states.
the wave function presents \(M\) nodes, which can be interpreted as \(M\) dark solitons preceding the entry of \(M\) vortices in the ring during the subsequent evolution [16]; this view is better understood by monitoring the state average velocity measured in ring units \(\hbar/mR\), as depicted in Fig. 6(b).
Our results show that the oscillations obtained with higher acceleration last longer, since this provides a shorter time for the unstable modes to grow. As shown in Fig. 6, for \(|\alpha|/\Omega_{0}^{2}=2\) the decay does not appear before \(2T_{B}\). For instance, at a lower acceleration of \(\alpha=-0.5\Omega_{0}^{2}\) (not shown) the oscillations hardly complete one whole period \(T_{B}\) before breaking down, producing the decay of the homogeneous profile over the lattice sites into more localized density peaks. This decay resembles the action of modulation instabilities [23].
Apparent differences can be observed in the duration and smoothness of the oscillations, depending on the sign of the acceleration. In general terms, they can be explained by the effective lattice depth of Eq. (7). Since negative rotations translate into negative current densities, they reduce the effective lattice depth and favor the adiabatic variations in the particle flow. It is also worth noticing that, contrary to the case of contact interactions, the amplitude of the Bloch oscillations (the maximum relative speed) varies monotonically after each zero crossing; it keeps increasing (respectively decreasing) for positive (resp. negative) acceleration. This phenomenon reflects the monotonic dependence of the interactions on the current density. When the latter takes large negative values, the superfluid features of the system tend to disappear, the system state becomes strongly localized, and the current density approaches the lattice rotation. On the other hand, for large positive current densities the effective repulsive interaction makes the lattice progressively less relevant.
## V Conclusions
We have reported on gap solitons and nonlinear Bloch states in a rotating ring lattice within a theory with current-density interactions. Our results show that the presence of chirality is manifest in all the states considered, including their spectrum of linear excitations and the display of Bloch oscillations for constant angular acceleration. A novelty is the existence of stationary and dynamically stable non-regular Bloch states characterized by a modulated density profile.
The recent experimental achievement of this theory in Bose-Einstein condensates of ultracold atoms [4] opens the way for the experimental realization of the states and phenomena that we have described. Optical lattice potentials in a ring are currently available experimentally [24; 25]; moreover, our results are not restricted to this geometry, and can also be realized in one-dimensional linear lattices, as has been routinely done in the presence of contact interatomic interactions [13; 14].
Future prospects of our work include the study of fundamental and higher-order soliton states in different energy gaps, and the extension to realistic 2D or 3D systems that reach the quasi-1D regime.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (Grant No. 11402199), the Natural Science Foundation of Shaanxi Province (Grants No. 2022JM-004 and No. 2018JM1050), and the Education Department Foundation of Shaanxi Province (Grant No. 14JK1676).
Figure 6: Bloch oscillations with current-density interactions. (a) Relative angular rotation, \(\langle J/R|\psi|^{2}\rangle-\Omega\), versus time for a lattice with varying rotation rate \(\Omega=\alpha\,t\) and absolute value of the angular acceleration \(|\alpha|=2\,\Omega_{0}^{2}\). (b) For \(\alpha>0\), snapshots of the system state at selected times: \(t_{1}\) at the minimum relative velocity, \(t_{2}\) at the maximum relative velocity, and \(t_{3}\) at the first local minimum after the maximum of the relative velocity. |
2302.13810 | Large ordered moment with strong easy-plane anisotropy and vortex-domain
pattern in the kagome ferromagnet Fe$_3$Sn | We report the structural and magnetic properties of high-quality bulk single
crystals of the kagome ferromagnet Fe$_3$Sn. The dependence of magnetisation on
the magnitude and orientation of the external field reveals strong easy-plane
type uniaxial magnetic anisotropy, which shows a monotonous increase from
$K_1=-0.99\times 10^6 J/m^3$ at 300\,K to $-1.23\times10^6 J/m^3$ at 2\,K. Our
\textit{ab initio} electronic structure calculations yield the value of total
magnetic moment of about 6.9 $\mu_B$/f.u. and a magnetocrystalline anisotropy
energy density of 0.406\,meV/f.u. ($1.16\times10^6 J/m^3$) both being in good
agreement with the experimental values. The self-consistent DFT computations
for the components of the spin/orbital moments indicate that the small
difference between the saturation magnetisations measured along and
perpendicular to the kagome layers results from the subtle balance between the
Fe and Sn spin/orbital moments on the different sites. In zero field, magnetic
force microscopy reveals micrometer-scale magnetic vortices with weakly pinned
cores that vanish at $\sim$3\,T applied perpendicular to the kagome plane. Our
micromagnetic simulations, using the experimentally determined value of
anisotropy, well reproduce the observed vortex-domain structure. The present
study, in comparison with the easy-axis ferromagnet Fe$_3$Sn$_2$, shows that
varying the stacking of kagome layers provides an efficient control over
magnetic anisotropy in this family of Fe-based kagome magnets. | Lilian Prodan, Donald M. Evans, Sinéad M. Griffin, Andreas Östlin, Markus Altthaler, Erik Lysne, Irina G. Filippova, Serghei Shova, Liviu Chioncel, Vladimir Tsurkan, István Kézsmárki | 2023-02-06T11:43:02Z | http://arxiv.org/abs/2302.13810v1 | Large ordered moment with strong easy-plane anisotropy and vortex-domain pattern in the kagome ferromagnet Fe\({}_{3}\)Sn
###### Abstract
We report the structural and magnetic properties of high-quality bulk single crystals of the kagome ferromagnet Fe\({}_{3}\)Sn. The dependence of magnetisation on the magnitude and orientation of the external field reveals strong easy-plane type uniaxial magnetic anisotropy, which shows a monotonous increase from \(K_{1}=-0.99\times 10^{6}J/m^{3}\) at 300 K to \(-1.23\times 10^{6}J/m^{3}\) at 2 K. Our _ab initio_ electronic structure calculations yield the value of total magnetic moment of about 6.9 \(\mu_{B}\)/f.u. and a magnetocrystalline anisotropy energy density of 0.406 meV/f.u. (\(1.16\times 10^{6}J/m^{3}\)) both being in good agreement with the experimental values. The self-consistent DFT computations for the components of the spin/orbital moments indicate that the small difference between the saturation magnetisations measured along and perpendicular to the kagome layers results from the subtle balance between the Fe and Sn spin/orbital moments on the different sites. In zero field, magnetic force microscopy reveals micrometer-scale magnetic vortices with weakly pinned cores that vanish at \(\sim\)3 T applied perpendicular to the kagome plane. Our micromagnetic simulations, using the experimentally determined value of anisotropy, well reproduce the observed vortex-domain structure. The present study, in comparison with the easy-axis ferromagnet Fe\({}_{3}\)Sn\({}_{2}\), shows that varying the stacking of kagome layers provides an efficient control over magnetic anisotropy in this family of Fe-based kagome magnets.
**Notes.** _Version 2. This manuscript is under review in Physical Review B._
## I Introduction
Magnetic compounds with kagome-lattice arrangement of spins have recently attracted much attention due to their unusual magnetic and electronic properties related to the specific topology of their electronic band structures [1; 2; 3; 4; 5]. Recent theoretical and experimental studies have demonstrated that the existence of flat bands, nodal points and nodal lines appearing close to the Fermi energy significantly affects the magnetic, magneto-transport and magneto-optical properties of kagome materials [6]. In particular, fascinating physical phenomena like giant anomalous and topological Hall effects, giant Nernst effect, topological superconductivity, magnetic spin chirality and skyrmion bubbles have been reported for a large group of kagome materials such as the van der Waals \(AV_{3}\)Sb\({}_{5}\) (\(A\) = K, Cs, Rb) [7; 8], the rare-earth based \(ReT_{6}\)Sn\({}_{6}\) (\(Re\) = Gd, Tm, Tb, Y; \(T\) = Mn, V) [9; 10], the magnetic Weyl semimetal Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) [11; 12], and the binary metals \(T_{x}\)Sn\({}_{y}\) (\(T\) = Fe, Mn, Co; \(x\):\(y\) = 1:1, 3:2, 3:1) [13; 14; 15; 16; 17; 18]. The coexistence of these effects provides an exceptional platform to study electronic band topology and its interplay with magnetic spin and orbital effects.
Here we focus on the magnetic properties of the quasi-2D kagome magnet Fe\({}_{3}\)Sn, investigated by anisotropy measurements, _ab initio_ calculations and imaging of the magnetic domain pattern. Fe\({}_{3}\)Sn is an itinerant ferromagnet, a member of the family of iron stannides with the general formula Fe\({}_{x}\)Sn\({}_{y}\). Our particular interest is in compounds with the kagome-lattice arrangement, namely compounds with \(x:y\) = 1:1; 3:1; 3:2. Depending on the ratio \(x:y\), i.e. the stacking of the kagome layers, these compounds realize various hexagonal magnetic space groups and different magnetic ground states. For example, FeSn crystallises in the \(P6/mmm\) symmetry and below 365 K it is an easy-plane antiferromagnet with ferromagnetic arrangement of spins within each kagome layer [19; 20; 21]. Fe\({}_{3}\)Sn has a structure of \(P6_{3}/mmc\) symmetry and exhibits ferromagnetic order below 743 K [19; 22]. The crystal structure of Fe\({}_{3}\)Sn is shown in Figs. 1(a) and (b). The _ab_ layer consists of a breathing kagome lattice of two sizes of equilateral Fe triangles, with Sn atoms in the centers of the kagome hexagons. The unit cell of Fe\({}_{3}\)Sn contains two adjacent, laterally
displaced kagome layers, separated by half of the lattice constant along the \(c\) axis. The structure of Fe\({}_{3}\)Sn is much simpler than that of Fe\({}_{3}\)Sn\({}_{2}\), which realizes a third type of stacking and crystallizes in the \(R\overline{3}m\) symmetry. Namely, Fe\({}_{3}\)Sn\({}_{2}\) contains two such basic blocks of Fe\({}_{3}\)Sn bilayers stacked along the \(c\) axis and separated by a honeycomb layer of Sn atoms [19; 22]. In contrast to Fe\({}_{3}\)Sn, Fe\({}_{3}\)Sn\({}_{2}\) is an easy-axis ferromagnet with a Curie temperature of 612 K [23], which shows skyrmion bubbles at room temperature [18; 24]. However, this compound shows a spin reorientation below room temperature, where the easy axis of magnetisation gets tilted towards the kagome plane [25; 26; 27; 22].
The physical properties of Fe\({}_{3}\)Sn have been investigated predominantly on polycrystalline samples synthesized by solid-state reaction or by arc-melting of metallic Fe and Sn in argon atmosphere. Early Mössbauer studies of Trumpy et al. [23] and Djega-Mariadassou et al. [28] revealed a ferromagnetic behavior of Fe\({}_{3}\)Sn and concluded that the spin moment is oriented along the \(c\) axis. In contrast, a recent analysis of the magnetic properties of Fe\({}_{3}\)Sn by Sales et al. [29], performed on field-oriented polycrystalline powder, implied a predominantly in-plane orientation of the magnetisation, with the magnitude of the easy-plane anisotropy \(K_{1}=-1.8\,MJ/m^{3}\) at 300 K. The easy-plane character of the anisotropy was further supported by the magnetisation studies of Fayyazi et al. [30], performed on an oriented plaque of very small crystals with a mass of 10 \(\mu\)g, obtained by the reactive flux technique. However, this work reported an unusual temperature dependence of the anisotropy field, namely its decrease from 200 K to 10 K.
The precise quantification of magnetic anisotropy is a prerequisite for understanding the diversity of complex spin textures, for which kagome magnets offer a fertile ground [31]. Moreover, the magnetic anisotropy, in interplay with the Dzyaloshinskii-Moriya interaction, competing and frustrated exchanges, or dipole-dipole interactions, can affect the properties of mesoscale spin textures, including conventional domain walls, vortices, and skyrmions [32; 33; 34; 35; 36; 37]. As a prime example, the competition between easy-axis magnetic anisotropy and dipole-dipole interaction can stabilize magnetic bubbles and skyrmions in finite magnetic fields [18; 24; 33; 38]. On the other hand, in-plane magnetized systems, i.e., magnets with easy-plane anisotropy, can develop magnetic vortices and anti-vortices [39; 40; 41; 42; 43].
Although the Fe\({}_{x}\)Sn\({}_{y}\) compounds have long been known, the microscopic origin and the precise values of the exchange interactions and magnetic anisotropy constants are still unsettled in these compounds; even the spin arrangement is under debate in some of these materials. From the experimental point of view, the situation is complicated by a large variation in sample quality, leading to inconsistencies in the data available in the literature. A large part of the data was obtained on polycrystalline samples, while those reported for single crystals show large scattering, evidently related to different sample quality, e.g., homogeneity, deviation from stoichiometry and impurity content. Therefore, precisely quantifying the magnetic interactions in the single-bilayer Fe\({}_{3}\)Sn is a prerequisite for understanding magnetism on the microscopic level in these kagome magnets.
Here we report detailed magnetometry studies performed on large stoichiometric single crystals grown by chemical transport reactions. The appropriate size of the crystals (several mg) allowed us to accurately determine their orientation prior to the magnetisation measurements along different crystallographic directions. Moreover, our angular-dependent magnetisation measurements, performed in different fields and at various temperatures, allowed a highly reliable quantification of the anisotropy constants of Fe\({}_{3}\)Sn. The experimental studies were complemented by _ab initio_ electronic structure calculations to determine the magnetic properties in the ground state (magnetic moments on Fe and Sn as well as anisotropy energies).
With the spin/orbital moments constrained to specific orientations and the spin-orbit coupling included, the calculations revealed an easy-plane magnetic anisotropy. The computed uniaxial magnetocrystalline anisotropy energy, which arises from the collective effect of the crystal structure and spin-orbit coupling, is in excellent agreement with the experimentally determined value. Furthermore, a slight departure from the collinear magnetic configuration is predicted due to the tilting of the orbital magnetic moment of Fe with respect to its spin moment. Finally, our MFM study reveals the formation of a complex domain pattern built of in-plane flux-closure domains, i.e. magnetic vortices, on the micrometer scale. This vortex-domain pattern is well reproduced by our micromagnetic simulations using the magnetic interaction parameters determined experimentally and theoretically, further supporting the high reliability of these parameters.
## II Methods
**Crystal growth.** Polycrystalline Fe\({}_{3}\)Sn was prepared by solid-state reactions of high-purity elements Fe (99.99%) and Sn (99.995%). A stoichiometric amount of Fe and Sn was mixed and sealed in a quartz ampoule evacuated to 10\({}^{-3}\) mbar. The mixture was annealed for four days at 780 \({}^{\circ}\)C followed by quenching in ice water. In order to achieve full reaction and homogeneity, two sintering cycles were performed. The phase purity after each sintering was checked by x-ray powder diffraction. Single crystals of Fe\({}_{3}\)Sn were grown by the chemical transport reactions method using the polycrystalline powder as starting material and I\({}_{2}\) as the transport agent. The growth was performed in exothermic conditions in a two-zone horizontal furnace with a temperature gradient of 50 \({}^{\circ}\)C in the temperature range of 850-800 \({}^{\circ}\)C. Crystals up to 1.5 mm in size were obtained after 6 weeks of transport. The chemical composition of the single crystals was analyzed with a ZEISS Crossbeam 550, using the energy
dispersive x-ray spectroscopy (EDS) technique.
**Single-crystal x-ray diffraction.** The single-crystal x-ray diffraction measurement was carried out with a Rigaku Oxford-Diffraction XCALIBUR E CCD diffractometer equipped with graphite-monochromated MoK\(\alpha\) radiation. The data were corrected for the Lorentz and polarization effects and for the absorption by multi-scan empirical absorption correction methods. The structure was refined by the full matrix least-squares method based on \(F^{2}\) with anisotropic displacement parameters. The unit cell determination and data integration were carried out using the CrysAlis package of Oxford Diffraction [44]. All calculations were carried out by the programs SHELXL2014 [45] and the WinGX software [46].
**Magnetic properties.** Magnetic properties were measured by a SQUID magnetometer (MPMS 3, Quantum Design) in the temperature range of 1.8-900 K and magnetic fields up to 7 T. The angular-dependent magnetization measurements were performed both within the \(ab\) and \(ac\) planes on a cylindrical sample with the \(ac\) plane as the basal plane of the cylinder. The measurements in fields up to 14 T were performed with a vibrating-sample magnetometer using a physical properties measurement system (PPMS, Quantum Design).
**Computational Details.** Density Functional Theory (DFT) calculations were performed with the Vienna _Ab initio_ Simulation Package (VASP) [47; 48; 49] using projector augmented wave (PAW) pseudo-potentials [50; 51]. We treated the Fe(3d, 4s) and Sn(5s, 5p) electrons as valence. Plane waves were expanded to an energy cut-off of 800 eV, and a Monkhorst-Pack \(\Gamma\)-centered k-point grid of \(10\times 10\times 10\) was used for structural optimizations and \(20\times 20\times 20\) for accurate total energy calculations to obtain the magnetic energetics. A smearing value of 0.02 eV was used for all calculations. The PBEsol functional was used throughout [52]. Spin-orbit coupling was included self-consistently as implemented in VASP. The electronic convergence criterion was set to \(10^{-8}\) eV and the force convergence criterion to 0.002 eV/Å. Our reference structure was taken from the Materials Project (mp-20883) [53; 54; 55; 56].
The ELK [57] computation was performed with an equally large \(k\)-mesh (\(21\times 21\times 21\)), and the PBE-GGA functional was used with the experimental lattice parameters given in Table 1. For the FPLAPW calculations, the plane-wave cutoff \(K_{max}\) was set so that \(R_{mt}K_{max}=7.0\), where \(R_{mt}\) is the smallest muffin-tin radius.
**MFM imaging.** The magnetic domain pattern was imaged using a low-temperature attocube attoAFM I in MFM mode, equipped with a superconducting magnet. The magnetic texture on an as-grown _ab_ surface was recorded via a phase-sensitive feedback loop that records changes in the resonance frequency, which are proportional to the gradient of the stray magnetic field of the sample along the magnetisation of the MFM tip, i.e., normal to the surface. The magnetic tips used were PPP-MFMR probes from NanoSensor with a magnetic moment of \(5.09\times 10^{-13}\,A\,m^{2}\). All data were collected at 100 K for experimental reasons, such as the thermal stability of the equipment.
**Micromagnetic simulations.** Simulations were carried out using MuMax3 v3.10 [58]. Here, the microstructure is found by minimizing a total energy made up of terms representing the Heisenberg exchange, first-order uniaxial anisotropy, Zeeman, and demagnetisation energies. In practice, this corresponds to minimizing the functional
\[\varepsilon=\int_{V_{S}}\left[A_{ex}(\nabla m)^{2}-K_{u}m_{z}^{2}-M_{s}\,B_{ext}\cdot m-\frac{1}{2}M_{s}\,B_{dem}\cdot m\right]dr \tag{1}\]
where the reduced magnetisation \(m(x,y,z)\) is the magnetisation \(M(x,y,z)\) divided by the saturation magnetisation \(M_{s}\), and the magnetisation forms a continuous vector field within the sample volume \(V_{S}\). The exchange stiffness constant, the first-order uniaxial anisotropy constant, the external magnetic field and the demagnetisation field are represented by \(A_{ex}\), \(K_{u}\), \(B_{ext}\) and \(B_{dem}\), respectively. Periodic boundary conditions were not necessary; the simulation was initiated from a random state, and an easy-plane anisotropy (negative \(K_{u}\) with the anisotropy axis along (0, 0, 1)) was used. The mesh consisted of \(4\times 4\times 4\) nm cells, and the geometry was \(2048\times 1024\times 256\) nm. Note that MuMax3 is able to calculate the MFM response: it knows the orientation of the magnetisation, and therefore the stray fields, and the MFM contrast arises from the interaction between the MFM tip and the gradient of the stray field.
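The chosen cell size can be cross-checked against the characteristic micromagnetic length scales. The exchange stiffness is not quoted here, so the value below is an assumed, typical magnitude for Fe-based magnets used purely for illustration; \(M_{s}\) is estimated from the measured saturation moment and the unit-cell volume (see the Results section):

```python
import numpy as np

mu0 = 4e-7 * np.pi
Ms = 1.13e6      # A/m, from 2.27 mu_B/Fe and the cell volume in Table 1
Ku = 1.27e6      # J/m^3, |K_u| at 2 K (easy plane)
A_ex = 1.0e-11   # J/m -- assumed typical value, NOT taken from this paper

l_ex = np.sqrt(2 * A_ex / (mu0 * Ms**2))   # magnetostatic exchange length
delta_w = np.pi * np.sqrt(A_ex / Ku)       # Bloch-wall width parameter
# with these inputs: l_ex ~ 3.5 nm, delta_w ~ 8.8 nm, so the 4 nm cells
# resolve the relevant textures reasonably well
print(f"l_ex = {l_ex*1e9:.1f} nm, delta_w = {delta_w*1e9:.1f} nm")
```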
## III Results and discussions
### Structure, magnetic properties and anisotropy
Typical as-grown Fe\({}_{3}\)Sn crystals used in our study are shown in the inset of Fig. 2(a). The energy-dispersive x-ray analysis found an excess of Fe of \(\lesssim\)1 %, indicating that the stoichiometry is close to the ideal Fe\({}_{3}\)Sn. The single-crystal x-ray diffraction study revealed the hexagonal \(P6_{3}/mmc\) (#194) space group symmetry and showed no traces of impurity phases. The calculated cell parameters \(a\) and \(c\) are close to those reported for polycrystalline Fe\({}_{3}\)Sn [29; 19]. The structural parameters obtained from the single-crystal refinement are summarised in Table 1.
The magnetisation measurements were carried out on a single crystal with a size of \(0.7\times 0.6\times 0.5\,mm^{3}\). The temperature-dependent magnetisation in 1 T applied along the \(a\) axis exhibits a steep increase below 750 K indicating the onset of ferromagnetic order, as shown in Fig. 2 (a). The Curie temperature of \(T_{C}=705\) K was obtained as the location of the maximum in the temperature derivative of the magnetisation, and is close to previously reported data for polycrystalline samples [22; 19].
Figure 2(b) shows the field-dependent magnetisation curves, \(M(H)\), measured at 2 K for magnetic fields applied along three orthogonal directions. Two directions are in the basal \(ab\) plane, \([\overline{1}2\overline{1}0]\) and \([10\overline{1}0]\) being parallel and perpendicular to the \(a\) axis, respectively, and the
hexagonal \(c\) axis (equivalently [0001]) is chosen as the third direction. The magnetisation within the \(ab\) plane reaches saturation already at \(\sim\)1 T. We found that the saturation is reached in a slightly higher field for \(H||[10\overline{1}0]\) than for \(H||[\overline{1}2\overline{1}0]\), indicating that the \(a\) axis is the easy axis of the magnetisation. For fields along the \(c\) axis, the saturation takes place at higher fields, above 3 T. The uniaxial magnetocrystalline anisotropy within the \(ac\) plane was calculated from the hysteresis curves using an "area method" [59], considering the difference of the integrals \(\int_{0}^{M_{s}}H_{i}dM\) for in-plane (\(a\) axis) and out-of-plane (\(c\) axis) field orientations. Here \(H_{i}\) is the internal field determined as the difference between the applied magnetic field \(H\) and the demagnetizing field \(DM\), where \(D\) is the demagnetisation coefficient. We obtained the value of \(K_{u}=-1.27\times 10^{6}\,J/m^{3}\) at 2 K for the uniaxial anisotropy constant, which decreases to \(K_{u}=-0.97\times 10^{6}\,J/m^{3}\) at room temperature. A similar approach was used to calculate the sixth-order anisotropy in the \(ab\) plane, which led to the value of \(2.3\times 10^{4}\,J/m^{3}\) at 2 K, decreasing to \(1.8\times 10^{4}\,J/m^{3}\) at 300 K. The anisotropy in the basal plane is nearly two orders of magnitude weaker than the uniaxial anisotropy.
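In code, the area method amounts to integrating the internal field over the magnetisation for the two orientations and taking the difference; a minimal NumPy sketch follows (our implementation, with both curves assumed to be sampled in SI units from \(M=0\) up to \(M_{s}\)):

```python
import numpy as np

mu0 = 4e-7 * np.pi

def anisotropy_area_method(H_easy, M_easy, H_hard, M_hard, D_easy=0.0, D_hard=0.0):
    """K_u from M(H) curves along the easy (a) and hard (c) axes.

    H and M in A/m; H_i = H - D*M is the internal field. The factor mu0
    converts (A/m)^2 to J/m^3; the result is negative for easy-plane
    anisotropy, matching the sign convention in the text."""
    Hi_easy = H_easy - D_easy * M_easy
    Hi_hard = H_hard - D_hard * M_hard
    return mu0 * (np.trapz(Hi_easy, M_easy) - np.trapz(Hi_hard, M_hard))
```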
The inset in Fig. 2(b) shows the temperature dependence of the magnetisation measured in 7 T. The value of the saturation moment is 2.27 \(\mu_{B}\)/Fe at 2 K for field in the \(ab\) plane, which is \(\sim\)2 % larger than the saturation value along the \(c\) axis. Although this difference is rather small, it cannot originate from demagnetisation effects, since the sample has a circular shape in the \(ac\) plane. Additional measurements, not shown here, show that this difference persists up to 14 T.
The angular dependence of the magnetisation \(M(\phi)\) within the \(ac\)-plane, measured in different fields at 300 K, is shown in Figure 3(a). Here \(\phi\) is the angle spanned by the magnetic field and the \(c\) axis (see inset of Fig. 3(b)). As a general trend, the modulation of \(M(\phi)\) is suppressed with increasing magnetic field. In addition, there is a change in its functional form. While \(M(\phi)\) exhibits a minimum and a maximum in low fields parallel and perpendicular to the \(c\) axis, respectively, upon saturation a local maximum develops also for field along the \(c\) axis.
Figure 1: (a) Schematic representation of the crystal structure of layered Fe\({}_{3}\)Sn. Red and green spheres represent Fe and Sn ions, respectively. Transparent and bold atoms are from different planes. (b) Breathing kagome lattices of equilateral Fe triangles of two distinct sizes (marked by blue and red lines) in the \(ab\) plane, as seen when viewed along the \(c\) axis. (c)/(d) Magnetic structure with moment parallel/perpendicular to the \(c\) axis. Green and orange arrows show Fe and Sn spin moments, respectively. (e) Calculated magnetocrystalline anisotropy energy for rotation of Fe spin within the \(ab\) plane (purple) and the \(ac\) plane (green), assuming a ferromagnetic ordering. In the former no angular dependence is observed, while in the latter case the energy has a maximum for spins pointing along the \(c\) axis (\(\phi=0\)). (f) Calculated orbital moment magnitude of Fe for in-plane (\(l_{x}\)) and out-of-plane (\(l_{z}\)) projections as a function of Fe spin tilt angle.
\begin{table}
\begin{tabular}{l l} \hline \hline Refined empirical formula & Fe\({}_{3}\)Sn \\ \hline Space group & \(P6_{3}/mmc\) (No. 194) \\ \(a\) (Å) & 5.4604(3) \\ \(c\) (Å) & 4.3458(3) \\ Volume (Å\({}^{3}\)) & 112.215(15) \\ Z, D\({}_{calc}\) (g/cm\({}^{3}\)) & 2, 8.472 \\ \(\mu\) (mm\({}^{-1}\)) & 29.551 \\ \(\Theta\) range (\({}^{\circ}\)) & 4.310 - 29.014 \\ Reflections collected/unique & 577/68 [R\({}_{int}\) = 0.0467] \\ No. variables & 8 \\ Goodness-of-fit on \(F^{2}\) & 1.001 \\ Extinction coefficient & 0.174(9) \\ R\({}_{1}\), wR\({}_{2}\) (all data) & 0.0143, 0.0352 \\ \hline Atom & Position & \(x\) & \(y\) & \(z\) & U\({}_{eq}^{*}\) \\ \hline Fe & 6h & 0.1544(1) & 0.3088(2) & 0.2500 & 7(1) \\ \hline Sn & 2d & 0.6667 & 0.3333 & 0.2500 & 5(1) \\ \hline \end{tabular}
\end{table}
Table 1: Structural refinement details and crystal data for Fe\({}_{3}\)Sn determined through single-crystal x-ray diffraction. \({}^{*}\)U\({}_{eq}\) is defined as one third of the trace of the orthogonalized \(U_{ij}\) tensor.
To avoid complications due to the presence of multiple magnetic domains, we determined the anisotropy based on \(M(\phi)\) curves measured in the fully field-polarized state, as shown in Fig. 3(b) for the 4 T curve. The angular dependence of the magnetisation was fitted based on the commonly used phenomenological expression for the magnetic anisotropy energy of a hexagonal ferromagnet [59]
\[E_{A}=K_{1}\sin^{2}\theta+K_{2}\sin^{4}\theta+K_{3}\sin^{6}\theta\cos 6\Psi. \tag{2}\]
Here, \(K_{1}\) and \(K_{2}\) are the first- and second-order anisotropy constants, respectively. \(\theta\) is the angle spanned by the magnetisation with the \(c\) axis, while \(\Psi\) denotes the angle between the projection of the magnetisation to the \(ab\) plane and the \(a\) axis. In our fitting the \(K_{3}\) term is neglected, since our data show that the in-plane anisotropy is less than 2 % of the total magnetic anisotropy, as already discussed above.
The fitting formula for the calculation of the anisotropy constants \(K_{1}\) and \(K_{2}\) from angular-dependent magnetisation measurements was derived by minimising the total energy per unit volume, \(E=E_{A}+E_{Z}+E_{D}\), where \(E_{Z}=-M_{s}H\cos(\theta-\phi)\) is the Zeeman energy and \(E_{D}=\frac{1}{2}DM_{||}^{2}\) is the magnetostatic energy of the sample in its own field. The formula used for fitting the experimental data is
\[\sin 2\theta(K_{1}+2K_{2}\sin^{2}\theta)=-H_{i}\sqrt{M_{s}^{2}-M_{||}^{2}}, \tag{3}\]
where \(M_{||}\) is the projection of \(M_{s}\) on the applied field, which is measured in experiment (see inset of Fig. 3(b)).
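Because Eq. (3) is linear in \(K_{1}\) and \(K_{2}\) once \(\theta\) is recovered from the measured \(M_{||}\), the fit amounts to linear least squares. The following sketch is ours and schematic: `phi` and `M_par` stand for digitised \(M(\phi)\) data in the field-polarised state, and the branch choice for \(\theta\) is deliberately simplified.

```python
import numpy as np

def fit_K1_K2(phi, M_par, Ms, H_i):
    """Least-squares estimate of (K1, K2) from Eq. (3):
        sin(2*theta) * (K1 + 2*K2*sin(theta)^2) = -H_i * sqrt(Ms^2 - M_par^2),
    with theta recovered from M_par = Ms*cos(theta - phi).
    phi: field angle from the c axis (rad); units assumed consistent."""
    delta = np.arccos(np.clip(M_par / Ms, -1.0, 1.0))
    theta = phi + delta            # simplified branch: M lags towards the easy plane
    s2 = np.sin(2.0 * theta)
    rhs = -H_i * np.sqrt(np.maximum(Ms**2 - M_par**2, 0.0))
    # linear model: rhs = K1 * s2 + K2 * (2 * s2 * sin(theta)^2)
    A = np.column_stack([s2, 2.0 * s2 * np.sin(theta)**2])
    (K1, K2), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return K1, K2
```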
The fit to the angular-dependent magnetisation (marked as R method, fit 1 in Fig. 3(b)) describes the angular dependence of \(M(\phi)\) well over the whole angular range, except in the immediate vicinity of the \(c\) axis, where it overshoots the experimental data. The experimentally observed difference in \(M_{s}\) for fields parallel and perpendicular to the \(c\) axis
Figure 3: (a) Angular dependence of the magnetisation measured in different fields at 300 K. The magnetisation component parallel to the field is detected. (b) Experimental data (open circles) and fit to the experimental data (solid lines) in 4 T at 300 K.
Figure 2: (a) The temperature-dependent magnetisation of Fe\({}_{3}\)Sn single crystal measured in 1 T applied along \(a\) axis. The inset shows as-grown Fe\({}_{3}\)Sn single crystals on a mm-scale paper.(b) Magnetisation curves measured at 2 K in magnetic fields applied in two directions in the \(ab\) plane—parallel to the \(a\) axis or [\(\bar{1}\bar{2}\bar{1}0\)] and perpendicular to the \(a\) axis or [\(\bar{1}0\bar{1}0\)], and with fields along the \(c\) axis or [0001]. The inset shows the temperature dependence of the magnetisation in 7 T for these three directions.
(\(\sim 2\,\%\)) indicates an anisotropy of the \(g\)-tensor, i.e. implies peculiar orbital contributions to the magnetisation. (Note that if the \(g\)-tensor anisotropy were completely negligible, the magnetisation would point parallel to the applied field in the high-field limit, and hence \(\theta\) in Eq. 2 and \(\phi\) in Fig. 3(b) would become identical.) From the fit to the experimental data we derived the value of the first anisotropy constant \(K_{1}=-1.32\times 10^{6}J/m^{3}\) at 2 K, in good agreement with the \(K_{1}=-1.34\times 10^{6}J/m^{3}\) determined by the Sucksmith-Thompson (S-T) method (see below) [60].
To account for the anisotropy of the saturation magnetisation we used the approach developed by J. Alameda \(et\,al.\) and A. S. Bolyachkin \(et\,al.\) [61; 62]. This fit (fit 2 in Fig. 3(b)) improves the description of the experimental data for angles close to the \(c\) axis. Figure 4(a) summarizes the temperature dependence of the anisotropy constants \(K_{1}\) and \(K_{2}\), as determined by two independent methods: a) from the fits to the angular dependence of the magnetisation measured in 4 T and b) calculated from the magnetisation curves along the \(c\) axis, following the modified S-T method, which takes into account the anisotropy of the saturation magnetisation [62]. At 300 K, both methods yield very close values, \(K_{1}=-0.99\times 10^{6}J/m^{3}\) and \(-1.01\times 10^{6}J/m^{3}\), respectively. The magnitude of \(K_{1}\) increases monotonically with decreasing temperature, in contrast to the non-monotonic behavior reported by Fayyazi \(et\,al.\) [30]. Moreover, the absolute values of \(K_{1}\) reported in [30] are about 20% lower than our experimental values. These discrepancies likely arise from the limited accuracy of sample orientation in the former study, which was not an issue in our case owing to the larger single crystals. \(K_{2}\) has the opposite sign and is approximately one order of magnitude smaller than \(K_{1}\). At 300 K, we obtained \(K_{2}=1.4\times 10^{5}J/m^{3}\) and \(2.7\times 10^{4}J/m^{3}\) using the R and S-T methods, respectively.
The reason for the difference in the values of \(K_{1}\) and \(K_{2}\) is unclear and may be related to the different data sets used for fitting: high-field magnetisation data in the saturated state in the R method, and low-field data in the S-T method. However, the value of \(K_{2}\) derived from the R method should be treated with caution because of the larger uncertainty in its determination, caused by the nonlinear extension of the rotator spring and the change of its elasticity with decreasing temperature.
To further validate our quantification of the uniaxial anisotropy in Fe\({}_{3}\)Sn, in Fig. 4(b) we compare the sum \(K_{1}+K_{2}\), as determined using the angular-dependent magnetisation and the Sucksmith-Thompson methods, with the overall anisotropy \(K_{u}\) obtained using the “area method”. The values obtained by the three methods agree to within 10% at all temperatures.
### _Ab initio_ calculations of magnetic moments and anisotropy
The origin of the magnetocrystalline anisotropy energy (MAE) is well established for \(3d\) transition metals, arising from a combination of the spin-orbit coupling and the crystal-field environment of the magnetic species. Being a ground-state property, the MAE can in principle be calculated using DFT; however, the small energy scales associated with the MAE in \(3d\) transition-metal systems make this challenging, owing to the tight convergence and dense sampling needed. For this reason, we compare two different implementations of DFT: a pseudo-potential code with a plane-wave basis set (VASP), and an all-electron code with an LAPW basis (ELK).
DFT computations of the Fe\({}_{3}\)Sn binary system have recently been published [29; 30]; in particular, it was found that the easy magnetic axis lies in the hexagonal plane. Here, we first present our calculations of the magnetic properties using VASP. We find the calculated ground state to be ferromagnetic, with a spin moment of 2.20 \(\mu_{B}\) per Fe, an orbital moment of 0.06 \(\mu_{B}\) per Fe, and a small antiparallel moment of 0.18 \(\mu_{B}\) on the Sn site, giving a total magnetisation of 6.4 \(\mu_{B}\)/f.u., consistent with our experimental value of 6.8 \(\mu_{B}\)/f.u. We find no off-plane canting of moments aligned along the easy-plane direction in our VASP calculations, as shown in Fig. 1(f).
To calculate the magnetocrystalline anisotropy, we rotate the spin in the in-plane and out-of-plane directions, finding the corresponding energy for several angles, as depicted in Figure 1(e). We find the ground state magnetic
Figure 4: (a) Temperature-dependent anisotropy constants \(K_{1}\) and \(K_{2}\) as determined from the angular dependence of the magnetisation (circles, marked as R) and by the S-T method (triangles). (b) The uniaxial constant \(K_{u}\) determined by the “area method” (Ar) is plotted together with \(K_{1}\) + \(K_{2}\) taken from panel (a). Lines are guides to the eye.
order to be ferromagnetic with an easy-plane anisotropy. In fact, we find the in-plane (\(ab\)) spin directionality to be fully degenerate (up to 2 \(\mu\)eV), in agreement with the negligible sixth-order \(ab\)-plane anisotropy found experimentally in this study. Our calculated magnetocrystalline anisotropy energy, obtained by comparing the energies of the in-plane and out-of-plane spin axes, is 0.406 meV/f.u. (\(1.16\times 10^{6}J/m^{3}\)), fully consistent with our experiment. Note that this value is about 20 % lower and higher than the theoretical estimates reported by Sales \(et\,al.\) [29] and Fayyazi \(et\,al.\) [30], respectively. In Fig. 1(f) we also plot the variation of the calculated orbital moment on Fe as the spin easy axis is varied in the (\(ac\)) direction. For the ground-state spin direction (fully in-plane), we find no out-of-plane contribution to either the spin or the orbital moment. However, as the spin axis is rotated towards the out-of-plane direction, we find a gradual increase in the out-of-plane orbital moment, compensated by a decrease in the in-plane orbital moment.
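As a simple cross-check of the numbers quoted above (our own back-of-envelope sketch), the per-formula-unit MAE converts to the volumetric anisotropy constant using the cell data of Table 1:

```python
# MAE unit conversion meV/f.u. -> J/m^3, with V and Z taken from Table 1
e_mev_fu = 0.406                 # calculated MAE, meV per formula unit
V_cell = 112.215e-30             # unit-cell volume, m^3
Z = 2                            # formula units per cell
eV = 1.602176634e-19             # J

K = e_mev_fu * 1e-3 * eV / (V_cell / Z)
print(f"{K:.3g} J/m^3")          # ~1.16e6 J/m^3, as quoted in the text

# Saturation-moment consistency: 2.27 mu_B/Fe x 3 Fe/f.u. ~ 6.8 mu_B/f.u.
print(2.27 * 3)                  # 6.81
```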
We now discuss our results using ELK. For systems containing \(3d\)-electrons, the relatively weak SOC can be treated in a Russell-Saunders (\(LS\))-coupling scheme, with well defined spin and orbital quantum numbers. In this case it is possible to define a quantization direction (the direction of magnetisation). In practice we have considered the quantization axis (spin moment \(\vec{S}\)) aligned along the \(a\)- and \(c\)-crystallographic directions and performed DFT self-consistent computations of the components of the spin and orbital moments. The values of the Fe spin/orbital moments are about 2.32/0.07 \(\mu_{B}\) for the considered orientations. We have noticed, however, that in the self-consistent computation with the moment along the crystallographic \(a\)-direction the overall (unit cell) moment is slightly increased. The DFT self-consistent configurations for the spin moments of the Fe and Sn atoms oriented along the \(c\)/\(a\)-directions are shown in Figs. 1(c)/(d), respectively. In the ferromagnetic ground state the induced spin moment on the Sn sites is about 0.12 \(\mu_{B}\). Orbital moments on Sn, although present, have no significant magnitude (\(\leq 10^{-3}\)\(\mu_{B}\)). The total magnetic moment per formula unit, as obtained from DFT, is 6.9 \(\mu_{B}\)/f.u., in good agreement with the experimental value of 6.8 \(\mu_{B}\)/f.u.
The DFT moments (on all atoms) computed for the configuration aligned along the \(a\)-direction are somewhat smaller; in particular, the decrease in the magnitude of the Sn spin moments (pointing opposite to the Fe spins) is larger than the corresponding decrease for the Fe atoms. This causes the slightly larger total (unit-cell) moment along the \(a\)-direction. Thus, the anisotropy of the moments results from a subtle balance between the spin and orbital moments on the different sites of the unit cell.
Let us comment further on the distinction between the anisotropy of the magnetic moments and the macroscopic anisotropy energies. In systems (containing heavier elements) with lower than cubic symmetry, the magnetic moment per atom, or the corresponding \(g\)-tensor, is expected to be anisotropic, and the spin-orbit interaction may be important even in the absence of long-range order. Below the ordering temperature, the cooperative alignment of the moments produces a net magnetisation. The energy cost to rotate the magnetisation from the easy axis (lowest energy) into the hard direction is higher at lower temperatures. In the framework of second-order perturbation theory (at zero temperature), the anisotropy energy is proportional to the anisotropy of the orbital magnetic moment. Note also that for Fe\({}_{3}\)Sn, in the \(c\)-direction configuration, the spin and orbital moments align collinearly. At the same time, we found that the orbital moments rotate slightly in the \(ab\) plane away from the initial \(a\) direction (an angular departure of about 7 degrees). Although the bulk measurements presented here are not able to capture such small changes in orientation and magnitude, recent progress in XMCD allows for a precise determination of spin and orbital moments. Through the XMCD sum rules, the magnitude of the orbital moment and its anisotropy can also be measured; note that it is the relative change in the orbital moment magnitude and direction that matters, not the absolute number, which is obviously small.
### Imaging and simulations of magnetic vortices
To move beyond the investigation of bulk properties, we use MFM to image the local magnetic textures on an as-grown \(ab\) plane. A contact-mode topography image is given in Fig. 5(a) to show that the surface is smooth enough for the magnetic signal to be collected in a single-pass, constant-height mode. While generally flat, there is a change in topography in the top-left of the image and a few point defects which need to be considered when interpreting the MFM data. The magnetic microstructure of the sample is represented by the relative shift of the cantilever's resonance frequency, at a lift height of 150 nm. Figure 5(b) shows a representative MFM image of the virgin zero-field magnetic state. Strikingly, this evidences the formation of smooth vortices across the sample. Such vortices are characteristic of systems where the magnetisation is confined to a nearly isotropic plane, either via easy-plane anisotropy or via shape anisotropy, the latter being a consequence of the dipole-dipole interaction in thin samples.
Following the procedure used for the bulk measurements, we collect MFM images in a 3 T magnetic field, perpendicular to the sample surface. As expected from the bulk measurements (cf. Fig. 2), Figure 5(c) shows a nearly mono-domain, saturated state. The remaining contrast predominantly originates at features that correlate with the topography changes seen in Fig. 5(a), suggesting that they are not purely of magnetic origin. The field is then removed and a subsequent scan, Fig. 5(d), shows that the vortices have reformed, but in different positions and with different geometries. The different positions of the vortices show that the vortex microstructure is not pinned. The more angular geometry of some vortex cores, see the highlighted square in Figure 5(d), is associated with larger stray fields; this likely indicates that the microstructure, at 100 K, has not fully relaxed on the measurement time scale of tens of minutes.
In order to directly connect our images of the vortex-antivortex microstructure to our bulk measurements, we use micromagnetic simulation. The bulk data are used as input parameters to compute the expected orientation of the local magnetic moments, as shown in Fig. 5(e). Here, the small arrows represent the nanoscale orientation of the magnetisation vector, and the rainbow false colors show areas of the same orientation. The pattern is typical for an easy-plane magnet, with sets of vortex-antivortex pairs. Figure 5(f) shows one such example, with a continuous in-plane rotation of the magnetisation.
The connection between the local moments and the stray fields that our MFM senses can be counterintuitive, and so we use MuMax3 to calculate the expected MFM response of the microstructure shown in Fig. 5(e); the result is presented in Fig. 5(g). This is possible once the tip material and lift height are given, as the force on the MFM tip goes as the gradient of the stray field, which is known via the preceding magnetisation calculations. The calculated MFM signal is in excellent qualitative agreement with the observed signal of the zero-field state, e.g. Fig. 5(b) and (d), showing a series of vortex points connected by domain walls. The simulations also show that the contrast at the walls depends on the exact nature of the magnetisation reorientation: some walls have very sharp bright-dark features, while others have smoother transitions, again as observed in the MFM data. Note that observing in-plane closure domains with an MFM is long established for thin films, see for instance Ref. [63].
## IV Conclusions
In conclusion, we have grown high-quality bulk single crystals of the single kagome bilayer ferromagnet Fe\({}_{3}\)Sn and performed magnetisation and magnetic-force-microscopy studies. The studies revealed a strong uniaxial easy-plane anisotropy characterized by the first-order anisotropy constant \(K_{1}=-0.99\times 10^{6}J/m^{3}\) at \(300\,\mathrm{K}\) and \(-1.23\times 10^{6}J/m^{3}\) at \(2\,\mathrm{K}\). The three independent methods applied for the calculation of the anisotropy constants in our work provide values that are in good agreement with each other, as well as with the anisotropy energy obtained from our _ab initio_ study. Moreover, the angular-dependent measurements allowed us to evidence the anisotropy of the saturation magnetisation along the \(a\) and the \(c\) axes, which we ascribe to orbital contributions. Our DFT calculations predict an induced spin moment on the Sn sites of a magnitude similar to the orbital moment of Fe, the two pointing opposite to each other. All-electron DFT calculations suggest that the orbital magnetic
Figure 5: (a) Topographic image collected in contact mode on an as-grown Fe\({}_{3}\)Sn \(ab\) surface, also imaged in the MFM scans in panels (b), (c), and (d). (b)-(d) MFM images, recorded at \(100\,\mathrm{K}\), showing the shift in resonance frequency of the cantilever: Light/dark colors show an increase/decrease in the resonance frequency. (b) Extended domain walls meeting at vortex cores in the zero-field state. (c) Uniform MFM signal, observed except in the vicinity of topographic features, indicate uniform magnetisation in \(3\,\mathrm{T}\). (d) The zero-field state after field treatment hosts a high density of vortices, some of which are much more angular in structure. The difference between the fully relaxed vortex structure (b) and the metastable structure shortly after the removal of a field (d) is highlighted by the dashed circle and square, respectively. (e) Micromagnetic simulations of the local orientation of the magnetisation (gray arrows), viewed down the \(c\) axis. The false color code (bottom right) indicates the in-plane orientation of the magnetisation vector. (f) Magnified view of one of the closure structures (vortices) in (e). (g) MFM response calculated for the domain pattern in panel (e).
moments of Fe tilt within the easy plane with respect to the main crystallographic directions; thus a slight departure from the collinear magnetic configuration is obtained. Although the current experimental analysis of the bulk magnetisation data cannot distinguish between collinear and non-collinear orbital configurations within the \(ab\)-plane, neutron diffraction measurements might confirm the existence and the magnitude of the induced spin moment on the Sn sites. Moreover, XMCD studies have the potential to reveal the predicted non-collinearity of the orbital moments.
Our MFM study further confirms that the system is an easy-plane ferromagnet hosting a rich microstructure of the magnetisation pattern, dominated by (anti)vortices. The experimentally observed MFM contrast is well reproduced by the micromagnetic simulations, using the experimentally determined values of the magnetic interaction parameters.
Overall, our bulk and microscopic experimental studies, in combination with _ab initio_ and micromagnetic calculations, provide a multi-scale, highly reliable approach to quantify magnetic interactions, e.g. magnetic anisotropy and spin arrangement, and to reveal their origins in the kagome bilayer model system Fe\({}_{3}\)Sn. This is an important step towards understanding the complex magnetism emerging in kagome magnets in the presence of remarkable spin-orbit effects.
###### Acknowledgements.
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through the Transregional Research Collaboration TRR 80 (Augsburg, Munich, and Stuttgart), and by the project ANCD 20.80009.5007.19 (Moldova). DME wishes to thank and acknowledge funding by a DFG individual fellowship, number EV 305/1-1. SMG acknowledges funding by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 (Nonequilibrium magnetic materials program MSMAG). Computational resources were provided by the National Energy Research Scientific Computing Center and the Molecular Foundry, DOE Office of Science User Facilities supported under the same contract.
|
2304.11351 | Ihara's Lemma for $\mathrm{GL}_n$: the limit case | Clozel, Harris and Taylor proposed conjectural generalizations of the
classical Ihara's lemma for $\mathrm{GL}_2$, to higher dimensional similitude
groups. We prove these conjectures in the so called limit case, which after
base change is the essential one, under any hypothesis allowing level raising
as for example in the work of Gee. | Pascal Boyer | 2023-04-22T09:10:57Z | http://arxiv.org/abs/2304.11351v4 | # Ihara's Lemma in higher dimension: the limit case
###### Abstract
Clozel, Harris and Taylor proposed in [1] conjectural generalizations of the classical Ihara's lemma for \(\mathrm{GL}_{2}\) to higher dimensional similitude groups. We prove these conjectures in the so-called _limit case_ under some mild hypotheses coming from a level raising theorem of Gee [1].
###### Contents
* 1 Introduction
* 1.1 Ihara's original Lemma: origin and proofs
* 1.2 Generalisations of Ihara's Lemma
* 1.3 Main result
* 2 Preliminaries
* 2.1 Representations of \(\mathrm{GL}_{d}(L)\)
* 2.2 Weil-Deligne inertial types
* 2.3 Kottwitz-Harris-Taylor Shimura varieties
* 2.4 Filtrations of stratification
* 3 Genericity for KHT-Shimura varieties
* 3.1 Level raising
* 3.2 Local and global monodromy
* 3.3 Typicality and monodromy
## 1 Introduction
### Ihara's original Lemma: origin and proofs
In the Taylor-Wiles method, Ihara's lemma is the key ingredient to extend an \(R=T\) property from the minimal case to a non-minimal one. It is usually formulated as the injectivity of a certain map, as follows.
Let \(\Gamma=\Gamma_{0}(N)\) be the usual congruence subgroup of \(SL_{2}(\mathbb{Z})\) for some \(N>1\), and for a prime \(p\) not dividing \(N\) let \(\Gamma^{\prime}:=\Gamma\cap\Gamma_{0}(p)\). We then have two degeneracy maps
\[\pi_{1},\pi_{2}:X_{\Gamma^{\prime}}\longrightarrow X_{\Gamma}\]
between the compactified modular curves of levels \(\Gamma^{\prime}\) and \(\Gamma\) respectively, induced by the inclusion
\[\Gamma^{\prime}\hookrightarrow\Gamma\text{ and }\left(\begin{array}{cc}p&0 \\ 0&1\end{array}\right)\Gamma^{\prime}\left(\begin{array}{cc}p&0\\ 0&1\end{array}\right)^{-1}\hookrightarrow\Gamma.\]
For \(l\neq p\), we then have a map
\[\pi^{*}:=\pi_{1}^{*}+\pi_{2}^{*}:H^{1}(X_{\Gamma},\mathbb{F}_{l})^{2} \longrightarrow H^{1}(X_{\Gamma^{\prime}},\mathbb{F}_{l}).\]
**Theorem 1.1.1**.: _Let \(\mathfrak{m}\) be a maximal ideal of the Hecke algebra acting on these cohomology groups which is non Eisenstein, i.e. that corresponds to an irreducible Galois representation. Then after localizing at \(\mathfrak{m}\), the map \(\pi^{*}\) is injective._
Diamond and Taylor in [10] proved an analogue of Ihara's lemma for Shimura curves over \(\mathbb{Q}\). For a general totally real number field \(F\) with ring of integers \(\mathcal{O}_{F}\), Manning and Shotton in [MS] succeeded in proving it under some large image hypothesis. Their strategy is entirely different from that of [10] and roughly consists of:
* carrying Ihara's lemma for a compact Shimura curve \(Y_{\bar{K}}\) associated to a definite quaternion algebra \(\overline{D}\) ramified at some auxiliary place \(v\) of \(F\), in level \(\bar{K}=\bar{K}^{v}\bar{K}_{v}\), an open compact subgroup of \((\overline{D}\otimes\mathbb{A}_{F,f})^{\times}\) unramified at \(v\),
* over to the indefinite situation \(X_{K}\), relative to a quaternion division algebra \(D\) ramified at all but one infinite place of \(F\), isomorphic to \(\bar{D}\) at all finite places of \(F\) different from \(v\), and with level \(K\) agreeing with \(\bar{K}^{v}\) away from \(v\).
Indeed in the definite case Ihara's statement is formulated by the injectivity of
\[\pi^{*}=\pi_{1}^{*}+\pi_{2}^{*}:H^{0}(Y_{\bar{K}},\mathbb{F}_{l})_{\mathfrak{m} }\oplus H^{0}(Y_{\bar{K}},\mathbb{F}_{l})_{\mathfrak{m}}\longrightarrow H^{0} (Y_{\bar{K}_{0}(w)},\mathbb{F}_{l})_{\mathfrak{m}}\]
where both \(\overline{D}\) and \(\bar{K}\) are unramified at the place \(w\), and \(\bar{K}_{0}(w)_{w}\) is the subgroup of \(\operatorname{GL}_{2}(\mathcal{O}_{F_{w}})\) of elements which are upper triangular modulo \(\varpi_{w}\).
The proof goes like this, cf. [MS] theorem 6.8. Suppose \((f,g)\in\ker\pi^{*}\). Regarding \(f\) and \(g\) as \(\bar{K}^{v}\)-invariant functions on \(\overline{G}(F)\backslash\overline{G}(\mathbb{A}_{F,f})\), we then have \(f(x)=-g(x\omega)\) where \(\omega=\left(\begin{array}{cc}\varpi_{w}&0\\ 0&1\end{array}\right)\), \(\varpi_{w}\) being a uniformizer of \(F_{w}\) and \(\overline{G}\) being the algebraic group over \(\mathcal{O}_{F}\) associated to \(\mathcal{O}_{\overline{D}}^{\times}\), the unit group of the maximal order \(\mathcal{O}_{\overline{D}}\) of \(\overline{D}\); note that \(\overline{G}(F_{w})\cong\mathrm{GL}_{2}(F_{w})\). Then \(f\) is invariant under \(\mathrm{GL}_{2}(\mathcal{O}_{F_{w}})\) and \(\omega^{-1}\mathrm{GL}_{2}(\mathcal{O}_{F_{w}})\omega\), so that, using the strong approximation theorem for the subgroup of \(\overline{G}\) of elements of reduced norm \(1\), \(f\) factors through the reduced norm map and is therefore supported on Eisenstein maximal ideals.
The link between \(X_{K}\) and \(Y_{\bar{K}^{v}}\) is given by the geometry of the integral model of the Shimura curve \(X_{K_{0}(v)}\) with \(\Gamma_{0}(v)\)-level structure. The main new ingredient of [MS] to carry this geometric link over to Ihara's lemma goes through the patching technology, which allows one to obtain maximal Cohen-Macaulay modules over deformation rings. Using a flatness property and Nakayama's lemma, they are then able to extend a surjectivity property, dual to the injectivity in Ihara's lemma, from the maximal unipotent locus of the deformation space to the whole space, and to recover Ihara's statement by reducing modulo the maximal ideal of the deformation ring.
Recently, Caraiani and Tamiozzo, following [MS] closely, also obtained Ihara's lemma for Hilbert varieties, essentially because the Galois deformation rings are the same and hence regular, which is not the case beyond the \(\mathrm{GL}_{2}\) setting.
### 1.2 Generalisations of Ihara's Lemma
To generalize the classical Ihara's lemma in higher dimension, there are essentially two approaches.
- The first natural one, developed by Clozel, Harris and Taylor in their first proof of the Sato-Tate theorem [10], focuses on the \(H^{0}\), with coefficients in \(\mathbb{F}_{l}\), of a zero dimensional Shimura variety associated to higher dimensional definite division algebras. More precisely, consider a totally real field \(F^{+}\) and an imaginary quadratic extension \(E/\mathbb{Q}\), and define \(F=F^{+}E\). We then consider \(\overline{G}/\mathbb{Q}\) a unitary group with \(\overline{G}(\mathbb{Q})\) compact, so that \(\overline{G}\) becomes an inner form of \(\mathrm{GL}_{d}\) over \(F\). This means, cf. SS2.3, that we have fixed a division algebra \(\overline{B}\) with center \(F\), of dimension \(d^{2}\), provided with an involution of the second kind whose restriction to \(F\) is the complex conjugation. We moreover suppose that at every place \(w\) of \(F\), \(\overline{B}_{w}\) is either split or a local division algebra.
Let \(v\) be a place of \(F\) above a prime number \(p\) split in \(E\) and such that \(\overline{B}_{v}^{\times}\cong\mathrm{GL}_{d}(F_{v})\) where \(F_{v}\) is the associated local field with ring of integers \(\mathcal{O}_{v}\) and residue field \(\kappa(v)\).
**Notation 1.2.1**.: Let \(q_{v}\) be the order of the residue field \(\kappa(v)\).
Consider then an open compact subgroup \(\overline{K}^{v}\), of infinite level at \(v\) in the following sense: \(\overline{G}(\mathbb{Q}_{p})\cong\mathbb{Q}_{p}^{\times}\times\prod_{v_{i}^{+}}\overline{B}_{v_{i}^{+}}^{\times}\) where \(p=\prod_{i}v_{i}^{+}\) in \(F^{+}\) and we identify places of \(F^{+}\) over \(p=uu^{c}\in E\) with places of \(F\) over \(u\). We then ask \(\overline{K}_{p}^{v}=\mathbb{Z}_{p}^{\times}\times\prod_{w|u}\overline{K}_{w}\) to be such that \(\overline{K}_{v}\) is reduced to the identity element.
The associated Shimura variety in level \(\overline{K}=\overline{K}^{v}\overline{K}_{v}\) for some finite level \(\overline{K}_{v}\) at \(v\), denoted by \(\overline{\mathrm{Sh}_{\overline{K}}}\), is then such that its \(\mathbb{C}\)-points are \(\overline{G}(\mathbb{Q})\backslash\overline{G}(\mathbb{A}_{\mathbb{Q}}^{ \infty})/\overline{K}\) and
for \(l\) a prime number such that \(v\nmid l\), its \(H^{0}\) with coefficients in \(\overline{\mathbb{F}}_{l}\) is then identified with the space
\[S_{\overline{G}}(\overline{K},\overline{\mathbb{F}}_{l})=\{f:\overline{G}( \mathbb{Q})\backslash\overline{G}(\mathbb{A}_{\mathbb{Q}}^{\infty})/\overline{K }\longrightarrow\overline{\mathbb{F}}_{l}\text{ locally constant}\}.\]
It is naturally an admissible smooth representation of \(\operatorname{GL}_{d}(F_{v})\) and of the Hecke algebra \(\mathbb{T}(\overline{K}^{v})\) defined as the image of the abstract unramified Hecke algebra, cf. definition 2.4.1, inside \(\operatorname{End}(S_{\overline{G}}(\overline{K}^{v},\overline{\mathbb{F}}_{l }))\).
To a maximal ideal \(\mathfrak{m}\) of \(\mathbb{T}(\overline{K}^{v})\) is associated a Galois \(\overline{\mathbb{F}}_{l}\)-representation \(\overline{\rho}_{\mathfrak{m}}\), cf. SS3.1. We consider the case where this representation is irreducible. Note in particular that such an \(\mathfrak{m}\) is then not pseudo-Eisenstein in the usual terminology.
**Conjecture 1.2.2**.: _(cf. conjecture B in [1]) Any irreducible \(\operatorname{GL}_{d}(F_{v})\)-submodule of \(S_{\overline{G}}(\overline{K}^{v},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\) is generic._
- For rank \(2\) unitary groups, we recover the previous statement, as the characters are exactly those representations which do not have a Whittaker model, i.e. the non-generic ones.
- For \(d\geq 2\), over \(\overline{\mathbb{Q}}_{l}\), the generic representations of \(\operatorname{GL}_{d}(F_{v})\) are the irreducible parabolically induced representations \(\operatorname{st}_{t_{1}}(\pi_{v,1})\times\cdots\times\operatorname{st}_{t_{r} }(\pi_{v,r})\) where for \(i=1,\cdots,r\),
* \(\pi_{v,i}\) is an irreducible cuspidal representation of \(\operatorname{GL}_{g_{i}}(F_{v})\),
* \(\operatorname{st}_{t_{i}}(\pi_{v,i})\) is a Steinberg representations, cf. definition 2.1.2,
* \(\sum_{i=1}^{r}t_{i}g_{i}=d\), where the Zelevinsky segments \([\pi_{v,i}\{\frac{1-t_{i}}{2}\},\pi_{v,i}\{\frac{t_{i}-1}{2}\}]\) are not linked in the sense of [21].
- Over \(\overline{\mathbb{F}}_{l}\), every irreducible generic representation is obtained as the unique generic subquotient of the modulo \(l\) reduction of a generic representation. It can also be characterized intrinsically using representations of the mirabolic subgroup, cf. SS2.1.
Here we will be mainly interested in the following weak form of Ihara's lemma, except that we will allow ramification, see the main theorem below.
**Definition 1.2.3**.: (cf. definition of [1] 5.1.9)
An admissible smooth \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module \(M\) is said to have the weak Ihara property if for every \(m\in M^{\operatorname{GL}_{d}(\mathcal{O}_{v})}\) which is an eigenvector of \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(\mathcal{O}_{v})\backslash \operatorname{GL}_{d}(F_{v})/\operatorname{GL}_{d}(\mathcal{O}_{v})]\), every irreducible submodule of the \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module generated by \(m\), is generic.
_Remark_.: In particular, if we ask that \(S_{\overline{G}}(\overline{K}^{v},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\) have the weak Ihara property, then \(S_{\overline{G}}(\overline{K}^{v},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\) should have nontrivial unramified vectors, so that the supercuspidal support of the restriction \(\overline{\rho}_{\mathfrak{m},v}\) of \(\overline{\rho}_{\mathfrak{m}}\) to the decomposition subgroup at \(v\) is made of unramified characters.
- The second approach is to find a map playing the same role as \(\pi^{*}=\pi_{1}^{*}+\pi_{2}^{*}\). It is explained in section 5.1 of [1] with the help of the element
\[\theta_{v}\in\mathbb{Z}_{l}[K_{1}(v^{n})\backslash\operatorname{GL}_{d}(F_{v}) /\operatorname{GL}_{d}(\mathcal{O}_{F_{v}})]\]
constructed by Russ Mann, cf. proposition 5.1.7 of [10], where \(F_{v}\) is here a finite extension of \(\mathbb{Q}_{p}\) with ring of integers \(\mathcal{O}_{v}\).
**Definition 1.2.4**.: An admissible smooth \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module \(M\) is said to have the almost Ihara property if \(\theta_{v}:M^{\operatorname{GL}_{d}(\mathcal{O}_{v})}\longrightarrow M\) is injective.
Recall that \(l\) is called quasi-banal for \(\operatorname{GL}_{d}(F_{v})\) if either \(l\nmid\sharp\operatorname{GL}_{d}(\kappa_{v})\) (the banal case) or \(l>d\) and \(q_{v}\equiv 1\mod l\) (the limit case).
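Both branches of this definition are easy to test numerically; the following small sketch (ours, for illustration) checks quasi-banality for given \(l\), \(q_{v}\), \(d\), using \(\sharp\operatorname{GL}_{d}(\mathbb{F}_{q})=q^{d(d-1)/2}\prod_{i=1}^{d}(q^{i}-1)\).

```python
def order_GL(d, q):
    """Order of GL_d(F_q): q^(d(d-1)/2) * prod_{i=1}^d (q^i - 1)."""
    n = q ** (d * (d - 1) // 2)
    for i in range(1, d + 1):
        n *= q ** i - 1
    return n

def is_quasi_banal(l, q, d):
    banal = order_GL(d, q) % l != 0       # banal case: l does not divide #GL_d(F_q)
    limit = l > d and q % l == 1          # limit case considered in this paper
    return banal or limit

print(is_quasi_banal(l=5, q=11, d=3))   # True: limit case (11 = 1 mod 5, 5 > 3)
print(is_quasi_banal(l=3, q=4, d=3))    # False: 3 | #GL_3(F_4) and l > d fails
```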
**Proposition 1.2.5**.: _(cf. [10] lemma 5.1.10) Suppose that \(l\) is quasi-banal and \(M\) is a \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module verifying the Ihara property. If \(\ker(\theta_{v}:M^{\operatorname{GL}_{d}(\mathcal{O}_{v})}\longrightarrow M)\) is a \(\mathbb{F}_{l}[\operatorname{GL}_{d}(\mathcal{O}_{F_{v}})\setminus \operatorname{GL}_{d}(F_{v})/\operatorname{GL}_{d}(\mathcal{O}_{F_{v}})]\)-module, then \(M\) has the almost Ihara property._
**Applications**: the generalizations of the classical Ihara's lemma were introduced in [10] to prove a non-minimal \(R=\mathbb{T}\) theorem. The weaker statement \(R^{red}=\mathbb{T}\), where \(R^{red}\) is the reduced quotient of \(R\), was later obtained unconditionally using Taylor's _Ihara avoidance_ method, cf. [11], which was enough to prove the Sato-Tate conjecture. However, the full \(R=\mathbb{T}\) theorem would have applications to special values of the adjoint \(L\)-function and would imply that \(R\) is a complete intersection. It should also be useful for generalizing the local-global compatibility results of [Eme].
In [14], the author also proved that the Ihara property in the quasi-banal case is equivalent to the following result.
**Proposition 1.2.6**.: _(cf. [14] corollary 9.5) Let \(\mathfrak{m}\) be a non-Eisenstein maximal ideal of \(\mathbb{T}^{S}\) and \(f\in S_{\overline{G}}(\overline{K}^{v}\operatorname{GL}_{d}(\mathcal{O}_{v}), \overline{\mathbb{F}}_{l})\). Let \(\operatorname{Iw}_{v}\) be the Iwahori subgroup of \(\operatorname{GL}_{d}(\mathcal{O}_{v})\), then the \(\overline{\mathbb{F}}_{l}[\operatorname{Iw}_{v}\setminus\operatorname{GL}_{d }(F_{v})/\operatorname{GL}_{d}(\mathcal{O}_{v})]\)-submodule of \(S_{\overline{G}}(\overline{K}^{v}\operatorname{Iw}_{v},\overline{\mathbb{F}} _{l})\) generated by \(f\) is of dimension \(d!\)._
### 1.3 Main result
With the previous notations, let \(q_{v}\) be the order of the residue field of \(F_{v}\). We fix a prime number \(l\) unramified in \(F^{+}\) and split in \(E\), and we place ourselves in the limit case where \(q_{v}\equiv 1\mod l\) with \(l>d\), which is, at least after base change, the crucial case to consider.
**Definition 1.3.1**.: As in definition 2.5.1 of [10], we say that a subgroup \(H\subseteq\operatorname{GL}_{d}(\overline{\mathbb{F}}_{l})\) is big if:
* \(H\) has no \(l\)-power order quotients;
* \(H^{i}(H,\mathfrak{g}_{d}^{0}(\overline{\mathbb{F}}_{l}))=(0)\) for \(i=0,1\) and where \(\mathfrak{g}_{d}:=\operatorname{Lie}\operatorname{GL}_{d}\) and \(\mathfrak{g}_{d}^{0}\) is the trace zero subspace of \(\mathfrak{g}_{d}\);
* for all irreducible \(\overline{\mathbb{F}}_{l}[H]\)-submodules \(W\) of \(\mathfrak{g}_{d}(\overline{\mathbb{F}}_{l})\), we can find \(h\in H\) and \(\alpha\in\overline{\mathbb{F}}_{l}\) satisfying the following properties.
* The \(\alpha\)-generalized eigenspace \(V(h,\alpha)\) of \(h\) on \(\overline{\mathbb{F}}_{l}^{d}\) is one dimensional.
Let \(\pi_{h,\alpha}:\overline{\mathbb{F}}_{l}^{d}\twoheadrightarrow V(h,\alpha)\) be the \(h\)-equivariant projection of \(\overline{\mathbb{F}}_{l}^{d}\) to \(V(h,\alpha)\) and let \(i_{h,\alpha}:V(h,\alpha)\hookrightarrow\overline{\mathbb{F}}_{l}^{d}\) be the \(h\)-equivariant injection of \(V(h,\alpha)\) into \(\overline{\mathbb{F}}_{l}^{d}\). Then \(\pi_{h,\alpha}\circ W\circ i_{h,\alpha}\neq(0)\).
**Theorem 1.3.2**.: _In the limit case, suppose moreover that there exists a prime \(p_{0}=u_{0}\bar{u}_{0}\) split in \(E\) with a place \(v_{0}|u_{0}\) of \(F\) such that \(\overline{B}_{v_{0}}\) is a division algebra. Consider \(\mathfrak{m}\) such that_
\[\overline{\rho}_{\mathfrak{m}}:G_{F}\longrightarrow\operatorname{GL}_{n}( \overline{\mathbb{F}}_{l})\]
_is a strongly irreducible representation which is unramified at all places of \(F\) lying above primes which do not split in \(E\), and which satisfies the following hypotheses:_
* _after semi-simplification_ \(\overline{\rho}_{\mathfrak{m},v}\) _is a direct sum of characters;_
* \(\overline{F}^{\ker\operatorname{ad}\overline{\rho}_{\mathfrak{m}}}\) _does not contain_ \(F(\zeta_{l})\)_, where_ \(\zeta_{l}\) _is any primitive_ \(l\)_-th root of_ \(1\)_;_
* \(\overline{\rho}_{\mathfrak{m}}(G_{F^{+}(\zeta_{l})})\) _is big._
_Then Ihara's lemma of the conjecture 1.2.2 is true, i.e. every irreducible \(\operatorname{GL}_{d}(F_{v})\)-submodule of \(S_{\overline{G}}(\overline{K}^{v},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\) is generic._
_Remark_.: The first hypothesis, if one moreover supposes that the characters are unramified, corresponds to the weak form of Ihara's lemma of definition 1.2.3. The last two come from theorem 5.1.5 of [10], which is a level raising statement, cf. theorem 3.1.2. Any other similar statement, for example theorem 4.4.1 of [1], with different hypotheses, can then be used to formulate another result about Ihara's lemma. The _strong_ irreducibility is needed to prove the semi-simplicity of the cohomology of the KHT Shimura variety in [16]; note however that if one believes Tate's conjecture, the semi-simplicity should hold in general.
In [1] we essentially proved conjecture 1.2.2 in the banal case under some restrictive hypotheses. The idea was mainly to transfer the property about irreducible subspaces to a similar one for the middle degree cohomology group of some KHT Shimura variety \(\operatorname{Sh}_{K^{v}(\infty)}\) associated to some similitude group \(G/\mathbb{Q}\) such that \(G(\mathbb{A}_{\mathbb{Q}}^{\infty,p})\cong\overline{G}(\mathbb{A}_{\mathbb{Q}}^{\infty,p})\), cf. SS2.3 for more details, and with level \(K^{v}(\infty):=\overline{K}^{v}\), meaning finite level outside \(v\) and infinite level at \(v\).
The localization at \(\mathfrak{m}\) of the cohomology groups of \(\operatorname{Sh}_{K^{v}(\infty)}\) can be computed as the cohomology of the geometric special fiber \(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}}\) of \(\operatorname{Sh}_{K^{v}(\infty)}\), with coefficient in the complex of nearby cycles \(\Psi_{K^{v}(\infty),v}\).
The Newton stratification of \(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}}\) gives us a filtration of \(\Psi_{K^{v}(\infty),v}\), cf. [1], and so a filtration of \(H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}_{v}},\overline{\mathbb{ Z}}_{l})_{\mathfrak{m}}\) and the main point of [1] is to prove that each graded part of this filtration verifies the Ihara property, i.e. each of their irreducible sub-space are generic. To realize this strategy we need first the cohomology groups of \(\operatorname{Sh}_{K^{v}(\infty)}\) to be torsion free: this point is now essentially settled by the main result of [1]. More crucially the previous filtration should be strict, i.e. its graded parts have to be torsion free, cf. theorem 2.4.3.
It appears that these graded parts are parabolically induced and, in the limit case where the order \(q_{v}\) of the residue field satisfies \(q_{v}\equiv 1\mod l\), the socles of these parabolically induced representations are no longer irreducible and do not fulfill the Ihara property. It then appears that we have:
* first, to verify that the first non-trivial graded part of our filtration satisfies the genericity property for its irreducible submodules. For this we need a level raising statement such as theorem 5.1.5 in [11], cf. theorem 3.1.2, or theorem 4.4.1 of [1].
* then, to show that the extensions between the graded parts of our filtration are non-split.
The idea here is to work with the Galois representation \(\rho_{\mathfrak{m}}\) with values in the localized Hecke algebra \(\mathbb{T}(K^{v}(\infty))_{\mathfrak{m}}\), and more precisely with \(\rho_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\), which should be as far as possible from being semi-simple, i.e. the graded parts of its socle filtration should be irreducible. Over \(\overline{\mathbb{Q}}_{l}\), \(\rho_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\) is equipped with the action of a nilpotent monodromy operator \(N_{v}\) coming from a monodromy operator acting on \(\Psi_{K^{v}(\infty),v}\), constructed from the fact that any finite dimensional \(\overline{\mathbb{Q}}_{l}\)-representation of the inertia group is quasi-unipotent.
Over \(\overline{\mathbb{F}}_{l}\), the usual arithmetic approach for defining the nilpotent monodromy operator is hopeless because, up to passing to a finite extension of \(F_{v}\), such an \(\overline{\mathbb{F}}_{l}\)-representation has a trivial action of the inertia group. We are then looking for a geometric definition of \(N_{v}\), which then exists whatever the coefficients, \(\overline{\mathbb{Q}}_{l}\), \(\overline{\mathbb{Z}}_{l}\) or \(\overline{\mathbb{F}}_{l}\), compatibly with tensor products. One classical construction is known in the semi-stable reduction case, cf. [12] SS3, which corresponds to the case where the level at \(v\) of our Shimura variety is of Iwahori type: this implies that we deal with automorphic representations \(\Pi\) such that the cuspidal support of \(\Pi_{v}\) is made of unramified characters, and so with the weak form of Ihara's lemma of definition 1.2.3. Using our knowledge of the \(\overline{\mathbb{Z}}_{l}\)-nearby cycles, described completely in [10], it is quite easy to construct such a geometric nilpotent monodromy operator, which generalizes the semi-stable case by allowing ramified characters, cf. SS3.2.
Taking this geometric monodromy operator, we then obtain a cohomological monodromy operator \(N_{v,\mathfrak{m}}^{coho}\) acting on \(H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K^{v}(\infty),v})_{\mathfrak{m}}\) as soon as the irreducible constituents of \(\overline{\rho}_{\mathfrak{m}}\) are characters. One of the main points, cf. theorem 2.4.3, is that the graded parts of the filtration of \(H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K^{v}(\infty),v})_{\mathfrak{m}}\) induced by the Newton filtration on the nearby cycles spectral sequence are all torsion free, so that in particular we are in a position to understand well enough the action of \(\overline{N}_{v,\mathfrak{m}}^{coho}:=N_{v,\mathfrak{m}}^{coho}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\) on \(H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K^{v}(\infty),v})_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\), see the _main observation_ at the end of SS3.2, and prove that its order of nilpotency is as large as possible.
Finally, using the typicality of \(H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K^{v}(\infty),v})_{\mathfrak{m}}\) in the sense of definition 3.3.2, we infer, for every irreducible subquotient \(\varrho\) of \(\overline{\rho}_{\mathfrak{m}}\), a monodromy operator \(N_{\mathfrak{m},\varrho}\) on \(\rho_{\mathfrak{m}}\). We then consider the image of \(N_{\mathfrak{m},\varrho}^{r-1}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\) where \(r\) is the multiplicity of \(\varrho\) in \(\overline{\rho}_{\mathfrak{m}}\):
From the theorem of Gee and the _main observation_, we know both that

* this image is non-zero,
* and that it is associated to \(\varrho\)-generic representations.
Varying \(\varrho\), we infer the genericity of the irreducible quotients of \(H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K^{v}(\infty),v})_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\), and we conclude about its irreducible subspaces through Grothendieck-Verdier duality, as in proposition 3.7 (3) of [16].
## 2 Preliminaries
### 2.1 Representations of \(\operatorname{GL}_{d}(L)\)
Consider a finite extension \(L/\mathbb{Q}_{p}\) with residue field \(\mathbb{F}_{q}\). We denote by \(|-|\) its absolute value. For a representation \(\pi\) of \(\operatorname{GL}_{d}(L)\) and \(n\in\frac{1}{2}\mathbb{Z}\), set
\[\pi\{n\}:=\pi\otimes q^{-n\,\operatorname{val}\circ\det}.\]
**Notation 2.1.1**.: For \(\pi_{1}\) and \(\pi_{2}\) representations of respectively \(\operatorname{GL}_{n_{1}}(L)\) and \(\operatorname{GL}_{n_{2}}(L)\), we will denote by
\[\pi_{1}\times\pi_{2}:=\operatorname{ind}_{P_{n_{1},n_{1}+n_{2}}(L)}^{ \operatorname{GL}_{n_{1}+n_{2}}(L)}\pi_{1}\{\frac{n_{2}}{2}\}\otimes\pi_{2}\{ -\frac{n_{1}}{2}\},\]
the normalized parabolic induced representation where for any sequence \(\underline{r}=(0<r_{1}<r_{2}<\cdots<r_{k}=d)\), we write \(P_{\underline{r}}\) for the standard parabolic subgroup of \(\operatorname{GL}_{d}\) with Levi
\[\operatorname{GL}_{r_{1}}\times\operatorname{GL}_{r_{2}-r_{1}}\times\cdots \times\operatorname{GL}_{r_{k}-r_{k-1}}.\]
Recall that a representation \(\varrho\) of \(\operatorname{GL}_{d}(L)\) is called _cuspidal_ (resp. _supercuspidal_) if it is not a subspace (resp. subquotient) of a proper parabolically induced representation. When the field of coefficients is of characteristic zero, these two notions coincide, but this is no longer true over \(\overline{\mathbb{F}}_{l}\).
**Definition 2.1.2**.: (see [11] SS9 and [12] SS1.4) Let \(g\) be a divisor of \(d=sg\) and \(\pi\) an irreducible cuspidal \(\overline{\mathbb{Q}}_{l}\)-representation of \(\operatorname{GL}_{g}(L)\). The induced representation
\[\pi\{\frac{1-s}{2}\}\times\pi\{\frac{3-s}{2}\}\times\cdots\times\pi\{\frac{s -1}{2}\} \tag{1}\]
admits a unique irreducible quotient (resp. subspace), denoted \(\operatorname{st}_{s}(\pi)\) (resp. \(\operatorname{Speh}_{s}(\pi)\)); it is a generalized Steinberg (resp. Speh) representation. Their cuspidal support is the Zelevinsky segment
\[[\pi\{\frac{1-s}{2}\},\pi\{\frac{s-1}{2}\}]:=\Big{\{}\pi\{\frac{1-s}{2}\},\pi \{\frac{3-s}{2}\},\cdots,\pi\{\frac{s-1}{2}\}\Big{\}}.\]
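For orientation, here is the smallest nontrivial instance of this definition (our illustrative example, using the quotient/subspace convention just stated): take \(d=2\), \(g=1\), \(s=2\) and \(\pi=\mathbf{1}\) the trivial character of \(L^{\times}\). The induced representation (1) then has length two and sits in the exact sequence

\[0\longrightarrow\operatorname{Speh}_{2}(\mathbf{1})=\mathbf{1}\circ\det \longrightarrow\mathbf{1}\{-\tfrac{1}{2}\}\times\mathbf{1}\{\tfrac{1}{2}\} \longrightarrow\operatorname{st}_{2}(\mathbf{1})\longrightarrow 0,\]

where \(\operatorname{st}_{2}(\mathbf{1})\) is the usual Steinberg representation of \(\operatorname{GL}_{2}(L)\) and the common cuspidal support is the segment \([\mathbf{1}\{-\frac{1}{2}\},\mathbf{1}\{\frac{1}{2}\}]\).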
More generally the set of sub-quotients of the induced representation (1) is in bijection with the following set.
\[\operatorname{Dec}(s)=\{(t_{1},\cdots,t_{r}),\text{ such that }t_{i}\geq 1 \text{ and }\sum_{i=1}^{r}t_{i}=s\}.\]
For any \(\underline{s}\in{\rm Dec}(s)\), we then denote by \({\rm st}_{\underline{s}}(\pi)\) the associated irreducible subquotient of (1). Following Zelevinsky, we fix this bijection so that \({\rm Speh}_{s}(\pi)\) corresponds to \((s)\) and \({\rm st}_{s}(\pi)\) to \((1,\cdots,1)\). The Lubin-Tate representation \(LT_{h,s}(\pi)\) will also appear in the following; it corresponds to \((\overbrace{1,\cdots,1}^{h},s-h)\).
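Concretely, \(\operatorname{Dec}(s)\) is the set of compositions of \(s\), of cardinality \(2^{s-1}\); a short enumeration sketch (ours, for illustration):

```python
def Dec(s):
    """All tuples (t_1, ..., t_r) of positive integers with t_1 + ... + t_r = s."""
    if s == 0:
        return [()]
    return [(t,) + rest for t in range(1, s + 1) for rest in Dec(s - t)]

print(Dec(3))        # [(1, 1, 1), (1, 2), (2, 1), (3,)]
print(len(Dec(5)))   # 16 == 2**4, i.e. 2**(s-1) subquotients of (1) for s = 5
```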
**Proposition 2.1.3**.: _(cf. [20] III.5.10) Let \(\pi\) be an irreducible cuspidal representation of \({\rm GL}_{g}(L)\) with a stable \(\overline{\mathbb{Z}}_{l}\)-lattice1, then its modulo \(l\) reduction is irreducible and cuspidal (but not necessarily supercuspidal)._
Footnote 1: We say that \(\pi\) is integral.
We now suppose as explained in the introduction that
\[q\equiv 1\mod l\quad\text{ and }\quad l>d\]
so the following facts are verified:
* the modulo \(l\) reduction of every irreducible cuspidal representation of \({\rm GL}_{g}(L)\) for \(g\leq d\), is supercuspidal.
* For an \(\overline{\mathbb{F}}_{l}\)-irreducible supercuspidal representation \(\varrho\) of \({\rm GL}_{g}(L)\), the parabolically induced representation \(\varrho\times\cdots\times\varrho\), with \(s\) copies of \(\varrho\), is semi-simple with irreducible constituents the modulo \(l\) reductions of \({\rm st}_{\underline{s}}(\pi)\) for \(\underline{s}\in{\rm Dec}(s)\), where \(\pi\) is any cuspidal representation whose modulo \(l\) reduction is isomorphic to \(\varrho\).
Concerning the notion of genericity, consider the mirabolic subgroup \(M_{d}(L)\) of \({\rm GL}_{d}(L)\) as the set of matrices with last row \((0,\cdots,0,1)\): we denote by
\[V_{d}(L)=\{(m_{i,j})\in M_{d}(L):\ m_{i,j}=\delta_{i,j}\text{ for }j<d\}.\]
its unipotent radical. We fix a non trivial character \(\psi\) of \(L\) and let \(\theta\) be the character of \(V_{d}(L)\) defined by \(\theta((m_{i,j}))=\psi(m_{d-1,d})\). For \(G={\rm GL}_{r}(L)\) or \(M_{r}(L)\), we denote by \({\rm alg}(G)\) the abelian category of smooth representations of \(G\) and, following [1], we introduce
\[\Psi^{-}:{\rm alg}(M_{d}(L))\longrightarrow{\rm alg}({\rm GL}_{d-1}(L)),\]
and
\[\Phi^{-}:{\rm alg}(M_{d}(L))\longrightarrow{\rm alg}(M_{d-1}(L)),\]
defined by \(\Psi^{-}=r_{V_{d},1}\) (resp. \(\Phi^{-}=r_{V_{d},\theta}\)) the functor of \(V_{d}\) coinvariants (resp. \((V_{d},\theta)\)-coinvariants), cf. [1]. For \(\tau\in{\rm alg}(M_{d}(L))\), the representation
\[\tau^{(k)}:=\Psi^{-}\circ(\Phi^{-})^{k-1}(\tau)\]
is called the \(k\)-th derivative of \(\tau\). If \(\tau^{(k)}\neq 0\) and \(\tau^{(m)}=0\) for all \(m>k\), then \(\tau^{(k)}\) is called the highest derivative of \(\tau\). In the particular case where \(k=d\), there is a unique irreducible representation \(\tau_{nd}\) of \(M_{d}(L)\) with derivative of order \(d\).
**Definition 2.1.4**.: An irreducible representation \(\pi\) of \(\mathrm{GL}_{d}(L)\) is said generic, if its restriction to the mirabolic subgroup admits \(\tau_{nd}\) as a subquotient.
Let \(\pi\) be an irreducible generic representation of \(\mathrm{GL}_{d}(L)\) and consider any stable lattice, which gives us by modulo \(l\) reduction an \(\overline{\mathbb{F}}_{l}\)-representation uniquely determined up to semi-simplification. Then this modulo \(l\) reduction admits a unique generic irreducible constituent.
### 2.2 Weil-Deligne inertial types
Recall that a Weil-Deligne representation of \(W_{L}\) is a pair \((r,N)\) where
* \(r:W_{L}\longrightarrow\mathrm{GL}(V)\) is a smooth representation (i.e. continuous for the discrete topology on \(V\)) on a finite dimensional vector space \(V\); and
* \(N\in\mathrm{End}(V)\) is nilpotent such that \[r(g)Nr(g)^{-1}=||g||N,\] where \(||\bullet||:W_{L}\longrightarrow W_{L}/I_{L}\to q^{\mathbb{Z}}\) takes an arithmetic Frobenius element to \(q\).
_Remark_.: To a continuous representation (for the \(l\)-adic topology on \(V\)) \(\rho:W_{L}\longrightarrow\mathrm{GL}(V)\) on a finite dimensional \(\mathbb{Q}_{l}\)-vector space \(V\) is attached a Weil-Deligne representation, denoted by \(\mathrm{WD}(\rho)\). A Weil representation of \(W_{L}\) is said to be of Galois type if it comes from a representation of \(G_{L}\).
_Main example_: let \(\rho:W_{L}\longrightarrow\mathrm{GL}(V)\) be a smooth irreducible representation on a finite dimensional vector space \(V\). For \(k\geq 1\) an integer, we then define a Weil-Deligne representation
\[\mathrm{Sp}(\rho,k):=\big{(}V\oplus V(1)\oplus\cdots\oplus V(k-1),N\big{)},\]
where for \(0\leq i\leq k-2\), the isomorphism \(N:V(i)\cong V(i+1)\) is induced by some choice of a basis of \(\overline{L}(1)\) and \(N_{|V(k-1)}\) is zero. Then every Frobenius semi-simple Weil-Deligne representation of \(W_{L}\) is isomorphic to some \(\bigoplus_{i=1}^{r}\mathrm{Sp}(\rho_{i},k_{i})\), for smooth irreducible representations \(\rho_{i}:W_{L}\longrightarrow\mathrm{GL}(V_{i})\) and integers \(k_{i}\geq 1\). Up to obvious reorderings, such a writing is unique.
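To connect this with SS2.1: under the correspondence \(\rho_{\ell}\) normalized below, the generalized Steinberg representations give rise, up to an unramified twist depending on the normalization, precisely to these indecomposable summands; schematically,

\[\rho_{\ell}\big{(}\operatorname{st}_{s}(\pi)\big{)}\simeq\operatorname{Sp}\big{(}\rho_{\ell}(\pi),s\big{)},\]

which we record here only as an orientation for the reader.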
Now let \(\rho\) be a continuous representation of \(W_{L}\), or its Weil-Deligne representation \(\mathrm{WD}(\rho)\), and consider its restriction to \(I_{L}\), \(\tau:=\rho_{|I_{L}}\). Such an isomorphism class of a finite dimensional continuous representation of \(I_{L}\) is called _an inertial type_.
**Notation 2.2.1**.: Let \(\mathcal{I}_{0}\) be the set of inertial types that extend to a continuous irreducible representation of \(G_{L}\).
_Remark_.: \(\tau\in\mathcal{I}_{0}\) might not be irreducible.
Let Part be the set of decreasing sequences of positive integers \(\underline{d}=(\underline{d}(1)\geq\underline{d}(2)\geq\cdots)\) viewed as a partition of \(\sum\underline{d}:=\sum_{i}\underline{d}(i)\).
**Notation 2.2.2**.: Let \(f:\mathcal{I}_{0}\longrightarrow\operatorname{Part}\) with finite support. We then denote by \(\tau_{f}\) the restriction to \(I_{L}\) of
\[\bigoplus_{\tau_{0}\in\mathcal{I}_{0}}\bigoplus_{i}\operatorname{Sp}(\rho_{\tau _{0}},f(\tau_{0})(i)),\]
where \(\rho_{\tau_{0}}\) is a fixed extension of \(\tau_{0}\) to \(W_{L}\).
_Remark_.: By lemma 3.3 of [MS] the isomorphism class of \(\tau_{f}\) is independent of the choices of the \(\rho_{\tau_{0}}\).
The map \(f\mapsto\tau_{f}\), from maps \(f:\mathcal{I}_{0}\longrightarrow\operatorname{Part}\) with finite support to the set of inertial types, is a bijection. The dominance order \(\preceq\) on Part induces a partial order on the set of inertial types.
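The dominance order can be made completely explicit: \(\underline{d}\preceq\underline{e}\) if and only if every partial sum of \(\underline{e}\) is at least the corresponding partial sum of \(\underline{d}\). A small sketch (ours, for illustration):

```python
from itertools import zip_longest

def dominates(e, d):
    """Dominance order on partitions of the same integer:
    e dominates d iff every partial sum of e is >= that of d."""
    assert sum(e) == sum(d)
    se = sd = 0
    for ei, di in zip_longest(e, d, fillvalue=0):
        se, sd = se + ei, sd + di
        if se < sd:
            return False
    return True

print(dominates((3, 1), (2, 2)))   # True : 3 >= 2 and 3+1 >= 2+2
print(dominates((2, 2), (3, 1)))   # False: 2 < 3 already at the first part
```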
We let \(\operatorname{rec}_{L}\) denote the local reciprocity map of [11, Theorem A]. Fix an isomorphism \(\imath\overline{\mathbb{Q}}_{\ell}\stackrel{{\sim}}{{\to}} \mathbb{C}\). We normalize the local reciprocity map \(\operatorname{rec}\) of [11, Theorem A], defined on isomorphism classes of irreducible smooth representations of \(\operatorname{GL}_{n}(L)\) over \(\mathbb{C}\) as follows: if \(\pi\) is the isomorphism class of an irreducible smooth representation of \(\operatorname{GL}_{n}(L)\) over \(\overline{\mathbb{Q}}_{\ell}\), then
\[\rho_{\ell}(\pi)\stackrel{{\text{\tiny def}}}{{=}}\imath^{-1} \circ\operatorname{rec}_{L}\circ\imath(\pi\otimes_{\overline{\mathbb{Q}}_{ \ell}}|\det|^{(1-n)/2}).\]
Then \(\rho_{\ell}(\pi)\) is the isomorphism class of an \(n\)-dimensional, Frobenius semi-simple Weil-Deligne representation of \(W_{L}\) over \(\overline{\mathbb{Q}}_{\ell}\), independent of the choice of \(\imath\). Moreover, if \(\rho\) is an isomorphism class of an \(n\)-dimensional, Frobenius semi-simple Weil-Deligne representation of \(W_{L}\) over \(M\), then \(\rho_{\ell}^{-1}(\rho)\) is defined over \(M\) (cf. [1, §1.8]).
Recall the following compatibility of the Langlands correspondence.
**Lemma 2.2.3**.: _If \(\pi\) and \(\pi^{\prime}\) are irreducible generic representations of \(\operatorname{GL}_{d}(L)\) such that \(\rho_{\ell}(\pi)|I_{L}\cong\rho_{\ell}(\pi^{\prime})|I_{L}\) then_
\[\pi_{|\operatorname{GL}_{d}(\mathcal{O}_{L})}\cong\pi^{\prime}_{|\operatorname {GL}_{d}(\mathcal{O}_{L})}.\]
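_Example_.: A standard instance of this lemma: if \(\pi^{\prime}\cong\pi\otimes(\chi\circ\det)\) for an unramified character \(\chi\) of \(L^{\times}\), then \(\rho_{\ell}(\pi^{\prime})\cong\rho_{\ell}(\pi)\otimes\rho_{\ell}(\chi)\) with \(\rho_{\ell}(\chi)\) unramified, so that \(\rho_{\ell}(\pi)\) and \(\rho_{\ell}(\pi^{\prime})\) have the same restriction to \(I_{L}\); and indeed
\[\pi^{\prime}_{|\operatorname{GL}_{d}(\mathcal{O}_{L})}\cong\pi_{|\operatorname{GL}_{d}(\mathcal{O}_{L})},\]
since \(\chi\circ\det\) is trivial on \(\operatorname{GL}_{d}(\mathcal{O}_{L})\).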
### Kottwitz-Harris-Taylor Shimura varieties
Let \(F=F^{+}E\) be a CM field where \(E/\mathbb{Q}\) is a quadratic imaginary extension and \(F^{+}/\mathbb{Q}\) is totally real. We fix a real embedding \(\tau:F^{+}\hookrightarrow\mathbb{R}\). For a place \(v\) of \(F\), we will denote by
* \(F_{v}\) the completion of \(F\) at \(v\),
* \(\mathcal{O}_{v}\) the ring of integers of \(F_{v}\),
* \(\varpi_{v}\) a uniformizer,
* \(q_{v}\) the cardinality of the residue field \(\kappa(v)=\mathcal{O}_{v}/(\varpi_{v})\).
Let \(B\) be a division algebra with center \(F\), of dimension \(d^{2}\), such that at every place \(v\) of \(F\), \(B_{v}\) is either split or a local division algebra, and suppose \(B\) is provided with an involution of the second kind \(*\) such that \(*_{|F}\) is the complex conjugation. For any \(\beta\in B^{*=-1}\), denote by \(\sharp_{\beta}\) the involution \(b\mapsto b^{\sharp_{\beta}}=\beta b^{*}\beta^{-1}\), and let
\(G/\mathbb{Q}\) be the group of similitudes, denoted by \(G_{\tau}\) in [11], defined for every \(\mathbb{Q}\)-algebra \(R\) by
\[G(R)\cong\{(\lambda,g)\in R^{\times}\times(B^{op}\otimes_{\mathbb{Q}}R)^{\times} \text{ such that }gg^{\sharp_{\beta}}=\lambda\}\]
with \(B^{op}=B\otimes_{F,c}F\). If \(x\) is a place of \(\mathbb{Q}\) split in \(E\), say \(x=yy^{c}\), then
\[G(\mathbb{Q}_{x})\cong(B^{op}_{y})^{\times}\times\mathbb{Q}_{x}^{\times}\cong \mathbb{Q}_{x}^{\times}\times\prod_{v_{i}^{+}}(B^{op}_{v_{i}^{+}})^{\times}, \tag{2}\]
where \(x=\prod_{i}v_{i}^{+}\) in \(F^{+}\) and we identify places of \(F^{+}\) over \(x\) with places of \(F\) over \(y\).
**Convention 2.3.1**.: For \(x=yy^{c}\) a place of \(\mathbb{Q}\) split in \(E\) and \(v\) a place of \(F\) over \(y\), we shall make throughout the text the following abuse of notation: we denote by \(G(F_{v})\) the factor \((B^{op}_{v_{|_{F^{+}}}})^{\times}\) in the formula (2), so that
\[G(\mathbb{A}_{\mathbb{Q}}^{\infty,v}):=G(\mathbb{A}_{\mathbb{Q}}^{\infty,p}) \times\Big{(}\mathbb{Q}_{p}^{\times}\times\prod_{v_{i}^{+}\neq v|F^{+}}(B^{op }_{v_{i}^{+}})^{\times}\Big{)}.\]
In [11], the authors justify the existence of a group \(G\) as above such that moreover
* if \(x\) is a place of \(\mathbb{Q}\) non split in \(M\) then \(G(\mathbb{Q}_{x})\) is quasi split;
* the signatures of \(G(\mathbb{R})\) are \((1,d-1)\) for the embedding \(\tau\) and \((0,d)\) for the others.
As in [11, page 90], a compact open subgroup \(K\) of \(G(\mathbb{A}_{\mathbb{Q}}^{\infty})\) is said to be _sufficiently small_ if there exists a place \(x\) of \(\mathbb{Q}\) such that the projection from \(K^{x}\) to \(G(\mathbb{Q}_{x})\) does not contain any element of finite order other than the identity.
**Notation 2.3.2**.: Denote by \(\mathcal{K}\) the set of sufficiently small compact open subgroups of \(G(\mathbb{A}^{\infty})\). For \(K\in\mathcal{K}\), write \(\operatorname{Sh}_{K,\eta}\longrightarrow\operatorname{Spec}F\) for the associated Shimura variety of Kottwitz-Harris-Taylor type.
**Definition 2.3.3**.: Denote by \(\operatorname{Spl}\) the set of places \(w\) of \(F\) such that \(p_{w}:=w_{|\mathbb{Q}}\neq l\) is split in \(E\) and \(B^{\times}_{w}\cong\operatorname{GL}_{d}(F_{w})\). For each \(K\in\mathcal{K}\), we write \(\operatorname{Spl}(K)\) for the subset of places \(w\in\operatorname{Spl}\) such that \(K_{w}\) is the standard maximal compact subgroup of \(\operatorname{GL}_{d}(F_{w})\).
In the sequel, we fix a place \(v\) of \(F\) in \(\operatorname{Spl}\). The scheme \(\operatorname{Sh}_{K,\eta}\) has a projective model \(\operatorname{Sh}_{K,v}\) over \(\operatorname{Spec}\mathcal{O}_{v}\) with special geometric fiber \(\operatorname{Sh}_{K,\bar{s}_{v}}\). We have a projective system \((\operatorname{Sh}_{K,\bar{s}_{v}})_{K\in\mathcal{K}}\) which is naturally equipped with an action of \(G(\mathbb{A}_{\mathbb{Q}}^{\infty})\times\mathbb{Z}\) such that any \(w_{v}\in W_{F_{v}}\) acts by \(-\deg(w_{v})\in\mathbb{Z}\), where \(\deg=\operatorname{val}\circ\operatorname{Art}_{F_{v}}^{-1}\) and \(\operatorname{Art}_{F_{v}}:F_{v}^{\times}\xrightarrow{\sim}W_{F_{v}}^{ab}\).
**Notation 2.3.4**.: For \(K\in\mathcal{K}\), the Newton stratification of the geometric special fiber \(\operatorname{Sh}_{K,\bar{s}_{v}}\) is denoted by
\[\operatorname{Sh}_{K,\bar{s}_{v}}=:\operatorname{Sh}^{\geq 1}_{K,\bar{s}_{v}}\supset\operatorname{Sh}^{\geq 2}_{K,\bar{s}_{v}}\supset\cdots\supset\operatorname{Sh}^{\geq d}_{K,\bar{s}_{v}}\]
where \(\mathrm{Sh}^{=h}_{K,\bar{s}_{v}}:=\mathrm{Sh}^{\geq h}_{K,\bar{s}_{v}}-\mathrm{Sh}^{\geq h+1}_{K,\bar{s}_{v}}\) is an affine scheme, smooth and pure of dimension \(d-h\). It is built up by the geometric points such that the connected part of the associated Barsotti-Tate group has rank \(h\). For each \(1\leq h<d\), write
\[i_{h}:\mathrm{Sh}^{\geq h}_{K,\bar{s}_{v}}\hookrightarrow\mathrm{Sh}^{\geq 1 }_{K,\bar{s}_{v}},\quad j^{\geq h}:\mathrm{Sh}^{=h}_{K,\bar{s}_{v}} \hookrightarrow\mathrm{Sh}^{\geq h}_{K,\bar{s}_{v}},\]
and \(j^{=h}=i_{h}\circ j^{\geq h}\).
For \(n\geq 1\), with our previous abuse of notation, consider \(K^{v}(n):=K^{v}K_{v}(n)\) where
\[K_{v}(n):=\ker(\mathrm{GL}_{d}(\mathcal{O}_{v})\twoheadrightarrow\mathrm{GL}_ {d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})).\]
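_Remark_.: Concretely (a standard observation), \(K_{v}(n)\) is the principal congruence subgroup
\[K_{v}(n)=\{g\in\mathrm{GL}_{d}(\mathcal{O}_{v}):\ g\equiv\mathrm{Id}_{d}\mod\mathcal{M}_{v}^{n}\},\]
a normal open subgroup of \(\mathrm{GL}_{d}(\mathcal{O}_{v})\) which is pro-\(p\) for \(n\geq 1\), where \(p\) is the residue characteristic of \(F_{v}\).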
Recall that \(\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v}}\) is geometrically induced under the action of the parabolic subgroup \(P_{h,d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})\), defined as the stabilizer of the first \(h\) vectors of the canonical basis of \(F_{v}^{d}\). Concretely this means that there exists a closed subscheme \(\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v},1}\), stabilized by the Hecke action of \(P_{h,d}(F_{v})\), such that
\[\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v}}=\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v },1}\times_{P_{h,d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})}\mathrm{GL}_{d}( \mathcal{O}_{v}/\mathcal{M}_{v}^{n}),\]
meaning that \(\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v}}\) is the disjoint union of copies of \(\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v},1}\) indexed by \(\mathrm{GL}_{d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})/P_{h,d}(\mathcal{O}_{v}/ \mathcal{M}_{v}^{n})\) and exchanged by the action of \(\mathrm{GL}_{d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})\). We will denote by \(\mathrm{Sh}^{\geq h}_{K^{v}(n),\bar{s}_{v},1}\) the closure of \(\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v},1}\) inside \(\mathrm{Sh}_{K^{v}(n),\bar{s}_{v}}\).
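_Remark_.: For \(h=d\) the parabolic \(P_{d,d}\) is all of \(\operatorname{GL}_{d}\), so the above induction is trivial and \(\mathrm{Sh}^{=d}_{K^{v}(n),\bar{s}_{v},1}=\mathrm{Sh}^{=d}_{K^{v}(n),\bar{s}_{v}}\): the supersingular stratum, which reappears in §3 below, is not further induced.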
**Notation 2.3.5**.: Let \(1\leq h\leq d\) and \(\Pi_{h}\) any representation of \(\mathrm{GL}_{h}(F_{v})\). For \(\chi_{v}\) a character of \(F_{v}^{\times}\), we then denote by
\[\widetilde{HT}_{1}(\chi_{v},\Pi_{h}):=\mathcal{L}(\chi_{v},t)_{1}\otimes\Pi_{ h}^{K_{v}(n)}\otimes\Xi^{\frac{h-d}{2}}\]
the Harris-Taylor local system on the Newton stratum \(\mathrm{Sh}^{=h}_{K^{v}(n),\bar{s}_{v},1}\) where
* \(\mathcal{L}(\chi_{v},t)_{1}\) is the constant sheaf \(\overline{\mathbb{Z}}_{l}\) on which the fundamental group acts through a composite \(\pi_{1}\twoheadrightarrow\mathcal{D}^{\times}_{v,h}\stackrel{{\mathrm{rn}}}{{\longrightarrow}}\mathcal{O}^{\times}_{v}\stackrel{{\chi_{v}}}{{\longrightarrow}}\overline{\mathbb{Z}}_{l}^{\times}\), where \(\mathcal{D}_{v,h}\) is the maximal order of the division algebra \(D_{v,h}/F_{v}\) with invariant \(1/h\), and the first surjection is given by the Igusa varieties of [11];
* \(\Xi:\frac{1}{2}\mathbb{Z}\longrightarrow\overline{\mathbb{Z}}^{\times}_{l}\) is defined by \(\Xi(\frac{1}{2})=q^{1/2}\).
We also introduce the induced version
\[\widetilde{HT}(\chi_{v},\Pi_{h}):=\left(\mathcal{L}(\chi_{v},t)_{1}\otimes\Pi_ {h}^{K_{v}(n)}\otimes\Xi^{\frac{h-d}{2}}\right)\times_{P_{h,d}(\mathcal{O}_{v }/\mathcal{M}_{v}^{n})}\mathrm{GL}_{d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n}),\]
where the unipotent radical of \(P_{h,d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})\) acts trivially and the action of
\[(g^{\infty,v},\left(\begin{array}{cc}g^{c}_{v}&*\\ 0&g^{et}_{v}\end{array}\right),\sigma_{v})\in G(\mathbb{A}^{\infty,v})\times P _{h,d}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})\times W_{v}\]
is given as follows:
* by the action of \(g_{v}^{c}\) on \(\Pi_{h}^{K_{v}(n)}\) and \(\deg(\sigma_{v})\in\mathbb{Z}\) on \(\Xi^{\frac{h-d}{2}}\), and
* the action of \((g^{\infty,v},g_{v}^{et},\operatorname{val}(\det g_{v}^{c})-\deg\sigma_{v})\in G(\mathbb{A}^{\infty,v})\times\operatorname{GL}_{d-h}(\mathcal{O}_{v}/\mathcal{M}_{v}^{n})\times\mathbb{Z}\) on \(\mathcal{L}(\chi_{v},t)_{1}\otimes\Xi^{\frac{h-d}{2}}\).
We also introduce
\[HT(\chi_{v},\Pi_{h})_{1}:=\widetilde{HT}(\chi_{v},\Pi_{h})_{1}[d-h],\]
and the perverse sheaf
\[P(h,\chi_{v})_{1}:={}^{p}j^{=h}_{1,\ast}HT(\chi_{v},\operatorname{St}_{h}( \chi_{v}))_{1}\otimes\chi_{v}^{-1},\]
and their induced versions \(HT(\chi_{v},\Pi_{h})\) and \(P(h,\chi_{v})\).
Note that over \(\overline{\mathbb{Z}}_{l}\) there are at least two notions of intermediate extension, associated to the two classical \(t\)-structures \(p\) and \(p+\). However, in our situation they all coincide. Indeed, as \(\operatorname{Sh}^{\geq h}_{K^{v}(n),\bar{s}_{v},1}\) is smooth over \(\operatorname{Spec}\overline{\mathbb{F}}_{p}\), \(HT(\chi_{v},\Pi_{h})_{1}\) is perverse for the two \(t\)-structures with
\[i^{h+1,\ast}_{1}HT(\chi_{v},\Pi_{h})_{1}\in{}^{p}\mathcal{D}^{<0}\text{ and }i^{h+1,!}_{1}HT(\chi_{v},\Pi_{h})_{1}\in{}^{p+}\mathcal{D}^{\geq 1}.\]
Denote now by
\[\Psi_{K,v}:=R\Psi_{\eta_{v}}(\overline{\mathbb{Z}}_{l}[d-1])(\frac{d-1}{2})\]
the nearby cycles autodual free perverse sheaf on \(\operatorname{Sh}_{K,\bar{s}_{v}}\). Recall, cf. [1, Proposition 3.1.3], that
\[\Psi_{K,v}\cong\bigoplus_{1\leq g\leq d}\bigoplus_{\varrho\in\operatorname{ Scusp}(g)}\Psi_{K,\varrho}, \tag{3}\]
where
* \(\operatorname{Scusp}(g)\) is the set of equivalence classes of irreducible supercuspidal representations of \(\operatorname{GL}_{g}(F_{v})\).
* The irreducible sub-quotients of \(\Psi_{K,\varrho}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\) are the Harris-Taylor perverse sheaves of \(\Psi_{K,\overline{\mathbb{Q}}_{l}}\) associated to irreducible cuspidal representations \(\pi_{v}\) with modulo \(l\) reduction having supercuspidal support a Zelevinsky segment associated to \(\varrho\).
_Remark_.: In the limit case when \(q_{v}\equiv 1\mod l\) and \(l>d\), recall that we do not have to bother about cuspidal \(\overline{\mathbb{F}}_{l}\)-representations which are not supercuspidal. In particular, in the previous formula we can
* replace \(\operatorname{Scusp}(g)\) by the set \(\operatorname{Cusp}(g)\) of equivalence classes of cuspidal representations,
* and the Harris-Taylor perverse sheaves of \(\Psi_{K,\varrho}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\) are those associated to \(\pi_{v}\) whose modulo \(l\) reduction is isomorphic to \(\varrho\).
Moreover, regarding the main statement about Ihara's lemma, we will only be concerned with \(\Psi_{K,\varrho}\) for \(\varrho\) a character.
### Filtrations of stratification
Using the Newton stratification and following the constructions of [1], we can define a \(\overline{\mathbb{Z}}_{l}\)-filtration
\[\operatorname{Fil}^{0}_{!}(\Psi_{K,\varrho})\hookrightarrow\cdots\hookrightarrow \operatorname{Fil}^{d}_{!}(\Psi_{K,\varrho})=\Psi_{K,\varrho}\]
where each graded part \(\operatorname{gr}^{k}_{!}(\Psi_{K,\varrho})\) admits a filtration, cf. [1] corollary 3.4.5
\[\operatorname{Fil}^{-d}(\operatorname{gr}^{k}_{!}(\Psi_{K,\varrho})) \hookrightarrow\cdots\hookrightarrow\operatorname{Fil}^{-k}(\operatorname{gr}^ {k}_{!}(\Psi_{K,\varrho}))=\operatorname{gr}^{k}_{!}(\Psi_{K,\varrho})\]
with
\[\operatorname{gr}^{-i}(\operatorname{gr}^{k}_{!}(\Psi_{K,\varrho}))\otimes_{ \overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\cong\bigoplus_{\pi_{v} \in\operatorname{Cusp}(\varrho)}P(i,\pi_{v})(\frac{i+1-2k}{2}),\]
where \(\operatorname{Cusp}(\varrho)\) is the set of equivalence classes of irreducible cuspidal representations with modulo \(l\) reduction isomorphic to \(\varrho\). We then have spectral sequences
\[E^{p,q}_{1}=H^{p+q}(\operatorname{Sh}_{K,\bar{s}_{v}},\operatorname{gr}^{-p}(\operatorname{gr}^{k}_{!}(\Psi_{K,\varrho})))\Rightarrow H^{p+q}(\operatorname{Sh}_{K,\bar{s}_{v}},\operatorname{gr}^{k}_{!}(\Psi_{K,\varrho})), \tag{4}\]
and
\[E^{p,q}_{1}=H^{p+q}(\operatorname{Sh}_{K,\bar{s}_{v}},\operatorname{gr}^{-p}_ {!}(\Psi_{K,\varrho}))\Rightarrow H^{p+q}(\operatorname{Sh}_{K,\bar{s}_{v}}, \Psi_{K,\varrho}). \tag{5}\]
**Definition 2.4.1**.: For a finite set \(S\) of places of \(\mathbb{Q}\) containing the places where \(G\) is ramified, denote by \(\mathbb{T}^{S}_{abs}:=\prod_{x\not\in S}^{\prime}\mathbb{T}_{x,abs}\) the abstract unramified Hecke algebra where \(\mathbb{T}_{x,abs}\cong\overline{\mathbb{Z}}_{l}[X^{un}(T_{x})]^{W_{x}}\) for \(T_{x}\) a split torus, \(W_{x}\) the spherical Weyl group and \(X^{un}(T_{x})\) the set of \(\overline{\mathbb{Z}}_{l}\)-unramified characters of \(T_{x}\).
_Example_.: For \(v\in\operatorname{Spl}\), we have
\[\mathbb{T}_{v_{|\mathbb{Q}},abs}=\overline{\mathbb{Z}}_{l}[T_{v^{\prime},i}:\ i=1,\cdots,d,\ v^{\prime}|(v_{|\mathbb{Q}})]\]
where \(T_{v^{\prime},i}\) is the characteristic function of
\[\operatorname{GL}_{d}(\mathcal{O}_{v^{\prime}})\operatorname{diag}( \overbrace{\varpi_{v^{\prime}},\cdots,\varpi_{v^{\prime}}}^{i},\overbrace{1, \cdots,1}^{d-i})\operatorname{GL}_{d}(\mathcal{O}_{v^{\prime}})\subseteq \operatorname{GL}_{d}(F_{v^{\prime}}).\]
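_Remark_.: With the usual normalization (a standard Satake computation, recalled here for convenience), if \(\pi_{v^{\prime}}\) is an unramified irreducible representation with Satake parameters \(\alpha_{1},\cdots,\alpha_{d}\), then \(T_{v^{\prime},i}\) acts on the spherical line \(\pi_{v^{\prime}}^{\operatorname{GL}_{d}(\mathcal{O}_{v^{\prime}})}\) by the scalar
\[q_{v^{\prime}}^{i(d-i)/2}\,e_{i}(\alpha_{1},\cdots,\alpha_{d}),\]
where \(e_{i}\) denotes the \(i\)-th elementary symmetric polynomial; this is the sense in which a maximal ideal \(\mathfrak{m}\) prescribes Satake parameters modulo \(l\).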
Recall that \(\mathbb{T}^{S}_{abs}\) acts through correspondences on each of the \(H^{i}(\operatorname{Sh}_{K,\bar{\eta}},E)\) where \(K\in\mathcal{K}\) is maximal at each place outside \(S\).
**Notation 2.4.2**.: We denote by \(\mathbb{T}(K)\) the image of \(\mathbb{T}^{S}_{abs}\) inside \(\operatorname{End}_{\overline{\mathbb{Z}}_{l}}(H^{d-1}(\operatorname{Sh}_{K, \bar{\eta}},\overline{\mathbb{Z}}_{l}))\).
We also denote
\[H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}},\overline{\mathbb{Z}}_{l}):=\varinjlim_{K_{v}}H^{d-1}(\operatorname{Sh}_{K^{v}K_{v},\bar{\eta}},\overline{\mathbb{Z}}_{l}),\]
where \(K_{v}\) runs through the set of open compact subgroups of \(\operatorname{GL}_{d}(\mathcal{O}_{v})\). We use similar notation for other cohomology groups.
**Theorem 2.4.3**.: _Let \(\mathfrak{m}\) be a maximal ideal of \(\mathbb{T}(K^{v}(\infty))\) such that \(\overline{\rho}_{\mathfrak{m}}\) is irreducible and the irreducible constituents of its restriction to the decomposition group at the place \(v\) are characters. Recall also that \(q_{v}\equiv 1\mod l\) and \(l>d\). Then_
1. \(H^{i}(\mathrm{Sh}_{K^{v}(\infty),\bar{\eta}},\overline{\mathbb{Z}}_{l})_{ \mathfrak{m}}\) _is zero if_ \(i\neq d-1\) _and otherwise torsion free._
2. _Moreover the spectral sequences (_4_) and (_5_), localized at_ \(\mathfrak{m}\)_, degenerate at_ \(E_{1}\) _and the_ \(E_{1,\mathfrak{m}}^{p,q}\) _are zero for_ \(p+q\neq 0\) _and otherwise torsion free._
Proof.: (i) This is the main theorem of [1].
(ii) From (3) we are led to study the initial terms of the spectral sequence given by the filtration of \(\Psi_{K^{v}(\infty),\varrho}\) for \(\varrho\) a character which is a constituent of \(\overline{\rho}_{\mathfrak{m},v}\). Recall also, as we are in the limit case, that
* as there do not exist irreducible \(\overline{\mathbb{Q}}_{l}\)-cuspidal representations of \(\mathrm{GL}_{g}(F_{v})\) for \(g\leq d\) whose modulo \(l\) reduction is not supercuspidal, the irreducible constituents of \(\Psi_{K,\varrho}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\) are the Harris-Taylor perverse sheaves \(P(h,\chi_{v})(\frac{h-1-2k}{2})\) where the modulo \(l\) reduction of \(\chi_{v}\) is isomorphic to \(\varrho\) and \(0\leq k<h\).
* Over \(\overline{\mathbb{Z}}_{l}\), we do not have to worry about the difference between \(p\) and \(p+\) intermediate extensions.
From [1] §2.3, consider the following equivariant resolution
\[0\to j_{!}^{=d}HT(\chi_{v},\Pi_{h}\{\frac{h-d}{2}\}\times\mathrm{ Speh}_{d-h}(\chi_{v}\{h/2\}))\otimes\Xi^{\frac{d-h}{2}}\longrightarrow\cdots\\ \longrightarrow j_{!}^{=h+1}HT(\chi_{v},\Pi_{h}\{-1/2\}\times \chi_{v}\{h/2\})\otimes\Xi^{\frac{1}{2}}\longrightarrow\\ j_{!}^{=h}HT(\chi_{v},\Pi_{h})\longrightarrow{}^{p}j_{!*}^{=h}HT( \chi_{v},\Pi_{h})\to 0, \tag{6}\]
where \(\Pi_{h}\) is any representation of \(\mathrm{GL}_{h}(F_{v})\), also called the infinitesimal part of the perverse sheaf \({}^{p}j_{!*}^{=h}HT(\chi_{v},\Pi_{h})\) (in \(P(h,\chi_{v})\) the infinitesimal part \(\Pi_{h}\) is \(\mathrm{st}_{h}(\chi_{v})\)).
By the adjunction property, for \(1\leq\delta\leq d-h\), the map
\[j_{!}^{=h+\delta}HT(\chi_{v},\Pi_{h}\{\frac{-\delta}{2}\}\times \mathrm{Speh}_{\delta}(\chi_{v}\{h/2\}))\otimes\Xi^{\delta/2}\\ \longrightarrow j_{!}^{=h+\delta-1}HT(\chi_{v},\Pi_{h}\{\frac{1- \delta}{2}\}\times\mathrm{Speh}_{\delta-1}(\chi_{v}\{h/2\}))\otimes\Xi^{ \frac{\delta-1}{2}} \tag{7}\]
is given by
\[HT(\chi_{v},\Pi_{h}\{\frac{-\delta}{2}\}\times\mathrm{Speh}_{\delta}(\chi_{v}\{h/2\}))\otimes\Xi^{\delta/2}\longrightarrow\\ j^{=h+\delta,*}({}^{p}i^{h+\delta,!}(j_{!}^{=h+\delta-1}HT(\chi_{v},\Pi_{h}\{\frac{1-\delta}{2}\}\times\mathrm{Speh}_{\delta-1}(\chi_{v}\{h/2\}))\otimes\Xi^{\frac{\delta-1}{2}})) \tag{8}\]
To compute this last term we use the resolution (6) for \(h+\delta-1\). Precisely, denote by \(\mathcal{H}:=HT(\chi_{v},\mathrm{st}_{h}(\chi_{v}\{\frac{1-\delta}{2}\})\times\mathrm{Speh}_{\delta-1}(\chi_{v}\{h/2\}))\otimes\Xi^{\frac{\delta-1}{2}}\), and write the previous resolution for \(h+\delta-1\) as follows
\[0\to K\longrightarrow j_{!}^{=h+\delta}\mathcal{H}^{\prime}\longrightarrow Q\to 0,\]
\[0\to Q\longrightarrow j_{!}^{=h+\delta-1}\mathcal{H}\longrightarrow{}^{p}j_{! *}^{=h+\delta-1}\mathcal{H}\to 0,\]
with
\[\mathcal{H}^{\prime}:=HT\Big{(}\chi_{v},\Pi_{h}\{\frac{1-\delta}{2}\}\times( \operatorname{Speh}_{\delta-1}(\chi_{v}\{-1/2\})\times\chi_{v}\{\frac{\delta- 1}{2}\})\{h/2\}\Big{)}\otimes\Xi^{\delta/2}.\]
As the support of \(K\) is contained in \(\operatorname{Sh}^{\geq h+\delta+1}_{K^{v}(n),\bar{s}_{v}}\), we have \({}^{p}i^{h+\delta,!}K=K\) and \(j^{=h+\delta,*}({}^{p}i^{h+\delta,!}K)\) is zero. Moreover \({}^{p}i^{h+\delta,!}({}^{p}j_{!*}^{=h+\delta-1}\mathcal{H})\) is zero by construction of the intermediate extension. We then deduce that
\[j^{=h+\delta,*}({}^{p}i^{h+\delta,!}(j_{!}^{=h+\delta-1}HT(\chi_ {v},\Pi_{h}\{\frac{1-\delta}{2}\}\times\operatorname{Speh}_{\delta-1}(\chi_{ v}\{h/2\}))\otimes\Xi^{\frac{\delta-1}{2}}))\\ \cong HT\Big{(}\chi_{v},\Pi_{h}\{\frac{1-\delta}{2}\}\\ \times(\operatorname{Speh}_{\delta-1}(\chi_{v}\{-1/2\})\times \chi_{v}\{\frac{\delta-1}{2}\})\{h/2\}\Big{)}\otimes\Xi^{\delta/2} \tag{9}\]
In particular, up to homothety, the map (8), and so (7), is unique. Finally, as the maps of (6) are strict, the maps (7) are uniquely determined; that is, if we forget the infinitesimal parts, these maps are independent of the chosen \(h\) in (6), i.e. they depend only on \(h+\delta\).
For every \(1\leq h\leq d\), denote by \(i(h)\) the smallest index \(i\) such that \(H^{i}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},{}^{p}j_{!*}^{=h}HT(\chi_{v},\Pi_{h}))_{\mathfrak{m}}\) has non trivial torsion; if no such index exists we set \(i(h)=+\infty\), and note that \(i(h)\) does not depend on the choice of the infinitesimal part \(\Pi_{h}\). By duality, as \({}^{p}j_{!*}={}^{p+}j_{!*}\) for Harris-Taylor local systems associated to characters, note that when \(i(h)\) is finite then \(i(h)\leq 0\). Suppose, for contradiction, that there exists \(h\) with \(i(h)\) finite, and denote by \(h_{0}\) the biggest such \(h\).
**Lemma 2.4.4**.: _For \(1\leq h\leq h_{0}\), we have \(i(h)=h-h_{0}\)._
Note that a similar result is proved in [1] when the level is maximal at \(v\).
Proof.: a) We first prove that for every \(h_{0}<h\leq d\), the cohomology groups of \(j_{!}^{=h}HT(\chi_{v},\Pi_{h})\) are torsion free. Consider the following strict filtration in the category of free perverse sheaves
\[(0)=\operatorname{Fil}^{-1-d}(\chi_{v},h)\lhook\joinrel\to \operatorname{Fil}^{-d}(\chi_{v},h)\lhook\joinrel\to\cdots\\ \lhook\joinrel\to\operatorname{Fil}^{-h}(\chi_{v},h)=j_{!}^{=h}HT( \chi_{v},\Pi_{h}) \tag{10}\]
where the symbol \(\lhook\joinrel\to\) means a strict monomorphism, with graded parts
\[\operatorname{gr}^{-k}(\chi_{v},h)\cong{}^{p}j_{!*}^{=k}HT(\chi_{v},\Pi_{h} \{\frac{h-k}{2}\}\otimes\operatorname{st}_{k-h}(\chi_{v}\{h/2\}))(\frac{h-k}{ 2}).\]
Over \(\overline{\mathbb{Q}}_{l}\), the result is proved in [1] §4.3. Over \(\overline{\mathbb{Z}}_{l}\), the result follows from the general constructions of [1] and the fact that the \(p\) and \(p+\) intermediate extensions are isomorphic for Harris-Taylor perverse sheaves associated to characters. The associated spectral sequence localized at \(\mathfrak{m}\) is then concentrated in middle degree and torsion free, which gives the claim.
b) Before treating the cases \(h\leq h_{0}\), note that the spectral sequence associated to (6) for \(h=h_{0}+1\) has all its \(E_{1}\) terms torsion free and degenerates at \(E_{2}\). As by hypothesis the abutment of this spectral sequence is free and equal to a single \(E_{2}\) term, we deduce that all the maps
\[H^{0}\big{(}\mathrm{Sh}_{K^{v}(\infty),\bar{s}_{v}},j_{!}^{=h+ \delta}HT_{\xi}(\chi_{v},\mathrm{st}_{h}(\chi_{v}\{\frac{-\delta}{2}\})\times \mathrm{Speh}_{\delta}(\chi_{v}\{h/2\}))\otimes\Xi^{\delta/2}\big{)}_{\mathfrak{ m}}\\ \longrightarrow\\ H^{0}\big{(}\mathrm{Sh}_{K^{v}(\infty),\bar{s}_{v}},j_{!}^{=h+ \delta-1}HT_{\xi}(\chi_{v},\mathrm{st}_{h}(\chi_{v}\{\frac{1-\delta}{2}\})\\ \times\mathrm{Speh}_{\delta-1}(\chi_{v}\{h/2\}))\otimes\Xi^{ \frac{\delta-1}{2}}\big{)}_{\mathfrak{m}} \tag{11}\]
are saturated, i.e. their cokernels are free \(\overline{\mathbb{Z}}_{l}\)-modules. Then, from the fact stressed after (9), this property remains true when we consider the associated spectral sequence for \(1\leq h^{\prime}\leq h_{0}\).
c) Consider now \(h=h_{0}\) and the spectral sequence associated to (6) where
\[E_{2}^{p,q}=H^{p+2q}(\mathrm{Sh}_{K^{v}(\infty),\bar{s}_{v}},j_{! }^{=h+q}\\ HT_{\xi}(\chi_{v},\mathrm{st}_{h}(\chi_{v}(-q/2))\times\mathrm{ Speh}_{q}(\chi_{v}\{h/2\}))\otimes\Xi^{\frac{q}{2}})_{\mathfrak{m}} \tag{12}\]
By definition of \(h_{0}\), some of the \(E_{\infty}^{p,-p}\) must have a non trivial torsion subspace. We saw that
* the contributions from the deeper strata are torsion free and
* \(H^{i}(\mathrm{Sh}_{K^{v}(\infty),\bar{s}_{v}},j_{!}^{=h_{0}}HT_{\xi}(\chi_{v},\Pi_{h_{0}}))_{\mathfrak{m}}\) is zero for \(i<0\) and torsion free for \(i=0\), whatever \(\Pi_{h_{0}}\) is.
* There should then exist a non strict map \(d_{1}^{p,q}\); but we have just seen that it cannot be a map between deeper strata.
* Finally, using the previous points, the only possibility is that the cokernel of \[H^{0}\big{(}\mathrm{Sh}_{K^{v}(\infty),\bar{s}_{v}},j_{!}^{=h_{0} +1}HT_{\xi}(\chi_{v},\mathrm{st}_{h_{0}}(\chi_{v}\{\frac{-1}{2}\})\times\chi_ {v}\{h_{0}/2\}))\otimes\Xi^{1/2}\big{)}_{\mathfrak{m}}\\ \longrightarrow\\ H^{0}\big{(}\mathrm{Sh}_{K^{v}(\infty),\bar{s}_{v}},j_{!}^{=h_{0} }HT_{\xi}(\chi_{v},\mathrm{st}_{h_{0}}(\chi_{v}))\big{)}_{\mathfrak{m}}\] (13) has a non trivial torsion subspace.
In particular we have \(i(h_{0})=0\).
d) Finally, using the fact 2.18 and the previous points, for any \(1\leq h\leq h_{0}\), in the spectral sequence (12):
* by point a), \(E_{2}^{p,q}\) is torsion free for \(q\geq h_{0}-h+1\) and so it is zero if \(p+2q\neq 0\);
* by affineness of the open strata, cf. [1] theorem 1.8, \(E_{2}^{p,q}\) is zero for \(p+2q<0\) and torsion free for \(p+2q=0\);
* by point b), the maps \(d_{2}^{p,q}\) are saturated for \(q\geq h_{0}-h+2\);
* by point c), \(d_{2}^{-2(h_{0}-h+1),h_{0}-h+1}\) has a cokernel with a non trivial torsion subspace.
* Moreover, over \(\overline{\mathbb{Q}}_{l}\), the spectral sequence degenerates at \(E_{3}\) and \(E_{3}^{p,q}=0\) if \((p,q)\neq(0,0)\).
We then deduce that \(H^{i}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},{}^{p}j_{!*}^{=h}HT_{\xi}(\chi_{v},\Pi_{h}))_{\mathfrak{m}}\) is zero for \(i<h-h_{0}\), and for \(i=h-h_{0}\) it has a non trivial torsion subspace.
Consider now the filtration of stratification of \(\Psi_{\varrho}\) constructed using the adjunction morphisms \(j_{!}^{=h}j^{=h,*}\) as in [1]
\[\operatorname{Fil}_{!}^{1}(\Psi_{\varrho})\lhook\joinrel\to\operatorname{Fil}_{!}^{2}(\Psi_{\varrho})\lhook\joinrel\to\cdots\lhook\joinrel\to\operatorname{Fil}_{!}^{d}(\Psi_{\varrho}) \tag{14}\]
where \(\operatorname{Fil}_{!}^{h}(\Psi_{\varrho})\) is the saturated image of \(j_{!}^{=h}j^{=h,*}\Psi_{\varrho}\longrightarrow\Psi_{\varrho}\). For our fixed \(\chi_{v}\), denote by \(\operatorname{Fil}_{!,\chi_{v}}^{1}(\Psi)\hookrightarrow\operatorname{Fil}_{!}^{1}(\Psi_{\varrho})\) the subsheaf such that \(\operatorname{Fil}_{!,\chi_{v}}^{1}(\Psi)\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\cong\operatorname{Fil}_{!}^{1}(\Psi_{\chi_{v}})\), where \(\Psi_{\chi_{v}}\) is the direct factor of \(\Psi\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\) associated to \(\chi_{v}\), cf. [1]. From [1] 3.3.5, we have the following resolution of \(\operatorname{gr}_{!,\chi_{v}}^{h}(\Psi)\):
\[0\to j_{!}^{=d}HT(\chi_{v},LT_{h,d}(\chi_{v}))\otimes\chi_{v}^{*}(\frac{d-h} {2})\longrightarrow\\ j_{!}^{=d-1}HT(\chi_{v},LT_{h,d-1}(\chi_{v}))\otimes\chi_{v}^{*}( \frac{d-h-1}{2})\longrightarrow\\ \dots\longrightarrow j_{!}^{=h}HT(\chi_{v},\operatorname{st}_{h} (\chi_{v}))\otimes\chi_{v}^{*}\longrightarrow\operatorname{gr}_{!,\chi_{v}}^{ h}(\Psi)\to 0, \tag{15}\]
where \(LT_{h,h+\delta}(\chi_{v})\hookrightarrow\operatorname{st}_{h}(\chi_{v}\{-\delta/2\})\times\operatorname{Speh}_{\delta}(\chi_{v}\{h/2\})\) is the unique irreducible subspace of this induced representation.
We can then apply the previous arguments a)-d): for \(h\leq h_{0}\) (resp. \(h>h_{0}\)) the torsion of \(H^{i}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},\operatorname{gr}_{!,\chi_{v}}^{h}(\Psi_{v,\xi}))_{\mathfrak{m}}\) is trivial for any \(i\leq h-h_{0}\) (resp. for all \(i\)), and the free parts are concentrated in \(i=0\). Using the spectral sequence associated to the previous filtration, we conclude that \(H^{1-h_{0}}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},\Psi_{v,\xi})_{\mathfrak{m}}\) would have non trivial torsion, which is false as \(\mathfrak{m}\) is supposed to be KHT-free.
In particular the previous spectral sequence gives us a filtration of \(H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}_{v}},\overline{\mathbb{F}} _{l})_{\mathfrak{m}}\) whose graded parts are
\[H^{0}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},\operatorname{gr}^{-p}( \operatorname{gr}_{!}^{k}(\Psi_{K,\varrho})))_{\mathfrak{m}}\otimes_{\overline {\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l},\]
for \(\varrho\) ranging over the equivalence classes of irreducible \(\overline{\mathbb{F}}_{l}\)-supercuspidal representations of \(\operatorname{GL}_{g}(F_{v})\) with \(1\leq g\leq d\), and \(1\leq k\leq p\leq\lfloor\frac{d}{g}\rfloor\).
## 3 Genericity for KHT-Shimura varieties
As explained in the introduction, we follow the strategy of [1], which consists in transferring the genericity property of Ihara's lemma concerning \(\overline{G}\) to the genericity of the cohomology of KHT-Shimura varieties.
Let \(\overline{G}\) be a similitude group as in the introduction such that moreover there exists a prime number \(p_{0}\) split in \(E\) and a place \(v_{0}^{+}\) of \(F^{+}\) above \(p_{0}\), identified as before with a place \(v_{0}\) of \(F\), such that \(\overline{B}_{v_{0}}\) is a division algebra: in particular \(v_{0}\neq v\). Consider then, with the usual abuse of notation, \(G/\mathbb{Q}\) such that \(G(\mathbb{A}_{\mathbb{Q}}^{\infty,v_{0}})\cong\overline{G}(\mathbb{A}_{\mathbb{Q}}^{\infty,v_{0}})\), with \(G(F_{v_{0}})\cong\operatorname{GL}_{d}(F_{v_{0}})\) and \(G(\mathbb{R})\) of signatures \((1,d-1),(0,d)^{r}\). The KHT Shimura variety \(\operatorname{Sh}_{K,v_{0}}\to\operatorname{Spec}\mathcal{O}_{v_{0}}\) associated to \(G\) with level \(K\) has a Newton stratification of its special fiber with
\[\operatorname{Sh}_{K,\bar{s}_{v_{0}}}^{=d}=\coprod_{i\in\ker^{1}(\mathbb{Q},G )}\operatorname{Sh}_{K,\bar{s}_{v_{0}},i}^{=d}\,.\]
For an equivariant sheaf \(\mathcal{F}_{K,i}\) on \(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v_{0}},i}^{=d}\), seen as a compatible system of sheaves on the \(\operatorname{Sh}_{K^{v}K_{v},\bar{s}_{v_{0}},i}^{=d}\) for \(K_{v}\) describing the set of open compact subgroups of \(\operatorname{GL}_{d}(\mathcal{O}_{v})\), its fiber at a compatible system \(z_{K^{v}(\infty),i}\) of supersingular points \(z_{K^{v}K_{v},i}\) has an action of \(\overline{G}(\mathbb{A}_{\mathbb{Q}}^{\infty})\times\operatorname{GL}_{d}(F_{v})^{0}\), where \(\operatorname{GL}_{d}(F_{v})^{0}\) is the kernel of the valuation of the determinant, so that, cf. [1] proposition 5.1.1, as a \(\operatorname{GL}_{d}(F_{v})\)-module, we have
\[H^{0}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v_{0}},i}^{=d},\mathcal{F}_{ K^{v}(\infty),i})\cong\Big{(}\mathrm{ind}_{\overline{G}(\mathbb{Q})}^{ \overline{G}(\mathbb{A}^{\infty,v})\times\mathbb{Z}}\,z_{K^{v_{0}}(\infty),i} ^{*}\mathcal{F}_{K^{v_{0}}(\infty),i}\Big{)}^{K^{v}},\]
with \(\delta\in\overline{G}(\mathbb{Q})\mapsto(\delta^{\infty,v_{0}},\mathrm{val}\circ\mathrm{rn}(\delta_{v_{0}}))\in\overline{G}(\mathbb{A}^{\infty,v_{0},v})\times\mathbb{Z}\), and where the action of \(g_{v_{0}}\in\operatorname{GL}_{d}(F_{v_{0}})\) is given by that of \((g_{0}^{-\mathrm{val}\det g_{v_{0}}}g_{v_{0}},\mathrm{val}\det g_{v_{0}})\in\operatorname{GL}_{d}(F_{v_{0}})^{0}\times\mathbb{Z}\), where \(g_{0}\in\operatorname{GL}_{d}(F_{v_{0}})\) is any fixed element with \(\mathrm{val}\det g_{0}=1\). Moreover, cf. [1] corollary 5.1.2, if \(z_{K^{v_{0}}(\infty),i}^{*}\mathcal{F}_{K^{v_{0}}(\infty),i}\) is provided with an action of the kernel \((D_{v_{0},d}^{\times})^{0}\) of the valuation of the reduced norm, compatible with that of \(\overline{G}(\mathbb{Q})\hookrightarrow D_{v_{0},d}^{\times}\), then as a \(G(\mathbb{A}^{\infty})\)-module, we have
\[H^{0}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v_{0}},i}^{=d},\mathcal{F}_{K^{v}(\infty),i})\cong\Big{(}\mathcal{C}^{\infty}(\overline{G}(\mathbb{Q})\backslash\overline{G}(\mathbb{A}^{\infty}),\Lambda)\otimes_{D_{v_{0},d}^{\times}}\mathrm{ind}_{(D_{v_{0},d}^{\times})^{0}}^{D_{v_{0},d}^{\times}}z_{K^{v_{0}}(\infty),i}^{*}\mathcal{F}_{K^{v_{0}}(\infty),i}\Big{)}^{K^{v}} \tag{16}\]
In particular, cf. lemma 2.3.1 of [1], let \(\overline{\pi}\) be an irreducible sub-\(\overline{\mathbb{F}}_{l}\)-representation of \(\mathcal{C}^{\infty}(\overline{G}(\mathbb{Q})\backslash\overline{G}(\mathbb{A })/K^{v},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\) for \(\mathfrak{m}\) such that \(\overline{\rho}_{\mathfrak{m}}\) is irreducible. Write its local component \(\bar{\pi}_{v_{0}}\cong\pi_{v_{0}}[s]_{D}\) with \(\pi_{v_{0}}\) an irreducible cuspidal representation of \(\operatorname{GL}_{g}(F_{v_{0}})\) with \(d=sg\). Then \((\overline{\pi}^{v_{0}})^{K^{v}}\) is a sub-representation of \(H^{0}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v_{0}}}^{=d},HT(\pi_{v_{0}}^{ \vee},s))_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{ F}}_{l}\) and, cf. proposition 2.3.2 of [1], a sub-\(\overline{\mathbb{F}}_{l}\)-representation of \(H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}_{v_{0}}},\overline{ \mathbb{F}}_{l})_{\mathfrak{m}}\). Indeed, cf. theorem 2.4.3,
* by the main result of [1], as \(l>d\geq 2\) and \(\overline{\rho}_{\mathfrak{m}}\) is irreducible, then \(\mathfrak{m}\) is KHT free so that hypothesis (H1) of [1] is fulfilled.
* Theorem 2.4.3 gives us that the filtration of \(H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}_{v_{0}}},\overline{\mathbb{Z}}_{l})_{\mathfrak{m}}\) induced by the filtration of the nearby cycles at \(v_{0}\) is strict, which is the reason hypothesis (H3) was introduced in [1].
Finally, if the analog of Ihara's lemma for \(H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\) is true for the action of \(\operatorname{GL}_{d}(F_{v})\), then this is also the case for \(\overline{G}\). We now focus on the genericity of irreducible sub-\(\operatorname{GL}_{d}(F_{v})\)-modules of \(H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\) using the nearby cycles at the place \(v\).
Using proposition 3.7 (3) of [11], through Verdier duality, this is equivalent to proving the genericity of irreducible quotients of the \(\mathrm{GL}_{d}(F_{v})\)-module \(H^{d-1}(\mathrm{Sh}_{K^{v}(\infty),\bar{\eta}_{v}},\overline{\mathbb{F}}_{l})_{\mathfrak{m}}\), which is the content of proposition 3.3.4 using 3.3.3.
### Level raising
To a cohomological minimal prime ideal \(\widetilde{\mathfrak{m}}\) of \(\mathbb{T}(K)\), which corresponds to a maximal ideal of \(\mathbb{T}(K)[\frac{1}{l}]\), are associated both a near equivalence class of \(\overline{\mathbb{Q}}_{l}\)-automorphic representations \(\Pi_{\widetilde{\mathfrak{m}}}\) and a Galois representation
\[\rho_{\widetilde{\mathfrak{m}}}:G_{F}:=\mathrm{Gal}(\bar{F}/F)\longrightarrow \mathrm{GL}_{d}(\overline{\mathbb{Q}}_{l})\]
such that the eigenvalues of the Frobenius morphism at an unramified place \(w\) are given by the Satake parameters of the local component \(\Pi_{\widetilde{\mathfrak{m}},w}\) of \(\Pi_{\widetilde{\mathfrak{m}}}\). The semi-simple class \(\overline{\rho}_{\mathfrak{m}}\) of the reduction modulo \(l\) of \(\rho_{\widetilde{\mathfrak{m}}}\) depends only of the maximal ideal \(\mathfrak{m}\) of \(\mathbb{T}_{K}^{S}\) containing \(\widetilde{\mathfrak{m}}\).
We now allow infinite level at \(v\) and we denote by \(\mathbb{T}(K^{v}(\infty))\) the associated Hecke algebra. We fix a maximal ideal \(\mathfrak{m}\) in \(\mathbb{T}(K^{v}(\infty))\) such that
* the associated Galois representation \(\overline{\rho}_{\mathfrak{m}}:G_{F}\rightarrow\mathrm{GL}_{d}(\mathbb{F})\) is irreducible;
* \(\overline{\rho}_{\mathfrak{m}}|W_{F_{v}}\), after semi-simplification, is a direct sum of characters.
_Remark_.: For every minimal prime \(\widetilde{\mathfrak{m}}\subseteq\mathfrak{m}\), note that \(\Pi_{\widetilde{\mathfrak{m}},v}\) is of the form \(\mathrm{st}_{s_{1}}(\chi_{v,1})\times\cdots\times\mathrm{st}_{s_{r}}(\chi_{v,r})\) with \(s_{1}+\cdots+s_{r}=d\).
Let \(\mathcal{S}_{v}(\mathfrak{m})\) be the supercuspidal support of the modulo \(l\) reduction of any \(\Pi_{\widetilde{\mathfrak{m}},v}\) in the near equivalence class associated to a minimal prime ideal \(\widetilde{\mathfrak{m}}\subseteq\mathfrak{m}\). Recall that \(\mathcal{S}_{v}(\mathfrak{m})\) is a multi-set, i.e. a set with multiplicities, which only depends on \(\mathfrak{m}\). We decompose it according to the set \(\mathcal{Z}\) of Zelevinsky lines: as we supposed \(q_{v}\equiv 1\mod l\), every Zelevinsky line is reduced to a single equivalence class of an irreducible (super)cuspidal \(\overline{\mathbb{F}}_{l}\)-representation \(\varrho\) of some \(\mathrm{GL}_{g(\varrho)}(F_{v})\) with \(1\leq g(\varrho)\leq d\). Moreover our second hypothesis tells us that we are only concerned with \(\varrho\) being a character:
\[\mathcal{S}_{v}(\mathfrak{m})=\coprod_{\varrho\in\mathrm{Cusp}_{\overline{ \mathbb{F}}_{l}}(1,v)}\mathcal{S}_{\varrho}(\mathfrak{m}),\]
where \(\mathrm{Cusp}_{\overline{\mathbb{F}}_{l}}(1,v)\) is the set of \(\overline{\mathbb{F}}_{l}\)-characters of \(F_{v}^{\times}\).
**Notation 3.1.1**.: We denote by \(l_{\varrho}(\mathfrak{m})\) the multiplicity of \(\mathcal{S}_{\varrho}(\mathfrak{m})\).
For \(\widetilde{\mathfrak{m}}\subseteq\mathfrak{m}\), the local component \(\Pi_{\widetilde{\mathfrak{m}},v}\) of \(\Pi_{\widetilde{\mathfrak{m}}}\) can then be written as a full induced representation \(\mathop{\times}_{\varrho\in\mathrm{Cusp}_{\overline{\mathbb{F}}_{l}}(1,v)}\Pi_{\widetilde{\mathfrak{m}},\varrho}\), where each \(\Pi_{\widetilde{\mathfrak{m}},\varrho}\) is itself a full induced representation
\[\Pi_{\widetilde{\mathfrak{m}},\varrho}\cong\mathop{\times}_{i=1}^{r(\varrho)}\mathrm{St}_{l_{\varrho,i}(\widetilde{\mathfrak{m}})}(\pi_{v,i})\]
where \(r_{l}(\pi_{v,i})\cong\varrho\), \(l_{\varrho,1}(\widetilde{\mathfrak{m}})\geq\cdots\geq l_{\varrho,r(\varrho)}(\widetilde{\mathfrak{m}})\) and \(\sum_{i=1}^{r(\varrho)}l_{\varrho,i}(\widetilde{\mathfrak{m}})=l_{\varrho}(\mathfrak{m})\).
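_Example_.: The exponents \((l_{\varrho,1}(\widetilde{\mathfrak{m}})\geq\cdots\geq l_{\varrho,r(\varrho)}(\widetilde{\mathfrak{m}}))\) thus form a partition of \(l_{\varrho}(\mathfrak{m})\). For instance, if \(l_{\varrho}(\mathfrak{m})=3\), the possible shapes of \(\Pi_{\widetilde{\mathfrak{m}},\varrho}\) are
\[\mathrm{St}_{3}(\pi_{v,1}),\qquad\mathrm{St}_{2}(\pi_{v,1})\times\pi_{v,2},\qquad\pi_{v,1}\times\pi_{v,2}\times\pi_{v,3},\]
corresponding to the partitions \((3)\), \((2,1)\) and \((1,1,1)\) respectively.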
Suppose now that there exists \(\varrho\in\mathrm{Cusp}_{\overline{\mathbb{F}}_{l}}(1,v)\) with \(r(\varrho)\geq 2\). Then
\[H^{0}(\mathrm{Sh}_{K,\bar{s}_{v}},\mathrm{gr}^{-l_{\varrho,1}(\widetilde{\mathfrak{m}})}(\mathrm{gr}_{!}^{1}(\Psi_{K,\varrho})))_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\]
is a subspace of \(H^{0}(\mathrm{Sh}_{K,\bar{s}_{v}},\Psi_{K,v})_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\). Moreover this subspace, as an \(\overline{\mathbb{F}}_{l}\)-representation of \(\mathrm{GL}_{d}(F_{v})\), has a subspace of the shape \(\mathrm{st}_{l_{\varrho,1}(\widetilde{\mathfrak{m}})}(\varrho)\times\tau\), where the supercuspidal support of \(\tau\) contains \(\varrho\). In particular, as \(q_{v}\equiv 1\mod l\) and \(l>d\), this induced representation has both a generic and a non generic subspace.
We can then conclude that for the genericity property to be true for KHT Shimura varieties, one needs a level raising property as in proposition 3.3.1 of [1]. Fortunately such statements exist under some rather mild hypotheses, as for example the following result of T. Gee.
**Theorem 3.1.2**.: _([1] theorem 5.1.5) Let \(F=F^{+}E\) be a CM field where \(F^{+}\) is totally real and \(E\) is imaginary quadratic. Let \(d>1\) and \(l>d\) be a prime which is unramified in \(F^{+}\) and split in \(E\). Suppose that_
\[\overline{\rho}:G_{F}\longrightarrow\mathrm{GL}_{n}(\overline{\mathbb{F}}_{l})\]
_is an irreducible representation which is unramified at all places of \(F\) lying above primes which do not split in \(E\) and which satisfies the following properties._
* \(\overline{\rho}\) _is automorphic of weight_ \(\underline{a}\)_, where we assume that for all_ \(\tau\in(\mathbb{Z}^{d})^{\mathrm{hom}(F,\mathbb{C})}\) _we have either_ \[l-1-d\geq a_{\tau,1}\geq\cdots\geq a_{\tau,d}\geq 0,\] _or_ \[l-1-d\geq a_{c\tau,1}\geq\cdots\geq a_{c\tau,d}\geq 0.\] _Note in particular that these conditions imply_ \(\overline{\rho}^{c}\cong\overline{\rho}^{\vee}\epsilon^{1-d}\)_._
* \(\overline{F}^{\mathrm{ker}\,\mathrm{ad}\,\overline{\rho}}\) _does not contain_ \(F(\zeta_{l})\)_._
* \(\overline{\rho}(G_{F^{+}(\zeta_{l})})\) _is big._
_Let \(u\) be a finite place of \(F^{+}\) which splits in \(F\) and does not divide \(l\). Choose a place \(v\) of \(F\) above \(u\) and an inertial type \(\tau_{v}\). Assume that \(\overline{\rho}_{|G_{F_{v}}}\) has a lift to characteristic zero of type \(\tau_{v}\)._
_Then there is an automorphic representation \(\pi\) of \(\mathrm{GL}_{n}(\mathbb{A}_{F})\) of weight \(\underline{a}\) and level prime to \(l\) such that_
* \(\overline{r}_{l,\iota}(\pi)\cong\overline{\rho}\)_._
* \(r_{l,\iota}(\pi)_{|G_{F_{v}}}\) _has type_ \(\tau_{v}\)_._
* \(\pi\) _is unramified at all places_ \(w\neq v\) _of_ \(F\) _at which_ \(\overline{\rho}\) _is unramified._
_Remark_.: In this text we focus only on the trivial coefficients \(\overline{\mathbb{Z}}_{l}\), i.e. on the case \(a_{\tau,1}=\cdots=a_{\tau,d}=a_{c\tau,1}=\cdots=a_{c\tau,d}=0\), but we could also deal with other weights as in the previous theorem.
### Local and global monodromy
The previous filtrations on \(\Psi_{K,\varrho}\) are compatible with the action of the monodromy operator \(N_{v}\) at \(v\). More precisely, over \(\overline{\mathbb{Q}}_{l}\), for \(i\geq 2\), we have isomorphisms
\[N_{v}:\mathrm{gr}^{-i}(\mathrm{gr}_{!}^{k+1}(\Psi_{K,\varrho}))\otimes_{ \overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\longrightarrow\mathrm{gr}^ {-i}(\mathrm{gr}_{!}^{k}(\Psi_{K,\varrho}(1)))\otimes_{\overline{\mathbb{Z}}_ {l}}\overline{\mathbb{Q}}_{l}.\]
As explained in the introduction, we seek a geometric definition of \(N_{v}\). In the semi-stable case, i.e. when the level \(K\) is such that \(K_{v}\) is the Iwahori subgroup \(\mathrm{Iw}_{v}\) of matrices which are upper triangular modulo \(\varpi_{v}\), following Rapoport-Zink we obtained a geometric nilpotent monodromy operator \(N_{v}^{geo}:\Psi_{K^{v}\mathrm{Iw}_{v},v}\longrightarrow\Psi_{K^{v}\mathrm{Iw}_{v},v}(1)\) over \(\overline{\mathbb{Z}}_{l}\) whose tensor product with \(\overline{\mathbb{Q}}_{l}\) coincides with the usual arithmetic nilpotent monodromy operator and such that, cf. [11] 3.6.13, its modulo \(l\) reduction \(\overline{N}_{v}^{geo}\) shares the same order of nilpotency with \(N_{v}^{geo}\).
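_Example_.: As orientation (a standard fact about the local Langlands correspondence, up to normalization): \(\mathrm{st}_{s}(\chi_{v})\) corresponds to \(\mathrm{Sp}(\chi_{v},s)\), whose monodromy is a single nilpotent Jordan block of size \(s\). Hence for \(\Pi_{v}\cong\mathrm{st}_{s_{1}}(\chi_{v,1})\times\cdots\times\mathrm{st}_{s_{r}}(\chi_{v,r})\), the monodromy operator has Jordan type \((s_{1},\cdots,s_{r})\) and order of nilpotency \(\max_{i}s_{i}\).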
We want to obtain a similar definition in infinite level at \(v\). Recall that, concerning our Ihara's lemma statement, we are only interested in automorphic representations \(\Pi\) such that \(\Pi_{v}\) is an irreducible parabolic induction \(\mathrm{st}_{s_{1}}(\chi_{v,1})\times\cdots\times\mathrm{st}_{s_{r}}(\chi_{v,r})\) for characters \(\chi_{v,i}\). We then know that these representations appear in the cohomology of \(\mathrm{Sh}_{K,\bar{s}_{v}}\) with coefficients in \(\Psi_{\varrho}\) for \(\varrho\) a \(\overline{\mathbb{F}}_{l}\)-character. Recall that for an open compact subgroup \(K\), \(\Psi_{K,\chi_{v}}\) is a direct factor of \(\Psi_{K,\varrho}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\). We then consider the pullback \(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}:=\Psi_{K,\varrho}\times_{\Psi_{K,\varrho}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}}\Psi_{K,\chi_{v}}\),
where the cokernel \(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}^{\chi_{v}}\) of \(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\hookrightarrow\Psi_{K,\varrho}\) is a free perverse sheaf. From [1], \(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\) has a \(\overline{\mathbb{Z}}_{l}\)-bi-filtration with graded parts \(\mathrm{gr}^{-i}(\mathrm{gr}_{!}^{k}(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}))\), where \(1\leq k\leq i\leq d\), such that (recall that, as \(q_{v}\equiv 1\mod l\) with \(l>d\), there exists no cuspidal representation, other than the characters themselves, whose modulo \(l\) reduction has a supercuspidal support made of characters)
\[\mathrm{gr}^{-i}(\mathrm{gr}_{!}^{k}(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}))\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\cong P(i,\chi_{v})(\frac{1-i+2(k-1)}{2}).\]
Note that, up to homothety, and because the modulo \(l\) reduction of \(\mathrm{st}_{k+i}(\chi_{v})\) is irreducible for \(k+i\leq d<l\), \(P(k+i,\chi_{v})\) admits a unique stable \(\overline{\mathbb{Z}}_{l}\)-lattice. There then exists a unique \(\overline{\mathbb{Z}}_{l}\)-monodromy operator \(N_{v}(\chi_{v})\) acting on \(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\) such that
* it coincides with the arithmetic monodromy operator over \(\overline{\mathbb{Q}}_{l}\),
* it is of nilpotency order \(d\) modulo \(l\), inducing isomorphisms \[\mathrm{gr}^{-i}(\mathrm{gr}_{!}^{k+1}(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_ {l}}))\longrightarrow\mathrm{gr}^{-i}(\mathrm{gr}_{!}^{k}(\Psi_{K,\chi_{v}, \overline{\mathbb{Z}}_{l}}(1))).\]
Consider now the pushout \(\widetilde{\Psi}_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\)
As explained in the introduction of [11], both \(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\) and \(\widetilde{\Psi}_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\) are constructed from the locally constant sheaf \(j^{=1,*}\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\) (resp. \(j^{=1,*}\widetilde{\Psi}_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\)) using the adjunction morphisms \(j^{=h}_{!}j^{=h,*}\) without any saturation process (in our case of a character the proof is quite simple, as we do not have to bother about the \(p+\)-intermediate extensions, cf. lemma 2.1.6 of loc. cit., which is the main point dealt with in [11]). Then, as these two local systems are \(\overline{\mathbb{Z}}_{l}\)-lattices of the \(F_{v}^{\times}\times F_{v}^{\times}\times W_{v}\)-representation \(\chi_{v}\otimes\chi_{v}\otimes\chi_{v}\), they are obviously isomorphic. We then deduce that \(\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\) and \(\widetilde{\Psi}_{K,\chi_{v},\overline{\mathbb{Z}}_{l}}\) are isomorphic. Consider then the following nilpotent monodromy operator
\[N_{v,\chi_{v}}:\widetilde{\Psi}_{K,\chi_{v},\overline{\mathbb{Z}}_{l}} \overset{\sim}{\longrightarrow}\Psi_{K,\chi_{v},\overline{\mathbb{Z}}_{l}} \overset{N_{v}(\chi_{v})}{\longrightarrow}\Psi_{K,\chi_{v},\overline{\mathbb{ Z}}_{l}}.\]
Consider then the composite \(N^{geo}_{v}\)
where \(\operatorname{Cusp}(\varrho)\) is the set of characters whose modulo \(l\) reduction is isomorphic to \(\varrho\). Note that the two horizontal maps are monomorphisms which become isomorphisms after tensoring with \(\overline{\mathbb{Q}}_{l}\), and \(N^{geo}_{v}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{Q}}_{l}\) coincides with the usual arithmetic nilpotent monodromy operator.
**Lemma 3.2.1**.: _The order of nilpotency of \(N^{geo}_{v}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\) is \(d\)._
Proof.: Over \(\overline{\mathbb{Z}}_{l}\), for any \(\chi_{v}\in\operatorname{Cusp}(\varrho)\) we have
\[\Psi_{K,\varrho}\twoheadrightarrow\widetilde{\Psi}_{K,\chi_{v},\overline{ \mathbb{Z}}_{l}}\overset{N^{d}_{v,\chi_{v}}}{\longrightarrow}\Psi_{K,\chi_{v },\overline{\mathbb{Z}}_{l}}\hookrightarrow\Psi_{K,\varrho}.\]
The result then follows from the fact that all the above maps are strict, i.e. their cokernels are torsion free.
Consider a maximal ideal \(\mathfrak{m}\) of \(\mathbb{T}^{S}(K^{v}(\infty))\) given by the level raising statement of theorem 3.1.2 and such that \(H^{d-1}(\operatorname{Sh}_{K^{v}\operatorname{Iw}_{v},\overline{\eta}}, \overline{\mathbb{Z}}_{l})_{\mathfrak{m}}\) is torsion free and
non zero. In [10], over \(\overline{\mathbb{Q}}_{l}\), we proved that the filtration \(\operatorname{Fil}_{!}^{\bullet}(\Psi_{K,v})\) coincides with the kernel filtration of the monodromy on \(\Psi_{K,v}\). The spectral sequence
\[E_{1,!,\mathfrak{m}}^{i,j}:=H^{i+j}(\operatorname{Sh}_{K,\bar{s}_{v}}, \operatorname{gr}_{!}^{-i}(\Psi_{K,v}))_{\mathfrak{m}}\Rightarrow H^{i+j}( \operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K,v})_{\mathfrak{m}}\]
is concentrated in \(i+j=0\) and so degenerates at \(E_{1}\). We denote by \(N^{coho}_{\mathfrak{m},\varrho}\) the monodromy operator associated to \(N^{geo}_{v}\) on \(H^{*}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},\Psi_{K^{v}(\infty),\varrho})_{\mathfrak{m}}\), and by \(\overline{N}^{coho}_{\mathfrak{m},\varrho}\) its modulo \(l\) version.
_Main observation_: let \(\varrho\) be a character which is a subquotient of \(\overline{\rho}_{\mathfrak{m},v}\), and consider \(\widetilde{\mathfrak{m}}\subseteq\mathfrak{m}\) such that, thanks to theorem 3.1.2:
* \((\Pi^{\infty}_{\widetilde{\mathfrak{m}}})^{K^{v}}\neq 0\) so that it appears in \(H^{d-1}(\operatorname{Sh}_{K^{v}\operatorname{Iw}_{v},\bar{\eta}},\overline{\mathbb{Z}}_{l})_{\mathfrak{m}}\),
* \(\Pi^{\infty}_{\widetilde{\mathfrak{m}},v}\cong\operatorname{st}_{l_{\varrho}(\mathfrak{m})}(\chi_{v})\times\tau_{v}\), where \(\chi_{v}\in\operatorname{Cusp}(\varrho)\) and the supercuspidal support of the modulo \(l\) reduction of \(\tau_{v}\) does not contain \(\varrho\).
Recall that from theorem 2.4.3, the filtration of \(\Psi_{K,\varrho}\) gives us a filtration of \(H^{d-1}(\operatorname{Sh}_{K^{v}(\infty),\bar{\eta}},\overline{\mathbb{Z}}_{l})_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\) whose graded parts are the \(H^{0}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},\operatorname{gr}^{-i}(\operatorname{gr}_{!}^{k}(\Psi_{K,\varrho})))_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\). From the definition of \(N^{geo}_{v}\), we then observe that the image of \((\overline{N}^{coho}_{\mathfrak{m},\varrho})^{l_{\varrho}(\mathfrak{m})-1}\)
* is, by the previous lemma, non zero
* and its irreducible \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-subquotients are, up to multiplicities, also subquotients of \(H^{0}(\operatorname{Sh}_{K^{v}(\infty),\bar{s}_{v}},P(l_{\varrho}(\mathfrak{m}),\chi_{v})(\frac{l_{\varrho}(\mathfrak{m})-1}{2}))_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\), for any \(\chi_{v}\in\operatorname{Cusp}(\varrho)\), as this modulo \(l\) reduction does not depend on the choice of such a lift of \(\varrho\).
### Typicality and monodromy
As explained in [11], the \(\overline{\mathbb{Q}}_{l}\)-cohomology of \(\operatorname{Sh}_{K,\bar{\eta}}\) can be written as
\[H^{d-1}(\operatorname{Sh}_{K,\bar{\eta}},\overline{\mathbb{Q}}_{l})_{\mathfrak{m}}\cong\bigoplus_{\pi\in\mathcal{A}_{K}(\mathfrak{m})}(\pi^{\infty})^{K}\otimes V(\pi^{\infty}),\]
where
* \(\mathcal{A}_{K}(\mathfrak{m})\) is the set of equivalence classes of automorphic representations of \(G(\mathbb{A})\) with non trivial \(K\)-invariants and whose modulo \(l\) Satake parameters outside \(S\) are prescribed by \(\mathfrak{m}\),
* and \(V(\pi^{\infty})\) is a representation of \(\operatorname{Gal}_{F,S}\).
As \(\overline{\rho}_{\mathfrak{m}}\) is supposed to be absolutely irreducible, then as explained in chapter VI of [11], if \(V(\pi^{\infty})\) is non zero, then \(\pi\) is a weak transfer of a cohomological automorphic representation \((\Pi,\psi)\) of \(\operatorname{GL}_{d}(\mathbb{A}_{F})\times\mathbb{A}_{F}^{\times}\) with \(\Pi^{\vee}\cong\Pi^{c}\) where \(c\) is the complex conjugation. Attached to such a \(\Pi\) is a global Galois representation \(\rho_{\Pi,l}:\operatorname{Gal}_{F,S}\longrightarrow\operatorname{GL}_{d}( \overline{\mathbb{Q}}_{l})\) which is irreducible.
**Theorem 3.3.1**.: _(cf. [10] theorem 2.20) If \(\rho_{\Pi,l}\) is strongly irreducible, meaning it remains irreducible when it is restricted to any finite index subgroup, then \(V(\pi^{\infty})\) is a semi-simple representation of \(\operatorname{Gal}_{F,S}\)._
_Remark_.: The Tate conjecture predicts that \(V(\pi^{\infty})\) is always semi-simple.
**Definition 3.3.2**.: (cf. [11] SS5) We say that \(\mathfrak{m}\) is KHT-typic for \(K\) if, as a \(\mathbb{T}(K)_{\mathfrak{m}}[\operatorname{Gal}_{F,S}]\)-module,
\[H^{d-1}(\operatorname{Sh}_{K,\bar{\eta}},\overline{\mathbb{Z}}_{l})_{\mathfrak{m}}\cong\sigma_{\mathfrak{m},K}\otimes_{\mathbb{T}(K)_{\mathfrak{m}}}\rho_{\mathfrak{m},K},\]
for some \(\mathbb{T}(K)_{\mathfrak{m}}\)-module \(\sigma_{\mathfrak{m},K}\) on which \(\operatorname{Gal}_{F,S}\) acts trivially and
\[\rho_{\mathfrak{m},K}:\operatorname{Gal}_{F,S}\longrightarrow\operatorname{ GL}_{d}(\mathbb{T}(K)_{\mathfrak{m}})\]
is the stable lattice of \(\bigoplus_{\widetilde{\mathfrak{m}}\subseteq\mathfrak{m}}\rho_{\widetilde{ \mathfrak{m}}}\) introduced in the introduction.
**Proposition 3.3.3**.: _We suppose that for all \(\pi\in\mathcal{A}_{K}(\mathfrak{m})\), the Galois representation \(V(\pi^{\infty})\) is semi-simple. Then \(\mathfrak{m}\) is KHT-typic for \(K\)._
Proof.: By proposition 5.4 of [11], it suffices to deal with \(\overline{\mathbb{Q}}_{l}\)-coefficients. From [15] proposition VII.1.8 and the semi-simplicity hypothesis, \(V(\pi^{\infty})\cong R(\pi)^{\oplus n(\pi)}\) where \(R(\pi)\) is of dimension \(d\). We then write
\[(\pi^{\infty})^{K}\otimes_{\overline{\mathbb{Q}}_{l}}R(\pi)\cong(\pi^{\infty })^{K}\otimes_{\mathbb{T}(K)_{\mathfrak{m},\overline{\mathbb{Q}}_{l}}}( \mathbb{T}(K)_{\mathfrak{m},\overline{\mathbb{Q}}_{l}})^{d},\]
and \((\pi^{\infty})^{K}\otimes_{\overline{\mathbb{Q}}_{l}}V(\pi^{\infty})\cong((\pi^{\infty})^{K})^{\oplus n(\pi)}\otimes_{\mathbb{T}(K)_{\mathfrak{m},\overline{\mathbb{Q}}_{l}}}(\mathbb{T}(K)_{\mathfrak{m},\overline{\mathbb{Q}}_{l}})^{d}\) and finally
\[H^{d-1}(\operatorname{Sh}_{K,\bar{\eta}},\overline{\mathbb{Q}}_{l})_{ \mathfrak{m}}\cong\sigma_{\mathfrak{m},K,\overline{\mathbb{Q}}_{l}}\otimes_{ \mathbb{T}(K)_{\mathfrak{m},\overline{\mathbb{Q}}_{l}}}(\mathbb{T}(K)_{ \mathfrak{m},\overline{\mathbb{Q}}_{l}})^{d},\]
with \(\sigma_{\mathfrak{m},K,\overline{\mathbb{Q}}_{l}}\cong\bigoplus_{\pi\in\mathcal{A}_{K}(\mathfrak{m})}((\pi^{\infty})^{K})^{\oplus n(\pi)}\). The result then follows from [15] theorem VII.1.9, which ensures that \(R(\pi)\cong\rho_{\widetilde{\mathfrak{m}}}\) if \(\widetilde{\mathfrak{m}}\) is the prime ideal associated to \(\pi\).
Let \(\varrho\) be a \(\overline{\mathbb{F}}_{l}\)-character with \(l_{\varrho}(\mathfrak{m})>0\). Then \(H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K,\varrho})_{\mathfrak{m}}\), as a direct factor of \(H^{d-1}(\operatorname{Sh}_{K,\bar{\eta}},\overline{\mathbb{Q}}_{l})_{\mathfrak{m}}\), is also typic, i.e.
\[H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K,\varrho})_{\mathfrak{m}}\cong\sigma_{\mathfrak{m},K,\varrho}\otimes_{\mathbb{T}(K)_{\mathfrak{m}}}\rho_{\mathfrak{m},K,\varrho}.\]
The monodromy operator \(N^{coho}_{\mathfrak{m},\varrho}\) acting on \(H^{0}(\operatorname{Sh}_{K,\bar{s}_{v}},\Psi_{K,\varrho})_{\mathfrak{m}}\) is such that
\[N^{coho}_{\mathfrak{m},\varrho}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{ \mathbb{Q}}_{l}\cong\operatorname{Id}\otimes N_{\mathfrak{m},\overline{ \mathbb{Q}}_{l}},\]
i.e. it acts trivially on the first factor \(\sigma_{\mathfrak{m},K,\varrho}\). We then deduce that \(N^{coho}_{\mathfrak{m},\varrho}\) induces a nilpotent operator \(N_{\mathfrak{m},\varrho}\) (resp. \(\overline{N}_{\mathfrak{m},\varrho}\)) on \(\rho_{\mathfrak{m}}\) (resp. \(\rho_{\mathfrak{m}}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\)).
_Remark_.: For a \(K^{v}\) and \(\mathfrak{m}\) such that the irreducible constituents of \(\overline{\rho}_{\mathfrak{m},v}\) are characters, we notice that \(\rho_{\mathfrak{m},K}\) does not depend on the level \(K_{v}\), and we then denote it simply by \(\rho_{\mathfrak{m}}\).
**Proposition 3.3.4**.: _Let \(\tau_{v}\) be an irreducible \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module which is a quotient of \(\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}} \overline{\mathbb{F}}_{l}\). Then \(\tau_{v}\) is generic._
Proof.: Recall that \(\tau_{v}\) can be written as a full parabolic induced representation
\[\tau_{v}\cong\mathop{\times}_{\varrho\in\operatorname{Cusp}_{\overline{\mathbb{F}}_{l}}(1,v)}\tau_{\varrho}.\]
By the main observation of §3.2, the factors \(\tau_{\varrho}\) of any of the irreducible constituents \(\tau_{v}\cong\mathop{\times}_{\varrho\in\operatorname{Cusp}_{\overline{\mathbb{F}}_{l}}(1,v)}\tau_{\varrho}\) of \(\operatorname{Top}_{1}(\sigma_{\mathfrak{m},K^{v}(\infty)})\) are such that \(\tau_{\varrho}\) is generic. As this is true for all \(\varrho\in\operatorname{Cusp}_{\overline{\mathbb{F}}_{l}}(1,v)\), \(\tau_{v}\) is generic.
We now want to prove, by induction on \(\delta\), that \(\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{\delta}\) is generic as a \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module, meaning that all its irreducible sub-quotients are generic. We just proved that the statement is true for \(\delta=1\). Consider
\[0\to\mathfrak{N}^{\delta-1}\operatorname{Top}\Bigl{(}\sigma_{ \mathfrak{m},K^{v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{ \mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{\delta} \longrightarrow\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{ v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l} \Bigr{)}/\mathfrak{N}^{\delta}\] \[\longrightarrow\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^ {v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l} \Bigr{)}/\mathfrak{N}^{\delta-1}\to 0.\]
As \(\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}\) is semi-simple as a \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module, let \(M_{\delta}\), which is no longer an \(R\)-module, be such that
\[\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{ \overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{ \delta}=\mathfrak{N}^{\delta-1}\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K ^{v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l} \Bigr{)}/\mathfrak{N}^{\delta}\oplus M_{\delta}.\]
Consider then any \(r\in\mathfrak{N}^{\delta-1}\): then multiplication by \(r\) in \(\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{ \overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{\delta}\) induces a map
\[M_{\delta}\smash{\mathop{\longrightarrow}\limits^{\times r}}\mathfrak{N}^{ \delta-1}\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes _{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{ \delta},\]
with image \(M_{\delta}(r)\) which, as a \(\overline{\mathbb{F}}_{l}[\operatorname{GL}_{d}(F_{v})]\)-module, is generic because \(M_{\delta}\) is, by our induction hypothesis. Now as \(\mathfrak{N}^{\delta-1}\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{\delta}\) is generated by the \(M_{\delta}(r)\) for \(r\) varying inside \(\mathfrak{N}^{\delta-1}\), we deduce that it is generic, and hence so is \(\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{\overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{\delta}\).
Finally we conclude using the fact that, as \(\mathfrak{N}\) is topologically nilpotent, every quotient of \(\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{ \overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}\) has to be a quotient of some \(\operatorname{Top}\Bigl{(}\sigma_{\mathfrak{m},K^{v}(\infty)}\otimes_{ \overline{\mathbb{Z}}_{l}}\overline{\mathbb{F}}_{l}\Bigr{)}/\mathfrak{N}^{\delta}\).
|
2306.17159 | Greedy Gradient-free Adaptive Variational Quantum Algorithms on a Noisy
Intermediate Scale Quantum Computer | Hybrid quantum-classical adaptive Variational Quantum Eigensolvers (VQE)
already hold the potential to outperform classical computing for simulating
quantum many-body systems. However, their practical implementation on current
quantum processing units (QPUs) is very challenging due to the noisy evaluation
of a polynomially scaling number of observables, undertaken for operator
selection and optimisation of a high-dimensional cost function. To overcome
this, we propose new techniques to execute adaptive algorithms on a 25-qubit
error-mitigated QPU coupled to a GPU-accelerated HPC simulator. Targeting
physics applications, we compute the ground state of a 25-body Ising model
using the newly introduced Greedy Gradient-free Adaptive VQE (GGA-VQE)
requiring only five circuit measurements per iteration, regardless of the
number of qubits and size of the operator pool. Towards chemistry, we combine
the GGA-VQE and Overlap-ADAPT-VQE algorithms to approximate a molecular system
ground state. We show that the QPU successfully executes the algorithms and
yields the correct choice of parametrised unitary operators. While the QPU
evaluation of the resulting ansatz wave-function is polluted by hardware noise,
a single final evaluation of the sought-after observables on a classical
GPU-accelerated/noiseless simulator allows the recovery of the correct
approximation of the ground state, thus highlighting the need for hybrid
quantum-classical observable measurement. | César Feniou, Baptiste Claudon, Muhammad Hassan, Axel Courtat, Olivier Adjoua, Yvon Maday, Jean-Philip Piquemal | 2023-06-29T17:58:02Z | http://arxiv.org/abs/2306.17159v5 | Greedy Gradient-free Adaptive Variational Quantum Algorithms on a Noisy Intermediate Scale Quantum Computer
###### Abstract
Hybrid quantum-classical algorithms hold the potential to outperform classical computing methods for simulating quantum many-body systems. Adaptive Variational Quantum Eigensolvers (VQE) in particular have demonstrated an ability to generate highly accurate ansatz wave-functions using compact quantum circuits. However, the practical implementation of these methods on current quantum processing units (QPUs) faces a significant challenge: the need to measure a polynomially scaling number of observables during the operator selection step so as to optimise a high-dimensional, noisy cost function. In this study, we introduce new techniques to overcome these difficulties and execute hybrid adaptive algorithms on a 25-qubit error-mitigated quantum hardware coupled to a high performance GPU-accelerated quantum simulator. As a physics application, we compute the ground state of a 25-body Ising model using a greedy gradient-free adaptive VQE that requires only five circuit measurements for each iteration, regardless of the number of qubits and the size of the operator pool. As a chemistry application, we combine this greedy, gradient-free approach with the Overlap-ADAPT-VQE algorithm to approximate the ground state of a molecular system. The successful implementation of these hybrid QPU/simulator computations enhances the applicability of adaptive VQEs on QPUs and instills further optimism regarding the near-term advantages of quantum computing.
## 1 Introduction
Quantum computing has gained considerable interest due to its potential to solve complex computational problems that are intractable on classical devices. Finding the ground state of a many-body quantum system is one such problem as it suffers from an exponentially scaling complexity in the system size [1]. Quantum computing provides an appealing solution as it allows, in principle, the encoding of the exponentially scaling many-body wave-function onto a linearly scaling qubit register. In the context of quantum chemistry, extensive efforts have been made to develop quantum algorithms for ground and excited state preparation of molecular systems with the goal of ultimately surpassing classical techniques [2, 3]. In order to be able to take advantage of near-term quantum devices in the noisy intermediate-scale quantum (NISQ) era, emphasis has been placed on the development of hybrid quantum-classical algorithms such as the Variational Quantum Eigensolver (VQE) which incorporates a quantum subroutine within a classical optimization loop thereby reducing the quantum computer workload and mitigating the effects of hardware noise and measurement errors [4].
The core idea of the variational quantum eigensolver is to generate a parameterised wave-function, known as the ansatz, and then variationally tune this ansatz so as to minimise the expectation value of some relevant Hermitian operator, typically the system Hamiltonian. The fundamental challenge in implementing the VQE methodology on NISQ devices is thus to construct an ansatz wave-function that can accurately describe the ground-state of the Hermitian operator under study and, at the same time, can be represented on shallow quantum circuits which are not dominated by noise. Most commonly used VQEs for quantum chemistry are so-called "fixed-ansatz" methods wherein the ansatz wave-function consists of a predetermined product of parametrised unitary operators acting on an initial (usually Hartree-Fock reference) state [5, 6, 7, 8, 9, 10]. Whether hardware-efficient or chemically-inspired, these "fixed" ansatz methods have limited accuracy and do not provide a route for exact simulations of strongly correlated systems on near-term quantum hardware [6, 8]. Since fixed ansatze are by definition system-agnostic, they are also likely to contain superfluous operators that do not contribute to a better approximation of the ground
state of the Hermitian operator under study [11]. Such redundant operators needlessly increase the length of the ansatz circuit as well as the number of variational parameters to be tuned, both of which are serious problems for NISQ-era devices.
Unsurprisingly therefore, recent works have proposed iterative VQE protocols that construct system-tailored ansatze using some kind of quasi-greedy strategy. The ADAPT-VQE algorithm [12] has made a notable impact in the field by demonstrating a significant reduction in the redundant terms in the ansatz circuits for a range of molecules, thus enhancing the accuracy and efficiency of the VQE. The core idea of the ADAPT-VQE algorithm is to begin with, for instance, an initial Hartree-Fock state which is easy to implement on a quantum circuit, and then to progressively grow the ansatz wave-function by carefully selecting, at each iteration, a parametrised unitary operator from a pre-selected operator pool and appending this parameterised unitary operator to the current ansatz. More precisely, given a pool \(\mathbb{U}\) of parameterised unitary operators, a Hermitian operator \(\widehat{A}\) whose ground state is being prepared, and the current ansatz \(\left|\Psi_{\text{curr}}\right\rangle\), the ADAPT-VQE selection criterion consists of identifying the unitary operator \(\mathcal{U}^{*}\in\mathbb{U}\) such that
\[\mathcal{U}^{*}=\operatorname*{argmax}_{\mathcal{U}\in\mathbb{U}}\left|\frac{ d}{d\theta}\langle\Psi_{\text{curr}}\left|\mathcal{U}(\theta)^{\dagger} \widehat{A}\mathcal{U}(\theta)\right|\Psi_{\text{curr}}\rangle\right|_{ \theta=0}\right|. \tag{1}\]
In other words, the ADAPT-VQE selection criterion is based on evaluating the magnitude of a certain gradient of the expectation value of the Hermitian operator whose ground state is being prepared (see Section 2.1 for further details). Of course, once the correct parameterised unitary operator has been identified and appended to the current ansatz wave-function, one still has to run a classical VQE procedure to variationally tune all parameters appearing in the new ansatz wave-function so as to minimise the expectation value of the Hermitian operator \(\widehat{A}\). A detailed workflow of this procedure can be found in Section 2.1.
Clearly, the first computational bottleneck in the ADAPT-VQE methodology is the operator selection procedure which requires measurements on the quantum device on the order of the size of the operator pool multiplied by the number of terms in the definition of the Hermitian operator. An even bigger obstacle, which typically arises in both "fixed-ansatz" and adaptive methods, is the actual VQE procedure itself wherein the ansatz wave-function is variationally tuned to minimise the associated expectation value. Indeed, the cost function for this second step, since it arises from measurements on a NISQ device, is both high-dimensional and extremely noisy, thus often rendering the associated optimisation problem computationally intractable [13].
The combination of these two difficulties is a major obstacle in the practical implementation of ADAPT-VQE-type algorithms on the current generation of quantum hardware. It is worth noting that several improvements to the original ADAPT procedure have been suggested, including a more structured approach to operator selection [14], a gradient screening process [15], and an energy evaluation scheme [16, 17, 18] that reduces the need for numerous observable evaluations. However, despite these notable advancements, the practical realisation of ADAPT-like algorithms on quantum hardware continues to present a persistent challenge. In order to expedite the practical implementations of these promising adaptive algorithms on near-term quantum devices, we believe it is necessary to develop noise-resistant variants of these methods that can reasonably approximate ground state wave-functions using a level of quantum resources that is realistic in the near term. The current study thus aims to address this challenge by presenting a resource-efficient, greedy gradient-free adaptive variational quantum algorithm that can successfully be implemented on the quantum devices of today.
The resource-efficient adaptive algorithm that we introduce is motivated by the gradient-free, analytical optimisation approaches that have been proposed in the VQE literature on 'fixed-ansatz' methods. Indeed, it is well-known that the expectation value of a Hermitian operator with respect to an ansatz wave-function \(\Psi(\theta)\) parametrised by a single quantum gate is simply an elementary trigonometric function of \(\theta\)[19, 20, 21, 22, 23, 24]. Thus, for a wave-function parametrised by a single rotation gate, for instance, only two measurements of the expectation value for judiciously chosen values of \(\theta\), allow an exact reconstruction of the full expectation value as a parametrised function of \(\theta\). Using this insight, we have replaced the conventional ADAPT-VQE operator selection criterion with a newly developed gradient-free energy sorting approach that allows us to identify _the locally optimal_ parametrised unitary operator and the optimal angle, which when appended to the current ansatz wavefunction, will produce a new ansatz wave-function with the biggest drop in expectation value. In other words, in contrast to the ADAPT-VQE gradient-based criterion (1), our adaptive algorithm selects a locally optimal unitary operator that satisfies
\[\mathcal{U}^{*}=\operatorname*{argmin}_{\mathcal{U}\in\mathbb{U}}\min_{ \theta\in[-\pi,\pi)}\langle\Psi_{\text{curr}}\left|\mathcal{U}(\theta)^{ \dagger}\widehat{A}\mathcal{U}(\theta)\right|\Psi_{\text{curr}}\rangle. \tag{2}\]
Indeed, as we discuss in Section 2.3 below, the one-dimensional objective functions appearing in the optimisation problem (2), also known as _landscape_ functions, can be expressed as analytical functions of \(\theta\) for any unitary operator \(\mathcal{U}\) belonging to popular choices of operator pools. Moreover, these analytical landscape functions can be determined explicitly using a fixed number of measurements that depends on the nature of the operator pool and the Hermitian operator \(\widehat{A}\). Consequently, using a minimal number of measurements of the quantum device, we can immediately determine both the best unitary operator
and the optimal angle that should be used to update the current ansatz wave-function. By iteratively growing an ansatz wave-function using only such locally optimal parametrised unitary operators and not re-optimising the "frozen-core" of the previous ansatz, we are able to eschew entirely the need to optimise a multi-dimensional noisy objective function. We refer to this adaptive algorithm as the greedy, gradient-free adaptive variational quantum eigensolver or GGA-VQE algorithm for short (see Section 2.3 below for a detailed description). Let us remark here that while gradient-free, analytical optimisation approaches for VQEs have been explored in the 'fixed-ansatz' literature [19, 20, 21, 22] and energy-sorting algorithms to improve the ADAPT-VQE operator selection criterion have also been proposed [25, 26], to the best of our knowledge, our work is the first attempt to combine both approaches and develop a greedy gradient-free adaptive VQE.
Equipped with this resource-efficient methodology, we explore practical implementations of such greedy gradient-free adaptive algorithms on quantum devices. For our first numerical experiment, we consider the ground state preparation of an open boundary, one-dimensional transverse-field Ising model. We show that for Ising Hamiltonians of this nature, using a minimal hardware-efficient operator pool [27], each iteration of the GGA-VQE algorithm requires measuring only _five_ observables on quantum circuits, regardless of the system size (i.e., the number of qubits involved). As a proof of concept, we run the GGA-VQE algorithm for such an Ising model on a 25-qubit register on a state-of-the-art, trapped ion quantum computer and successfully achieve a ground state fidelity of over 98%.
Our second numerical experiment, on the same trapped ion quantum computer, pertains to the recently developed Overlap-ADAPT-VQE algorithm [28] that seeks to generate a compact approximation of a target wave-function through an iterative, adaptive overlap maximisation procedure. We consider a stretched hydrogen fluoride (HF) molecular system, and we take as the target wave-function, an approximate ground-state generated through a classical QEB-ADAPT-VQE procedure [29]. We then apply the Overlap-GGA-VQE algorithm (described in detail in Section 2.5) to progressively grow an ansatz wave-function that achieves an overlap of over 99% with the target wave-function. Since the Overlap-ADAPT-VQE algorithm requires measuring wave-function overlaps on quantum devices, we consider two possible methods- each requiring different quantum resources- that may be employed for this purpose. These are the so-called compute-uncompute approach, which requires a deeper circuit but no additional qubits to perform overlap measurements, and the Swap test method, which utilises a second qubit register to compute the wave-function overlaps but has the advantage of not increasing the circuit depth [30]. Thus, our work also provides an empirical investigation of the merits of each approach for overlap computations.
## 2 Methods
### The Adaptive Derivative-Assembled Pseudo-Trotter Variational Quantum Eigensolver
The adaptive derivative-assembled pseudo-Trotter variational quantum eigensolver (ADAPT-VQE) [12] is a VQE-inspired algorithm designed to approximate the ground state wave-function and ground state energy of a given Hamiltonian. Unlike many other classical variational quantum eigensolvers such as the various flavours of trotterised unitary coupled cluster [5, 6, 7, 8, 9, 10] however, ADAPT-VQE does not specify a fixed ansatz for the sought-after ground state at the beginning of the algorithm. Instead, ADAPT-VQE functions by first fixing a set of admissible Hermitian generators (the so-called operator pool). The ansatz wave-function is then grown iteratively by parametrically exponentiating a carefully selected Hermitian generator, appending this exponentiated generator to the previous ansatz wave-function, and then variationally tuning the new ansatz wave-function. Since the selection procedure is tailored to the specific Hamiltonian system under consideration (see below), one usually hopes to obtain a more compact ansatz than the one generated by non-adaptive VQEs while still retaining the practical advantages of the VQE for near-term quantum hardware.
The general workflow of the ADAPT-VQE algorithm is as follows. Given the qubit representation of an input Hamiltonian \(H\), a pool of admissible Hermitian generators \(\mathbb{P}\), and a stopping criterion:
1. Initialise the qubits to an initial state \(|\Psi^{(0)}\rangle\).
2. At the \(m^{\text{th}}\) iteration, identify the Hermitian generator \(B_{m}\in\mathbb{P}\) such that the action of the parameterised unitary operator \(\exp(-\imath\theta_{m}B_{m}),\ \theta_{m}\in[-\pi,\pi)\) on the current ansatz \(|\Psi^{(m-1)}\rangle\) is _likely_ to produce a new wave-function with the largest drop in energy. This identification is done by computing the gradient, at \(\theta=0\), of the expectation value of the Hamiltonian, i.e., \[B_{m}=\operatorname*{arg\,max}_{B\in\mathbb{P}}\left|\frac{\partial}{\partial\theta}\langle\Psi^{(m-1)}|\exp(\imath\theta B)H\exp(-\imath\theta B)|\Psi^{(m-1)}\rangle\Big{|}_{\theta=0}\right|.\tag{3}\] Note that the criterion (3) is simply a heuristic, and there is no guarantee that the Hermitian generator \(B_{m}\) selected through this criterion will indeed lead to the parameterised unitary operator whose action on the current ansatz \(|\Psi^{(m-1)}\rangle\) results in the largest drop in energy. This point will be the subject of further discussion in Section 2.3.
3. Exit the iterative process if the stopping criterion is met (see below for more explanation). Otherwise, append the resulting parametrised unitary operator to the left of the current ansatz wave-function \(\ket{\Psi^{(m-1)}}\), i.e., define \[\ket{\widetilde{\Psi^{(m)}}}:=\exp(-\imath\theta_{m}B_{m})\ket{\Psi^{(m-1)}}=\exp(-\imath\theta_{m}B_{m})\exp(-\imath\theta_{m-1}^{\prime}B_{m-1})\ldots\exp(-\imath\theta_{1}^{\prime}B_{1})\ket{\Psi^{(0)}}.\]
4. Run a classical VQE routine by optimising all parameters \(\theta_{m},\theta_{m-1},\ldots,\theta_{1}\) in the new ansatz wave-function \(\ket{\widetilde{\Psi^{(m)}}}\) so as to minimize the expectation value of the Hamiltonian, i.e., solve the optimisation problem \[\widetilde{\theta}^{\text{opt}}:=(\theta_{1}^{\prime},\ldots,\theta_{m-1}^{\prime},\theta_{m}^{\prime}):=\operatorname*{argmin}_{\theta_{1},\ldots,\theta_{m-1},\theta_{m}}\left\langle\Psi^{(0)}\right|\prod_{k=1}^{k=m}\exp(\imath\theta_{k}B_{k})H\prod_{k=m}^{k=1}\exp(-\imath\theta_{k}B_{k})\Big{|}\Psi^{(0)}\Big{\rangle},\tag{4}\] and define the new ansatz wave-function \(\ket{\Psi^{(m)}}\) using the newly optimized parameters \(\theta_{1}^{\prime},\ldots,\theta_{m}^{\prime}\), i.e., define \[\ket{\Psi^{(m)}}:=\prod_{k=m}^{k=1}\exp(-\imath\theta_{k}^{\prime}B_{k})\ket{\Psi^{(0)}}.\] Let us emphasize that although we also denote the newly optimized parameters at the current \(m^{\text{th}}\) iteration by \(\theta_{1}^{\prime},\ldots,\theta_{m}^{\prime}\), these optimized values are not necessarily the same as those used to define \(\ket{\Psi^{(m-1)}}\) and referenced in Step 3 above.
5. Return to Step 2 with the updated ansatz \(\ket{\Psi^{(m)}}\).
Let us remark here that a common choice of stopping criterion is to impose a pre-defined threshold tolerance \(\epsilon>0\) on the magnitude of the gradients computed in Step 2 above, i.e., exit the ADAPT-VQE algorithm at iteration \(m\) if
\[\max_{B\in\mathbb{P}}\left|\frac{\partial}{\partial\theta}\left\langle\Psi^{(m-1)}\right|\exp(\imath\theta B)H\exp(-\imath\theta B)\ket{\Psi^{(m-1)}}\big{|}_{\theta=0}\right|<\epsilon.\]
An obvious alternative option is to impose a maximal iteration count on the number of ADAPT-VQE steps or a minimal decrease of the expectation value between two iterates
\[\left\langle\Psi^{(0)}\right|\prod_{k=1}^{k=m-1}\exp(\imath\theta_{k}^{\prime}B_{k})H\prod_{k=m-1}^{k=1}\exp(-\imath\theta_{k}^{\prime}B_{k})\Big{|}\Psi^{(0)}\Big{\rangle}-\left\langle\Psi^{(0)}\right|\prod_{k=1}^{k=m}\exp(\imath\theta_{k}^{\prime}B_{k})H\prod_{k=m}^{k=1}\exp(-\imath\theta_{k}^{\prime}B_{k})\Big{|}\Psi^{(0)}\Big{\rangle}<\epsilon.\]
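For concreteness, the workflow above can be emulated classically on a noiseless statevector simulator. The following minimal Python sketch is our own illustration (not the implementation used in this work): the helper `expval`, the dense-matrix representation of `H` and `pool`, and the choice of a BFGS optimiser are all assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def expval(H, psi):
    """Real expectation value <psi|H|psi> for a dense statevector."""
    return np.real(np.vdot(psi, H @ psi))

def adapt_vqe(H, pool, psi0, eps=1e-3, max_iter=50):
    """Sketch of the ADAPT-VQE loop (Steps 1-5), statevector emulation."""
    ops, thetas = [], []

    def ansatz(params):
        psi = psi0
        for B, th in zip(ops, params):
            psi = expm(-1j * th * B) @ psi     # exp(-i theta B)|psi>
        return psi

    for _ in range(max_iter):
        psi = ansatz(thetas)
        # Step 2: criterion (3); the gradient at theta=0 equals <i[B,H]>.
        grads = [abs(expval(1j * (B @ H - H @ B), psi)) for B in pool]
        if max(grads) < eps:                   # gradient-norm stopping rule
            break
        ops.append(pool[int(np.argmax(grads))])
        thetas.append(0.0)
        # Step 4: re-optimise all parameters (the classical VQE routine).
        res = minimize(lambda p: expval(H, ansatz(p)), np.array(thetas))
        thetas = list(res.x)
    return ansatz(thetas), thetas
```

On a real QPU the gradients and energies would of course be estimated from circuit measurements rather than dense linear algebra; the sketch only fixes the control flow of the algorithm.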
Next, let us discuss some commonly used operator pools for ADAPT-VQE.
### Operator Pools for ADAPT-VQE
As one may expect by studying the ADAPT-VQE workflow, the success of the algorithm is strongly impacted by the choice of the Hermitian generator pool \(\mathbb{P}\). As an extreme example, if all generators in the operator pool commute with the Hamiltonian, then the algorithm will terminate at the first iteration thus resulting in no improvement of the initial guess. The goal of this section is to briefly present some popular operator pools. While a great variety of operator pools have been introduced in the literature [8, 31, 32], we will limit ourselves to a 'chemically-inspired' pool which is popular for simulating quantum chemical systems [29], a simplified version of this chemically-inspired pool which (empirically) leads to lower quantum gate counts [27], and finally a so-called _minimal_ operator pool which possesses some useful mathematical properties. Before doing so however, let us first clarify some additional details concerning the ADAPT-VQE algorithm.
ADAPT-VQE was primarily developed for quantum chemistry applications, i.e., for application to molecular systems typically described by a second-quantized molecular Hamiltonian mapped to a qubit representation through the Jordan-Wigner transformation. In this setting, an obvious choice for the initial state in the ADAPT-VQE algorithm is the Hartree-Fock wavefunction. Indeed, in the standard formalism where each qubit is used to represent a specific spin-orbital, we can write \(\ket{0}_{p}\) and \(\ket{1}_{p}\) to denote states corresponding to an _empty_ and _occupied_ spin-orbital \(p\) respectively. With this notation, the reference Hartree-Fock state for a system having \(n\) electrons in \(N\) spin-orbitals can be expressed as \(\ket{\Psi_{\text{HF}}}:=\ket{1_{0}\ldots 1_{n-1}0_{n}\ldots 0_{N-1}}\), which is straightforward to represent on quantum circuits. Note that this is an example of a case where we have access to an initial state that is both simple to represent on quantum architecture and also yields a wave-function having a reasonable
overlap with the sought-after ground state wave-function. Of course, for arbitrary Hamiltonians, such an efficient choice for the initial state might not be possible, in which case we might have to rely on random initialisations, for instance.
**The Qubit Excitation-based Pool [29]**
The first commonly used operator pool is the Qubit excitation-based (QEB) pool which is inspired by the popular coupled cluster method from computational quantum chemistry. The QEB pool consists of so-called single-qubit and double-qubit excitation operators which take the form
\[A_{pq}=\frac{1}{2}\left(X_{q}Y_{p}-Y_{q}X_{p}\right). \tag{5}\]
and
\[A_{pqrs}=\frac{1}{8}\left(X_{r}Y_{s}X_{p}X_{q}+Y_{r}X_{s}X_{p}X_{q}+Y_{r}Y_{s}Y _{p}X_{q}+Y_{r}Y_{s}X_{p}Y_{q}-X_{r}X_{s}Y_{p}X_{q}-X_{r}X_{s}X_{p}Y_{q}-Y_{r}X _{s}Y_{p}Y_{q}-X_{r}Y_{s}Y_{p}Y_{q}\right). \tag{6}\]
Here \(p,q,r,\) and \(s\) denote qubit indices and \(X_{p}\) and \(Y_{p}\) are the usual one-qubit Pauli gates acting on qubit \(p\). Thus, the single-qubit generator \(A_{pq}\) acts between the single qubits \(p\) and \(q\) while the double-qubit generator \(A_{pqrs}\) acts between the qubit pairs \((p,q)\) and \((r,s)\).
Given an \(N\)-qubit system, it is easy to see that the QEB operator pool a priori has \(\mathcal{O}(N^{4})\) elements. In practice however, not all possible excitation operators of the form (5)-(6) are included in the operator pool. Instead, the QEB operator pool is limited to those single-qubit and double-qubit excitation operators which preserve important symmetries in the system such as spin or the number of particles. Additionally, it is readily checked that the parametric exponentiation of a QEB operator is easy to calculate, and the resulting unitary operators have well-known CNOT-optimised circuits (see, e.g., Figure 10 in the appendix).
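To make the algebra concrete, the following small NumPy sketch (our illustration; the helper `pauli_string` and the convention that qubit 0 is the leftmost tensor factor are assumptions) builds the single-qubit excitation generator of Equation (5) and verifies numerically the relation \(A^{3}=A\) that is exploited in Section 2.3.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def pauli_string(n, gates):
    """Tensor product placing gates[q] on qubit q, identities elsewhere."""
    out = np.eye(1, dtype=complex)
    for q in range(n):
        out = np.kron(out, gates.get(q, I2))
    return out

def qeb_single(n, p, q):
    """Single-qubit excitation generator A_pq of Equation (5)."""
    return 0.5 * (pauli_string(n, {q: X, p: Y}) - pauli_string(n, {q: Y, p: X}))

A = qeb_single(4, 0, 1)
assert np.allclose(A @ A @ A, A)  # the relation B^3 = B used in Section 2.3
```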
**The Qubit Hardware-efficient Pool [27]**
While the Qubit excitation-based pool provides excellent performance in numerical simulations on quantum simulators, the practical implementation of QEB-based ansatz wave-functions on near-term quantum hardware remains challenging. This is primarily due to the fact that the number of CNOT gates required to construct the associated QEB circuits, while significantly smaller than the CNOT counts for classical "fixed-ansatz" approaches, is still far too high. The so-called Qubit hardware-efficient pool [7] addresses this issue by considering instead a pool consisting of _modified_ single and double excitation operators of the form
\[\widetilde{X}_{pq}=\frac{1}{2}X_{q}Y_{p} \tag{7}\]
and
\[\begin{split}\widetilde{X}^{(1)}_{pqrs}=&\frac{1}{8 }X_{r}Y_{s}X_{p}X_{q},\quad\widetilde{X}^{(2)}_{pqrs}=\frac{1}{8}Y_{r}X_{s}X_{ p}X_{q},\quad\widetilde{X}^{(3)}_{pqrs}=\frac{1}{8}Y_{r}Y_{s}Y_{p}X_{q},\quad \widetilde{X}^{(4)}_{pqrs}=\frac{1}{8}Y_{r}Y_{s}X_{p}Y_{q},\\ \widetilde{X}^{(5)}_{pqrs}=&\frac{1}{8}X_{r}X_{s}Y_{ p}X_{q},\quad\widetilde{X}^{(6)}_{pqrs}=\frac{1}{8}X_{r}X_{s}X_{p}Y_{q},\quad \widetilde{X}^{(7)}_{pqrs}=\frac{1}{8}Y_{r}X_{s}Y_{p}Y_{q},\quad\widetilde{X}^ {(8)}_{pqrs}=\frac{1}{8}X_{r}Y_{s}Y_{p}Y_{q},\end{split} \tag{8}\]
where \(p,q,r,s\) again denote qubit indices and \(X_{p}\) and \(Y_{p}\) are one-qubit Pauli gates acting on qubit \(p\). Note that not all _modified_ double-excitation operators of the form (8) are added to the operator pool since, for instance, the operators \(\widetilde{X}^{(1)}_{pqrs}=X_{r}Y_{s}X_{p}X_{q}\) and \(\widetilde{X}^{(7)}_{pqrs}=Y_{r}X_{s}Y_{p}Y_{q}\) are related by a global rotation. Let us also emphasize here that the qubit hardware-efficient pool is _not particle conserving_ and thus violates an important symmetry of the system.
Numerical experiments involving the qubit hardware-efficient pool demonstrate that the resulting ansatz wave-functions can indeed be represented on quantum architectures using a lower CNOT count than that required for the QEB-pool based ansatze. The tradeoff, however, is that the qubit hardware-efficient pool is several times larger than the QEB pool, and this produces a significant computational overhead when evaluating operator gradients for use in the operator selection criterion. The next operator pool attempts to resolve this issue.
**Minimal Hardware-efficient Pool [27]**
It is well-known that when the Hamiltonian under study is real-valued, the ground state eigenfunction can be expressed as a _real_ linear combination of _real_ basis vectors. For such settings therefore, it is possible to show the existence of
a so-called minimal operator pool that allows the transformation of any real-valued wave-function (in particular, the Hartree-Fock reference state) to another real-valued wave-function (in particular, the sought-after ground state eigenfunction). More precisely, given an \(N\)-qubit system, we may define the operator pool
\[\mathbb{P}=\{Y_{p}\}_{p=0}^{N-2}\,\cup\,\{Z_{p}Y_{p+1}\}_{p=0}^{N-2},\]
where \(Y_{p}\) and \(Z_{p}\) are the usual one-qubit Pauli gates acting on qubit \(p\). Then, for any two real-valued wave-functions \(|\Phi\rangle\) and \(|\Psi\rangle\), there exist \(\theta_{1},\ldots,\theta_{M}\in[-\pi,\pi)\) and Hermitian generators \(B_{1},\ldots,B_{M}\in\mathbb{P}\) such that
\[|\Phi\rangle=\prod_{k=M}^{1}\exp(-\imath\theta_{k}B_{k})\,|\Psi\rangle\,.\]
In other words, for a precise choice of parameters and generators in the pool \(\mathbb{P}\), we can construct the sought-after ground-state eigenfunction by applying the parametrised, exponentiated generators to the Hartree-Fock reference state.
The key advantage of this so-called minimal hardware-efficient pool \(\mathbb{P}\) is that it consists of only \(2N-2\) elements, and the operators in this pool can be parametrically exponentiated using very simple circuits that require a minimal number of CNOT gates.
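As a sketch (ours, assuming a dense-matrix representation and 0-indexed qubits), the \(2N-2\) pool elements can be assembled as follows; each element squares to the identity, which is precisely the property exploited by the analytical landscape functions of Section 2.3.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def op_on(n, gates):
    """Kron product placing gates[q] on qubit q, identities elsewhere."""
    m = np.eye(1, dtype=complex)
    for q in range(n):
        m = np.kron(m, gates.get(q, I2))
    return m

def minimal_pool(n):
    """Minimal hardware-efficient pool {Y_p} U {Z_p Y_{p+1}}, 2n-2 elements."""
    return ([op_on(n, {p: Y}) for p in range(n - 1)]
            + [op_on(n, {p: Z, p + 1: Y}) for p in range(n - 1)])

pool = minimal_pool(4)
assert len(pool) == 2 * 4 - 2
assert all(np.allclose(B @ B, np.eye(2 ** 4)) for B in pool)  # B^2 = I
```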
### Resource Saving Enhancements and the GGA-VQE Algorithm
As described in detail in Section 2.1, a core step in the ADAPT-VQE algorithm is the selection of an optimal Hermitian generator \(B\) from the operator pool whose addition to the current ansatz can produce a new ansatz wave-function with the largest drop in energy. Current implementations of ADAPT-VQE make this choice through a heuristic criterion based on evaluating certain gradients of the expectation value of the Hamiltonian. More precisely, for a given pool of operators \(\mathbb{P}\), at the \(m^{\text{th}}\) iteration, one computes (c.f., Equation (3))
\[B_{m}=\operatorname*{argmax}_{B\in\mathbb{P}}\left|\frac{\partial}{\partial \theta}\,\langle\Psi^{(m-1)}|\exp(\imath\theta B)H\exp(-\imath\theta B)|\Psi^{ (m-1)}\rangle\,\right|_{\theta=0}\right|, \tag{9}\]
where \(|\Psi^{(m-1)}\rangle\) denotes the ansatz wave-function at the \(m-1\) iteration. Of course, Equation (9) is still a heuristic, and it is conceivable that there exists another Hermitian generator \(\widetilde{B}_{m}\) such that
\[\min_{\theta\in[-\pi,\pi)}\,\langle\Psi^{(m-1)}|\exp(\imath\theta\widetilde{B }_{m})H\exp(-\imath\theta\widetilde{B}_{m})|\Psi^{(m-1)}\rangle<\min_{\theta \in[-\pi,\pi)}\,\langle\Psi^{(m-1)}|\exp(\imath\theta B_{m})H\exp(-\imath \theta B_{m})|\Psi^{(m-1)}\rangle\]
while
\[\left|\frac{\partial}{\partial\theta}\,\langle\Psi^{(m-1)}|\exp(\imath\theta \widetilde{B}_{m})H\exp(-\imath\theta\widetilde{B}_{m})|\Psi^{(m-1)}\rangle \,\right|_{\theta=0}\bigg{|}<\left|\frac{\partial}{\partial\theta}\,\langle \Psi^{(m-1)}|\exp(\imath\theta B_{m})H\exp(-\imath\theta B_{m})|\Psi^{(m-1)} \rangle\,\right|_{\theta=0}\bigg{|}.\]
A representative example of this situation is displayed in Figure 1.
Figure 1: Illustration of a situation where the ADAPT-VQE selection criterion does not pick the optimal operator leading to the largest energy drop.
Given that at each iteration of ADAPT-VQE, we face the task of optimising a multi-dimensional objective function which is very noisy due to the quality of the current quantum hardware, the selection of the wrong operator to append to the current ansatz wave-function can be a costly mistake [33]. In this section, we introduce an energy sorting algorithm that allows the exact selection of the _locally optimal_ Hermitian generator from any of the three operator pools that we have introduced in Section 2.2. In other words, we show that it is possible, using a few measurements on the quantum device, to exactly solve the optimisation problem
\[B_{m}=\operatorname*{argmin}_{B\in\mathbb{P}}\min_{\theta\in[-\pi,\pi)}\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle):=\operatorname*{argmin}_{B\in\mathbb{P}}\min_{\theta\in[-\pi,\pi)}\left\langle\Psi^{(m-1)}\right|\exp(\imath\theta B)H\exp(-\imath\theta B)|\Psi^{(m-1)}\rangle\,, \tag{10}\]
where, for notational convenience, we have introduced the objective function \(\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)\), which, in some literature, is also referred to as a _landscape_ function.
The main idea of our energy sorting algorithm is to take advantage of a result that is largely known in the VQE literature on 'fixed-ansatz' methods, namely that the expectation value of a Hermitian operator with respect to an ansatz wave-function \(|\Psi(\theta)\rangle\) parametrised by a single quantum gate can be expressed in terms of elementary trigonometric functions of \(\theta\)[19, 20]. While this result is typically expressed for ansatz wave-function parametrised by rotation gates, the core arguments can be extended to wave-functions parametrised by Hermitian generators belonging to any of the three operator pools that we have introduced in Section 2.2. Indeed, taking advantage of the precise functional form of the Hermitian generators introduced in Section 2.2, a simple calculation shows that
1. For any generator \(B\) in the Qubit-Excitation-Based (QEB) operator pool, it holds that \(B^{3}=B\) (see Appendix 5.2).
2. For any generator \(B\) in the Qubit hardware-efficient and minimal hardware-efficient pools, it holds that \(B^{2}=I\), with \(I\) denoting the identity matrix.
The above simple relations now imply (see also Appendix 5.2) that for any generator \(B\) in the Qubit-Excitation-Based (QEB) operator pool and any \(\theta\in[-\pi,\pi)\), it holds that
\[\exp(-\imath\theta B)=I+(\cos(\theta)-1)B^{2}-\imath\sin(\theta)B, \tag{11}\]
and for any generator \(B\) in the Qubit hardware-efficient and minimal hardware-efficient pools and any \(\theta\in[-\pi,\pi)\), it holds that
\[\exp(-\imath\theta B)=\cos(\theta)I-\imath\sin(\theta)B. \tag{12}\]
Using Equations (11) and (12), it is easy to establish that for any wave-function \(|\phi\rangle\) and any \(\theta\in[-\pi,\pi)\) the objective function \(\mathcal{L}(B,\theta,|\phi\rangle)\) has the analytical form
\[\mathcal{L}(B,\theta,|\phi\rangle)=\begin{cases}\langle\phi|H|\phi\rangle+ \big{(}\cos(\theta)-1\big{)}(\langle\phi|\{H,B^{2}\}|\phi\rangle-2\langle\phi |BHB|\phi\rangle)&\text{if }B^{3}=B,\\ +\big{(}1-\cos(\theta)\big{)}^{2}\big{(}\langle\phi|B^{2}HB^{2}|\phi\rangle- \langle\phi|BHB|\phi\rangle\big{)}\\ +\sin(\theta)(\cos(\theta)-1)\,\langle\phi|\imath B[H,B]B|\phi\rangle\\ +\sin(\theta)\,\langle\phi|\imath[B,H]|\phi\rangle\\ \\ \cos^{2}(\theta)\,\langle\phi|H|\phi\rangle+\frac{\sin(2\theta)}{2}\,\langle \phi|\imath[B,H]|\phi\rangle&\text{if }B^{2}=I.\\ +\sin^{2}(\theta)\,\langle\phi|BHB|\phi\rangle\,,\end{cases} \tag{13}\]
where \(\{\cdot,\cdot\}\) and \([\cdot,\cdot]\) denote the anti-commutator and commutator respectively. A demonstration of this result can be found in Appendix 5.2.
Equation (13) implies that for any Hermitian generator \(B\) from our operator pools and any arbitrary wave-function \(|\phi\rangle\), the objective function \(\mathcal{L}(B,\theta,|\phi\rangle)\) can be expressed in terms of elementary trigonometric functions of \(\theta\). An important consequence of this expression is that, if we now evaluate the objective function \(\mathcal{L}(B,\theta,|\phi\rangle)\) at certain well-chosen angles, we can obtain a linear system of equations for the unknown operator expectation values. More precisely,
**For the QEB pool (\(B^{3}=B\)):**
Since \(\langle\phi|H|\phi\rangle\) can be measured directly on the quantum device, four measurements of \(\mathcal{L}(B,\theta,|\phi\rangle)\) at well-chosen \(\theta=\theta^{(1)},\theta^{(2)},\theta^{(3)},\theta^{(4)}\) yield a linear system for the four unknowns \(\langle\phi|\{H,B^{2}\}|\phi\rangle-2\,\langle\phi|BHB|\phi\rangle\), \(\langle\phi|B^{2}HB^{2}|\phi\rangle-\langle\phi|BHB|\phi\rangle\), \(\langle\phi|\imath B[H,B]B|\phi\rangle\) and \(\langle\phi|\imath[B,H]|\phi\rangle\).
**For the hardware efficient pools (\(B^{2}=I\)):**
Since \(\langle\phi|H|\phi\rangle\) can be measured directly on the quantum device, two measurements of \(\mathcal{L}(B,\theta,|\phi\rangle)\) at well-chosen \(\theta=\theta^{(1)},\theta^{(2)}\) yield a linear system for the two unknowns \(\langle\phi|\imath[B,H]|\phi\rangle\) and \(\langle\phi|BHB|\phi\rangle\).
In other words, using a minimal number of measurements and by solving a very small linear system, we can compute all terms involving \(B,H\), and \(|\phi\rangle\) in the expression (13) for the objective function \(\mathcal{L}(B,\theta,|\phi\rangle)\). Since the dependency of this function on \(\theta\) is through elementary trigonometric functions, we can thus express the objective function \(\mathcal{L}(B,\theta,|\phi\rangle)\) analytically for any generator \(B\), any angle \(\theta\), and any wave-function \(|\phi\rangle\). This allows us to solve the optimisation problem (10) up to arbitrary precision for any Hermitian generator \(B\) from our operator pool, and thereby obtain the _locally optimal_ generator that should be added to the current ansatz wave-function \(|\Psi^{(m-1)}\rangle\). Note that if we assume an operator pool of size \(M\), a total of \(4M+1\) measurements will be required to screen all Hermitian generators from the qubit-excitation based pool while a total of \(2M+1\) measurements will be required to screen all Hermitian generators from the two hardware efficient pools.
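To illustrate the hardware-efficient case \(B^{2}=I\), note that the second branch of (13) can be rewritten as \(\mathcal{L}(B,\theta,|\phi\rangle)=a+b\cos(2\theta)+c\sin(2\theta)\). The following sketch is our own illustration: the three sampling angles \(0,\pm\pi/4\) are one convenient choice of well-chosen angles, not prescribed by the text.

```python
import numpy as np

def fit_landscape(e0, e_plus, e_minus):
    """Reconstruct L(theta) = a + b*cos(2 theta) + c*sin(2 theta), valid for
    any pool generator with B^2 = I (second case of Equation (13)), from
      e0      = L(0)       : the current energy <H>, shared by every B,
      e_plus  = L(+pi/4),
      e_minus = L(-pi/4).
    Here a = (<H> + <BHB>)/2, b = (<H> - <BHB>)/2, c = <i[B,H]>/2.
    Returns the analytic minimiser theta_opt and the minimum e_opt."""
    a = 0.5 * (e_plus + e_minus)   # constant term, since L(+-pi/4) = a +- c
    c = 0.5 * (e_plus - e_minus)   # sin(2 theta) coefficient
    b = e0 - a                     # cos(2 theta) coefficient, since L(0) = a + b
    theta_opt = 0.5 * (np.arctan2(c, b) + np.pi)   # where cos(2t - phi) = -1
    e_opt = a - np.hypot(b, c)
    return theta_opt, e_opt
```

Screening a pool of size \(M\) then amounts to calling `fit_landscape` once per operator (two extra circuits each, plus the shared measurement of \(\langle\phi|H|\phi\rangle\)) and keeping the smallest `e_opt`, in line with the \(2M+1\) measurement count stated above.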
Two important remarks are now in order. First, as mentioned previously, analytical expressions such as (13) for landscape functions \(\mathcal{L}(B,\theta,|\phi\rangle)\) of the form (10) are present in the existing literature on 'fixed-ansatz' methods [19, 20, 21, 22, 23, 24], albeit not- to the best of our knowledge- for the specific operator pools that we have considered in the current study. However, these landscape functions are almost exclusively used to perform an operator-by-operator, analytical optimisation of a structurally fixed, parameterised quantum circuit. Indeed, the only numerical method we are aware of that extends the applicability of analytical landscape functions beyond simple iterative optimisation of a fixed ansatz circuit is the Rotoselect algorithm [20], and Rotoselect still assumes a fixed structure for the parametrised quantum circuit, in which only the choice of \(X,Y,\) or \(Z\) rotation gate (and not the 'location') can be varied according to the associated landscape function.
Second, let us point out that the landscape function (10) assumes the addition of a single operator to the current ansatz wave-function at each iteration of the adaptive algorithm. If the pool of potential unitary operators is commutative, then the specific order in which operators are chosen is unimportant, and it is therefore sufficient to consider a sequential application of the representation (13) of \(\mathcal{L}(B,\theta,|\phi\rangle)\) to determine, at each iteration, the optimal operator to append to the current ansatz. On the other hand, if the Hermitian generators belonging to the operator pool do not commute (which is often the case), then the ordering of the operators is important, and it is potentially useful to consider landscape functions based on the simultaneous addition of \(d>1\) operators to the current ansatz wave-function at each iteration. We now briefly discuss this generalisation.
**Multi-dimensional analytical landscape functions**
Let us consider an adaptive procedure in which \(d\) unitary operators, constructed using \(d\) Hermitian generators from a given operator pool \(\mathbb{P}\) are to be appended to the current ansatz wave-function \(|\Psi^{(m-1)}\rangle\) at iteration \(m\). We are now interested in determining the ordered \(d\)-tuple of Hermitian generators \((B_{m_{d}},\ldots,B_{m_{1}})\) such that
\[(B_{m_{d}},\ldots,B_{m_{1}})=\underset{B_{d},\ldots,B_{1}\in\mathbb{P}}{\operatorname{argmin}}\;\underset{\begin{subarray}{c}\theta_{d},\ldots,\theta_{1}\\ \in[-\pi,\pi)\end{subarray}}{\operatorname{min}}\;\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),|\Psi^{(m-1)}\rangle\Big{)}:=\underset{B_{d},\ldots,B_{1}\in\mathbb{P}}{\operatorname{argmin}}\;\underset{\begin{subarray}{c}\theta_{d},\ldots,\theta_{1}\\ \in[-\pi,\pi)\end{subarray}}{\operatorname{min}}\;\langle\Psi^{(m-1)}|\exp(\imath\theta_{1}B_{1})\ldots\exp(\imath\theta_{d}B_{d})H\exp(-\imath\theta_{d}B_{d})\ldots\exp(-\imath\theta_{1}B_{1})|\Psi^{(m-1)}\rangle\,.\tag{14}\]
In order to obtain an analytical representation of the \(d\)-dimensional objective function \(\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),|\Psi^{(m-1)} \rangle\Big{)}\), the fundamental idea is to appeal once again to Equations (11) and (12) and expand each exponential in \(\theta_{j},\ j\in\{1,\ldots,d\}\) as a sum of a sine and cosine function of \(\theta_{j}\). This expansion allows us to conclude that the \(d\)-dimensional objective function
\(\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),|\Psi^{(m-1)}\rangle\Big{)}\) can be written in the general form
\[\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),| \Psi^{(m-1)})\Big{)}=\] \[\left\{\begin{aligned} &\left\langle\Psi^{(m-1)}\Big{|} \prod\limits_{j=1}^{j=d}\Big{(}I+(\cos(\theta_{j})-1)B_{j}^{2}+\imath\sin( \theta_{j})B_{j}\Big{)}\,H\prod\limits_{j=d}^{j=1}\Big{(}I+(\cos(\theta_{j})- 1)B_{j}^{2}-\imath\sin(\theta_{j})B_{j}\Big{)}\,\Big{|}\Psi^{(m-1)}\right\rangle &\text{if }B^{3}=B,\\ &\left\langle\Psi^{(m-1)}\Big{|}\prod\limits_{j=1}^{j=d}(\cos( \theta_{j})I+\imath\sin(\theta_{j})B_{j})\,H\prod\limits_{j=d}^{j=1}(\cos( \theta_{j})I-\imath\sin(\theta_{j})B_{j})\,\Big{|}\Psi^{(m-1)}\right\rangle &\text{if }B^{2}=I.\end{aligned}\right.\]
In other words, the \(d\)-dimensional objective function \(\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),|\Psi^{(m-1)} )\Big{)}\) can be written as a polynomial of the variables \(\big{\{}1,\cos(\theta_{j}),\sin(\theta_{j})\colon j\in\{1,\ldots,d\}\big{\}}\) with the exact structure of the polynomial depending on the properties of the operator pool \(\mathbb{P}\). As a representative example, the landscape function for hardware efficient pools with \(d=2\), after some simplifications, is of the form:
\[\mathcal{L}\big{(}(B_{2},\theta_{2}),(B_{1},\theta_{1}),|\phi \big{)}= \langle\phi|H|\phi\rangle \tag{15}\] \[+ \frac{\cos(2\theta_{1})}{2}\Big{(}\langle\phi|H-B_{1}HB_{1}|\phi \rangle+\frac{\cos(2\theta_{2})}{2}\langle\phi|H-B_{1}HB_{1}-B_{2}HB_{2}+B_{2} B_{1}HB_{1}B_{2}|\phi\rangle\] \[+ \frac{\sin(2\theta_{2})}{2}\langle\phi|i[B_{2},H-B_{1}HB_{1}]| \phi\rangle\Big{)}\] \[+ \frac{\sin(2\theta_{1})}{2}\Big{(}\langle\phi|i[B_{1},H]|\phi \rangle+\frac{\cos(2\theta_{2})}{2}\langle\phi|i[B_{1},H]-iB_{2}[B_{1},H]B_{2 }|\phi\rangle\] \[-\frac{\sin(2\theta_{2})}{2}\langle\phi|[B_{2},[B_{1},H]]|\phi \rangle\Big{)}\]
Consequently, a total of 7 measurements on a quantum device are required to deduce an analytical expression for the two-dimensional landscape function \(\mathcal{L}\big{(}(B_{2},\theta_{2}),(B_{1},\theta_{1}),|\phi\rangle\big{)}\) for any Hermitian generators \(B_{1},B_{2}\) belonging to either of the two hardware efficient operator pools. Since the selection of the best pair of operators to append to the current ansatz wave-function requires comparing all pairs of Hermitian generators, we conclude that for an operator pool of size \(M\), at most \(6M^{2}+1\) measurements are required to determine the locally optimal pair of unitary operators that should be appended to the current ansatz wave-function at each iteration in order to achieve the largest drop in expectation value of the underlying Hermitian operator.
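Once the seven expectation values entering Equation (15) have been measured, the classical post-processing is cheap. A possible sketch (ours; the coefficient ordering and the grid resolution are arbitrary choices for the example) simply evaluates the analytic form on a fine grid and reads off the minimiser without any further quantum measurements.

```python
import numpy as np

def min_2d_landscape(coeffs, grid=721):
    """Minimise the two-operator landscape of Equation (15) over (t1, t2).

    `coeffs` = (c0, ..., c6): the seven measured expectation values, in the
    order they appear in (15): <H>; the three terms multiplying cos(2 t1);
    the three terms multiplying sin(2 t1)."""
    c0, c1, c2, c3, c4, c5, c6 = coeffs
    t = np.linspace(-np.pi, np.pi, grid)
    t1, t2 = np.meshgrid(t, t, indexing="ij")
    L = (c0
         + 0.5 * np.cos(2 * t1) * (c1 + 0.5 * np.cos(2 * t2) * c2
                                       + 0.5 * np.sin(2 * t2) * c3)
         + 0.5 * np.sin(2 * t1) * (c4 + 0.5 * np.cos(2 * t2) * c5
                                       - 0.5 * np.sin(2 * t2) * c6))
    k = np.unravel_index(np.argmin(L), L.shape)
    return t[k[0]], t[k[1]], L[k]
```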
In the case of a general \(d\)-dimensional objective function \(\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),|\Psi^{(m-1)}\rangle\Big{)}\), similar arguments yield that
* for the Qubit-Excitation-Based (QEB) operator pool of size \(M\), we require \(\mathcal{O}(5^{d}M^{d})\) measurements to determine the locally optimal \(d\)-tuple of unitary operators that should be appended to the current ansatz wave-function at each iteration in order to achieve the largest drop in expectation value of the underlying Hermitian operator;
* for the Qubit hardware-efficient and minimal hardware-efficient pools of size \(M\), we require \(\mathcal{O}(3^{d}M^{d})\) measurements to determine the locally optimal \(d\)-tuple of unitary operators that should be appended to the current ansatz wave-function at each iteration in order to achieve the largest drop in expectation value of the underlying Hermitian operator.
While this procedure can become computationally intractable for moderately large \(d\), various simplifications are possible that can lead to more tractable gradient-free adaptive algorithms involving multi-operator selection and optimisation, and it is likely that such methods offer an advantage when formulating a greedy gradient-free adaptive VQE for a complex Hamiltonian using a non-commutative operator pool. As representative examples, given an ansatz wave-function \(|\Psi^{(m-1)}\rangle\), at iteration \(m\):
1. We may use the energy sorting algorithm based on _one-dimensional landscape functions_ to classify, in descending order of importance, the best \(d\) Hermitian generators \((B_{d},\ldots,B_{1})\) whose addition to the current ansatz wave-function can result in the largest drops in the expectation value of the underlying Hermitian operator, i.e., \[\min_{\theta_{d}\in[-\pi,\pi)}\langle\Psi^{(m-1)}|\exp(\imath\theta_{d}B_{d})H\exp(-\imath\theta_{d}B_{d})|\Psi^{(m-1)}\rangle\geq\ldots\geq\min_{\theta_{1}\in[-\pi,\pi)}\langle\Psi^{(m-1)}|\exp(\imath\theta_{1}B_{1})H\exp(-\imath\theta_{1}B_{1})|\Psi^{(m-1)}\rangle.\]
We can then switch to the analytical expression of the \(d\)-dimensional landscape function \(\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),|\Psi^{(m-1)}\rangle\Big{)}\) in order to compute the optimal parameters \((\theta_{d}^{*},\ldots,\theta_{1}^{*})\) such that \[(\theta_{d}^{*},\ldots,\theta_{1}^{*})=\operatorname{argmin}_{\begin{subarray}{c}\theta_{d},\ldots,\theta_{1}\\ \in[-\pi,\pi)\end{subarray}}\mathcal{L}\Big{(}(B_{d},\theta_{d}),\ldots,(B_{1},\theta_{1}),|\Psi^{(m-1)}\rangle\Big{)}=\operatorname{argmin}_{\begin{subarray}{c}\theta_{d},\ldots,\theta_{1}\\ \in[-\pi,\pi)\end{subarray}}\langle\Psi^{(m-1)}|\exp(\imath\theta_{1}B_{1})\ldots\exp(\imath\theta_{d}B_{d})H\exp(-\imath\theta_{d}B_{d})\ldots\exp(-\imath\theta_{1}B_{1})|\Psi^{(m-1)}\rangle\,.\] In other words, we may use one-dimensional landscape functions to _identify_ the best \(d\) Hermitian generators to add to the current ansatz wave-function and we may employ the \(d\)-dimensional landscape functions to perform the _analytical optimisation_. The number of quantum measurements required by this procedure scales as \(3^{d}M\) (resp. \(5^{d}M\)) for a hardware efficient (resp. QEB) operator pool of size \(M\).
2. Taking the newly obtained ansatz wave-function \(|\Psi^{(m)}\rangle\) after iteration \(m\) as structurally fixed, we may perform Rotoselect-style [20] backwards and forwards optimisation sweeps over all parameterised unitary operators \(\exp(-\imath\theta_{d}^{*}B_{d}),\ldots,\exp(-\imath\theta_{1}^{*}B_{1})\). In particular, thanks to the analytical expression (15) for the \(d\)-dimensional landscape function, each iteration in these optimisation sweeps can involve \(d\) parametrised unitary operators simultaneously. The number of quantum measurements required by a single sweep utilising \(d\)-dimensional landscape functions scales as \(3^{d}M\) (resp. \(5^{d}M\)) for a hardware efficient (resp. QEB) operator pool of size \(M\).
Such considerations will be the subject of numerical investigations in forthcoming works. For the purpose of this study, we will consider the computationally cheap case of one-dimensional landscape functions since this approach suffices for the relatively simple Hamiltonians that we consider in the sequel. In this one-dimensional setting, the energy sorting algorithm that we have introduced is the basis of the following greedy gradient-free adaptive VQE, which we dub GGA-VQE.
**The Greedy Gradient-free Adaptive Variational Quantum Eigensolver (GGA-VQE)**
Given the qubit representation of an input Hamiltonian \(H\), a pool of admissible Hermitian generators \(\mathbb{P}\), and a stopping criterion:
1. Initialise the qubits to an initial state \(|\Psi^{(0)}\rangle\).
2. At the \(m^{\text{th}}\) iteration, use the energy sorting algorithm detailed above to identify the Hermitian generator \(B_{m}\in\mathbb{P}\) that solves the optimisation problem (10), i.e., \[B_{m}=\operatorname*{argmin}_{B\in\mathbb{P}}\,\min_{\theta\in[-\pi,\pi)}\langle\Psi^{(m-1)}|\exp(\imath\theta B)H\exp(-\imath\theta B)|\Psi^{(m-1)}\rangle\,.\tag{16}\]
3. Exit the iterative process if the stopping criterion is met. Otherwise, append the resulting parametrised unitary operator to the left of the current ansatz wave-function \(|\Psi^{(m-1)}\rangle\), i.e., define the new ansatz wave-function \[|\Psi^{(m)}\rangle:=\exp(-\imath\theta_{m}^{\prime}B_{m})\,|\Psi^{(m-1)}\rangle=\exp(-\imath\theta_{m}^{\prime}B_{m})\exp(-\imath\theta_{m-1}^{\prime}B_{m-1})\ldots\exp(-\imath\theta_{1}^{\prime}B_{1})\,|\Psi^{(0)}\rangle\,,\] where the angle \(\theta_{m}^{\prime}\) is obtained in the process of solving the optimisation problem (16).
4. Return to Step 2 with the updated ansatz \(|\Psi^{(m)}\rangle\).
It is important to emphasise that, in contrast to the classical ADAPT-VQE procedure, the GGA-VQE algorithm described above _does not involve_ a global optimisation of all parameters in the current ansatz at each iteration. Instead, at each iteration, we use the energy sorting algorithm to identify the locally optimal Hermitian generator \(B\) as well as the optimal angle \(\theta\) which should be used to construct the new ansatz wave-function- a process which involves the optimisation of one-dimensional, elementary trigonometric functions. In particular, in contrast to the classical ADAPT-VQE, we avoid entirely the need to optimise a multi-dimensional and extremely noisy cost function involving the system Hamiltonian. The resulting huge savings in quantum resources suggest that the GGA-VQE is particularly suited for implementation on near-term quantum devices. Let us also note that the \(d\)-dimensional landscape functions introduced in Section 2.3 above can be used to develop natural generalisations of the GGA-VQE algorithm, which we dub GGA-VQE(d), that are likely to be particularly suited for the ground state preparation of strongly correlated systems.
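Putting the pieces together, the GGA-VQE loop reduces to the following schematic (our sketch, not code from this work): `measure_landscape(B)` stands for the analytical landscape reconstruction described above, e.g. the `fit_landscape` helper sketched earlier, and is an assumed callback returning the pair `(theta_opt, e_opt)` for a given generator.

```python
def gga_vqe(measure_landscape, pool, max_ops=30, tol=1e-6):
    """Sketch of the GGA-VQE loop: no multi-parameter optimisation ever runs."""
    circuit, energy = [], None
    for _ in range(max_ops):
        # Energy sorting: screen every pool operator analytically and keep
        # the one whose landscape attains the lowest minimum (Equation (10)).
        best = min((measure_landscape(B) + (B,) for B in pool),
                   key=lambda t: t[1])
        theta_opt, e_opt, B_opt = best
        if energy is not None and energy - e_opt < tol:   # stopping criterion
            break
        circuit.append((B_opt, theta_opt))  # "frozen core": never re-optimised
        energy = e_opt
    return circuit, energy
```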
It can now readily be seen that the main computational bottleneck in the GGA-VQE algorithm is the energy sorting procedure which requires \(\mathcal{O}(M)\) measurements of the system Hamiltonian for an operator pool of size \(M\). It is therefore
natural to ask if the number of measurements required to perform the energy sorting can be further reduced, at least for certain types of Hamiltonians. In the next section, we answer this question affirmatively by showing that for a certain class of Ising Hamiltonians, it is possible to perform energy sorting using a number of measurements _independent of both the size of the operator pool and the number of qubits_.
### GGA-VQE for a Transverse-field Ising Model
While the ADAPT-VQE algorithm is predominantly applied to compute the ground state energies of molecular systems, there is, in principle, no restriction in applying the method to obtain ground state energies for more general Hamiltonians [32]. The goal of this section is to describe in detail the application of the GGA-VQE algorithm that we have introduced in Section 2.3 to an open boundary transverse-field Ising Hamiltonian [34]. Ising Hamiltonians of this type are of great importance in condensed-matter physics since they are among the simplest models capable of representing different phases of matter, depending on the value of various systems parameters [35]. As the Ising Hamiltonian is well-known theoretically, it also presents a good first test for computational experiments prior to tackling more complex molecular Hamiltonians.
Given an \(N\)-qubit register, we consider the transverse-field Ising Hamiltonian given by
\[H=h\sum_{p=0}^{N-1}X_{p}+J\sum_{p=0}^{N-2}Z_{p}Z_{p+1}, \tag{17}\]
where \(X_{p}\) and \(Z_{p}\) denote the usual \(X\) and \(Z\) Pauli matrices acting on qubit \(p\), and \(h,J\) are real system parameters. The physical constant \(h\) models the intensity of a magnetic field directed along the \(x\)-axis, whereas the constant \(J\) models the strength of the nearest-neighbour interactions. If \(J<0\), neighbouring spins tend to align, and the opposite is true if \(J>0\). Note that in this model, each qubit represents a spin-state.
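For reference, the Hamiltonian (17) is straightforward to build and diagonalise exactly for small \(N\), which is useful for benchmarking adaptive runs. The following sparse NumPy/SciPy sketch is our own illustration; the helper names and parameter values are assumptions.

```python
import numpy as np
from scipy.sparse import kron
from scipy.sparse.linalg import eigsh

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def on(n, p, g, q=None, g2=None):
    """Sparse operator acting as g on qubit p (and as g2 on qubit q if given)."""
    mats = [g if k == p else (g2 if k == q else np.eye(2)) for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = kron(out, m, format="csr")
    return out

def ising(n, h, J):
    """Open-boundary transverse-field Ising Hamiltonian of Equation (17)."""
    return (sum(h * on(n, p, X) for p in range(n))
            + sum(J * on(n, p, Z, q=p + 1, g2=Z) for p in range(n - 1)))

# Exact ground-state energy on a small register, e.g. to benchmark GGA-VQE:
E0 = eigsh(ising(10, h=1.0, J=0.5), k=1, which="SA")[0][0]
```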
Since the Ising Hamiltonian is real valued, a natural choice of operator pool is the minimal hardware-efficient pool introduced in Section 2.2, which is given by
\[\mathbb{P}=\{Y_{p}\}_{p=0}^{N-2}\;\cup\;\{Z_{p}Y_{p+1}\}_{p=0}^{N-2}. \tag{18}\]
Let us now recall from Section 2.3 that implementing the GGA-VQE algorithm requires us to solve, at each iteration, a minimisation problem so as to identify the optimal Hermitian generator which should be used to construct the new ansatz wave-function. The objective function associated with this minimisation problem (see Equation (10)) is given by
\[\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)=\langle\Psi^{(m-1)}|\exp(\imath\theta B)H\exp(-\imath\theta B)|\Psi^{(m-1)}\rangle\,,\]
where \(B\in\mathbb{P}\) is any Hermitian generator from our operator pool, the parameter \(\theta\in[-\pi,\pi)\), and \(|\Psi^{(m-1)}\rangle\) denotes the previous ansatz wave-function.
It can now be shown (see the Appendix for a detailed demonstration) that for the Ising Hamiltonian defined through Equation (17) and the minimal hardware-efficient pool \(\mathbb{P}\) given by (18), the objective function \(\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)\) has the following simple structure:
\[\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)=\begin{cases}\Big{\langle}\Psi^{(m- 1)}\Big{|}H\Big{|}\Psi^{(m-1)}\Big{\rangle}&\text{if}\;\;B=Y_{p},\\ +\sin(2\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}hZ_{p}-J(X_{p}Z_{p+1}+Z_{p-1}X_{ p}\delta_{p>0})\Big{|}\Psi^{(m-1)}\Big{\rangle}&\\ -2\sin^{2}(\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}hX_{p}+J(Z_{p}Z_{p+1}+Z_{p-1 }Z_{p}\delta_{p>0})\Big{|}\Psi^{(m-1)}\Big{\rangle}.&\\ \\ \Big{\langle}\Psi^{(m-1)}\Big{|}H\Big{|}\Psi^{(m-1)}\Big{\rangle}&\text{if}\; \;B=Z_{p}Y_{p+1}.\\ +\sin(2\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}h(Z_{p}Z_{p+1}-Y_{p}Y_{p+1}) \Big{|}\Psi^{(m-1)}\Big{\rangle}&\\ -\sin(2\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}J(X_{p+1}+Z_{p}X_{p+1}Z_{p+2} \delta_{p+2<n})\Big{|}\Psi^{(m-1)}\Big{\rangle}&\\ -2\sin^{2}(\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}hX_{p}+hX_{p+1}+JZ_{p}Z_{p+ 1}+JZ_{p+1}Z_{p+2}\delta_{p+2<n}\Big{|}\Psi^{(m-1)}\Big{\rangle}.&\end{cases} \tag{19}\]
A close study of the right-hand side of Equation (19) now indicates that many terms involving the expectation values of the Pauli matrices can be measured directly and simultaneously on the quantum device without the need to run over all possible Hermitian generators in \(\mathbb{P}\).
* The terms containing only tensor products of \(Z\) operators can readily be measured in the computational basis.
* The terms containing only tensor products of \(X\) (resp. \(Y\)) operators can be measured by applying a Hadamard (resp. \(S^{\dagger}\equiv\text{diag}(1,-\imath)\) and a Hadamard) gate on each qubit.
* The remaining terms are of the form \(X_{p}Z_{p+1}\) or \(Z_{p-1}X_{p}Z_{p+1}\). Terms of this form can be measured by applying a Hadamard gate on qubit \(p\). The terms corresponding to \(p\) even commute and can therefore be measured simultaneously. The same holds true for the \(p\) odd terms which can thus also be measured simultaneously.
Consequently, it is possible, at each iteration of the GGA-VQE algorithm, to construct exactly five quantum circuits whose measurements allow us to recover an analytical expression for all objective functions \(\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)\), \(B\in\mathbb{P}\), in terms of elementary trigonometric functions of \(\theta\). We have thus achieved a radical reduction in the number of required measurements from \(4N-3\) (for the minimal hardware-efficient pool of size \(2N-2\)) to _five_.
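To make the resulting gradient-free selection step concrete, here is a minimal sketch, assuming only the generic landscape form \(a+b\sin(2\theta)+c\sin^{2}(\theta)\) that Equation (19) takes for involutory generators: the three coefficients are fitted by least squares from a handful of noisy energy evaluations, after which the optimal angle follows analytically. The coefficient values and noise level below are illustrative.

```python
import numpy as np

def fit_landscape(thetas, energies):
    """Least-squares fit of the involutory-generator landscape
    L(theta) = a + b*sin(2 theta) + c*sin(theta)**2 (the form of Eq. (19))."""
    A = np.stack([np.ones_like(thetas),
                  np.sin(2 * thetas),
                  np.sin(thetas) ** 2], axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, energies, rcond=None)
    return a, b, c

def argmin_landscape(a, b, c):
    """Analytic minimiser: dL/dtheta = 2b cos(2 theta) + c sin(2 theta) = 0,
    so the stationary angles are theta0 + k*pi/2 with tan(2 theta0) = -2b/c."""
    theta0 = 0.5 * np.arctan2(-2.0 * b, c)
    cands = theta0 + np.array([0.0, 0.5 * np.pi, -0.5 * np.pi, np.pi])
    vals = a + b * np.sin(2 * cands) + c * np.sin(cands) ** 2
    k = int(np.argmin(vals))
    return cands[k], vals[k]

# Recover (theta*, L*) from five noisy energy evaluations of a known landscape
rng = np.random.default_rng(0)
a0, b0, c0 = -4.0, 0.3, -0.8                          # illustrative coefficients
thetas = np.linspace(-np.pi, np.pi, 5, endpoint=False)
noisy = a0 + b0 * np.sin(2 * thetas) + c0 * np.sin(thetas) ** 2 \
        + rng.normal(0.0, 0.02, size=thetas.shape)    # mimics shot noise
theta_star, L_star = argmin_landscape(*fit_landscape(thetas, noisy))
print(theta_star, L_star)
```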
We end this section by noting that a simple choice of initial state for the GGA-VQE procedure applied to the Ising Hamiltonian is given by the ground state of the non-interacting Hamiltonian \(\sum_{p=0}^{N-1}X_{p}\), i.e.,
\[|\Psi^{(0)}\rangle=|-\rangle^{\otimes N}.\]
Assuming that the system parameters satisfy \(|h|>|J|\), it is not unreasonable to expect the ground state of the true _interacting_ Hamiltonian to be a perturbation of \(|\Psi^{(0)}\rangle\).
### Overlap-GGA-VQE for Molecular Systems
Unfortunately, the application of the GGA-VQE algorithm to molecular systems of chemical interest seems to be out of reach on current quantum hardware. Indeed, in contrast to the Ising model introduced in Section 2.4, for which the expectation value of the Hamiltonian can be computed through measurements of just two quantum circuits, computing the expectation value of an \(N\)-spin-orbital molecular Hamiltonian typically requires \(\mathcal{O}(N^{4})\) measurements, a process which introduces an overwhelming amount of hardware and measurement noise. Fortunately, we have at our disposal an alternative and considerably simpler adaptive algorithm that can be used to explore the limits of the current quantum hardware for the simulation of molecular systems. This is the so-called Overlap-ADAPT-VQE algorithm introduced in [28].
Overlap-ADAPT-VQE is a hybrid quantum/classical algorithm in the spirit of ADAPT-VQE which aims to construct compact approximations of target wave-functions through an iterative procedure. In contrast to ADAPT, the Overlap-ADAPT procedure does not require the measurement of the expectation value of the Hamiltonian. Instead, at each iteration of Overlap-ADAPT, we measure the _overlap_ between the current ansatz wave-function and the target wave-function to be approximated, a measurement that is much simpler to achieve. To be more precise, the general workflow of the Overlap-ADAPT-VQE algorithm is as follows (a toy numerical sketch of the full loop is given after the listing).
Given a target wave-function \(|\Psi_{\text{ref}}\rangle\), a pool of admissible Hermitian generators \(\mathbb{P}\), and a maximal operator count \(p\):
1. Initialise the qubits in an initial state \(|\Psi^{(0)}\rangle\).
2. At the \(m^{\text{th}}\) iteration, identify the Hermitian generator \(B_{m}\in\mathbb{P}\) such that the action of the parameterised unitary operator \(\exp(\imath\theta_{m}B_{m}),\ \theta_{m}\in[-\pi,\pi)\) on the current ansatz \(|\Psi^{(m-1)}\rangle\) is _likely_ to produce a new wave-function having the largest overlap with the target wave-function. This identification is done by maximising a specific gradient involving the current ansatz wave-function at \(\theta_{m}=0\), i.e., \[B_{m}=\operatorname*{arg\,max}_{B\in\mathbb{P}}\left|\frac{\partial}{\partial \theta}\left|\left\langle\exp(\imath\theta B)\Psi^{(m-1)}\left|\Psi_{\text{ ref}}\right\rangle\right|^{2}\right|_{\theta=0}\right|.\] (20) Note that, as in the classical ADAPT-VQE procedure, the criterion (20) is a heuristic, and there is no guarantee that the Hermitian generator \(B_{m}\) selected through this criterion will indeed lead to the parametrised unitary operator whose action on the current ansatz \(|\Psi^{(m-1)}\rangle\) results in the greatest increase in overlap.
3. Append the resulting parametrised unitary operator to the left of the current ansatz wave-function \(|\Psi^{(m-1)}\rangle\), i.e., define \[\widetilde{|\Psi^{(m)}\rangle}:= \exp(-\imath\theta_{m}B_{m})|\Psi^{(m-1)}\rangle\] \[= \exp(-\imath\theta_{m}B_{m})\exp(-\imath\theta_{m-1}^{\prime}B_{ m-1})\ldots\exp(-\imath\theta_{1}^{\prime}B_{1})\left|\Psi^{(0)}\right\rangle.\]
4. Run a classical VQE routine by optimising all parameters \(\theta_{m},\theta_{m-1},\ldots,\theta_{1}\) in the new ansatz wave-function \(\left|\Psi^{(m)}\right\rangle\) so as to maximise its overlap with the target wave-function, i.e., solve the optimisation problem \[\vec{\theta}^{\,\text{opt}}:=(\theta_{1}^{\prime},\ldots,\theta_{m-1}^{\prime},\theta_{m}^{\prime}):=\operatorname*{argmax}_{\theta_{1},\ldots,\theta_{m-1},\theta_{m}}\left|\left\langle\prod_{k=m}^{k=1}\exp(-\imath\theta_{k}B_{k})\Psi^{(0)}\middle|\Psi_{\text{ref}}\right\rangle\right|^{2},\tag{21}\] and define the new ansatz wave-function \(\left|\Psi^{(m)}\right\rangle\) using the newly optimised parameters \(\theta_{1}^{\prime},\ldots,\theta_{m}^{\prime}\), i.e., define \[\left|\Psi^{(m)}\right\rangle:=\prod_{k=m}^{k=1}\exp(-\imath\theta_{k}^{\prime}B_{k})\left|\Psi^{(0)}\right\rangle.\] Let us emphasise that although we also denote the newly optimised parameters at the current \(m^{\text{th}}\) iteration by \(\theta_{1}^{\prime},\ldots,\theta_{m}^{\prime}\), these optimised values are not necessarily the same as those used to define \(\left|\Psi^{(m-1)}\right\rangle\) and referenced in Step 3 above.
5. If the total number of operators in the updated ansatz is equal to \(p\), exit the iterative process. Otherwise go to Step 2 with the updated ansatz \(\left|\Psi^{(m)}\right\rangle\).
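As referenced above, the following is a toy statevector sketch of Steps 1-5 for a 3-qubit problem; the operator pool, the random target state, and the use of SciPy's BFGS optimiser are illustrative stand-ins for the actual pools and VQE routines of the paper, and exact statevector algebra replaces quantum measurements.

```python
import numpy as np
from scipy.optimize import minimize

# For involutory Pauli-string generators B: exp(-i theta B) = cos(theta) I - i sin(theta) B.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def kron_all(ops):
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

N = 3
pool = [kron_all([Y if q == p else I2 for q in range(N)]) for p in range(N)]
pool += [kron_all([Z if q == p else (Y if q == p + 1 else I2) for q in range(N)])
         for p in range(N - 1)]

def ansatz(thetas, gens, psi0):
    psi = psi0
    for th, B in zip(thetas, gens):
        psi = np.cos(th) * psi - 1j * np.sin(th) * (B @ psi)
    return psi

rng = np.random.default_rng(1)
psi_ref = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
psi_ref /= np.linalg.norm(psi_ref)                     # random target state
psi0 = np.zeros(2**N, dtype=complex)
psi0[0] = 1.0                                          # Step 1: initial state

gens, thetas = [], np.array([])
for it in range(4):                                    # maximal operator count p = 4
    psi = ansatz(thetas, gens, psi0)
    # Step 2: the overlap-gradient criterion of Eq. (20), evaluated exactly
    grads = [abs(2.0 * np.real(np.vdot(psi_ref, -1j * (B @ psi))
                               * np.conj(np.vdot(psi_ref, psi)))) for B in pool]
    gens.append(pool[int(np.argmax(grads))])
    # Steps 3-4: re-optimise every angle to maximise the overlap
    objective = lambda t: -abs(np.vdot(psi_ref, ansatz(t, gens, psi0))) ** 2
    thetas = minimize(objective, np.append(thetas, 0.0), method="BFGS").x
    print(f"iteration {it + 1}: fidelity = {-objective(thetas):.4f}")
```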
It is not difficult to see that the Overlap-ADAPT-VQE procedure can also be viewed as finding the _maximizer_ of the Hamiltonian \(H=\left|\Psi_{\text{ref}}\right\rangle\!\left\langle\Psi_{\text{ref}}\right|\). Thus, the formalism developed in Sections 2.1-2.3 can readily be adapted to fit the framework of Overlap-ADAPT-VQE, and in particular, we can define an Overlap-GGA-VQE algorithm. In order to take advantage of the energy sorting algorithm in this setting, we additionally describe how to compute the expectation value of this type of Hamiltonian, i.e., how to compute the overlap between two arbitrary states.
**The Compute-Uncompute Method**
One method to compute the overlap between two states represented on \(N\)-qubit quantum registers is to use the so-called compute-uncompute method. Indeed, assume we have knowledge of two quantum circuits \(U_{\Psi}\) and \(U_{\Phi}\) such that \(U_{\Psi}\left|0\right\rangle=\left|\Psi\right\rangle\) and \(U_{\Phi}\left|0\right\rangle=\left|\Phi\right\rangle\), where \(\left|0\right\rangle\) denotes the initial (usually Hartree-Fock) state. Then, the overlap \(\left|\left\langle\Phi\middle|\Psi\right\rangle\right|^{2}\) can be computed as the expectation value of the projector on the zero state \(\left|0\right\rangle\left\langle 0\right|=\left(\frac{I+Z}{2}\right)^{\otimes N}\) with respect to the state \(U_{\Phi}^{\dagger}\left|\Psi\right\rangle\). Indeed, we have \[\left\langle\Psi\right|U_{\Phi}\left|0\right\rangle\!\left\langle 0\right|U_{\Phi}^{\dagger}\left|\Psi\right\rangle=\left|\left\langle 0\right|U_{\Phi}^{\dagger}\left|\Psi\right\rangle\right|^{2}=\left|\left\langle\Phi\middle|\Psi\right\rangle\right|^{2}.\]
The compute-uncompute method has the advantage of not requiring any additional qubits beyond those required to represent the circuits \(U_{\Psi}\) and \(U_{\Phi}\). It does, however, require combining the individual circuits \(U_{\Phi}\) and \(U_{\Psi}\) into a single quantum circuit which therefore has twice the depth of the initial circuits.
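A minimal statevector illustration of the identity above follows; the random unitaries stand in for the actual ansatz-preparation circuits and are assumptions for illustration only.

```python
import numpy as np

def overlap_compute_uncompute(U_psi, U_phi):
    """|<Phi|Psi>|^2 as the probability of the all-zeros outcome measured on
    U_phi^dagger U_psi |0>, i.e. the expectation of the projector |0><0|."""
    zero = np.zeros(U_psi.shape[0], dtype=complex)
    zero[0] = 1.0
    state = U_phi.conj().T @ (U_psi @ zero)        # U_phi^dagger |Psi>
    return abs(state[0]) ** 2

def random_unitary(d, rng):
    """Haar-like random unitary from a QR decomposition (illustration only)."""
    Q, R = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(0)
U_psi, U_phi = random_unitary(4, rng), random_unitary(4, rng)
psi, phi = U_psi[:, 0], U_phi[:, 0]                # states prepared from |00>
print(overlap_compute_uncompute(U_psi, U_phi))     # equals |<phi|psi>|^2 ...
print(abs(np.vdot(phi, psi)) ** 2)                 # ... computed directly
```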
**The Hadamard SWAP-Test**
The overlap \(\left|\left\langle\Phi\middle|\Psi\right\rangle\right|^{2}\) may also be computed through the so-called SWAP test method. The essential idea of this method is to construct a circuit containing an ancillary qubit such that the probability \(p(0)\) of measuring \(0\) on the ancillary qubit is related to the overlap through the relation
\[p(0)=\frac{1+\left|\left\langle\Phi\middle|\Psi\right\rangle\right|^{2}}{2}.\]
The SWAP test circuit has the advantage of having the same circuit depth as that of the individual circuits \(U_{\Psi}\) and \(U_{\Phi}\) representing the states \(\left|\Psi\right\rangle\) and \(\left|\Phi\right\rangle\) respectively. On the other hand, this efficiency in circuit depth comes at the cost of doubling the number of qubits and requiring \(N\) controlled-SWAP gates.
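As a sketch, the relation above can be inverted to estimate the overlap from a finite number of ancilla measurements; here the ancilla statistics are simulated directly from \(p(0)\) rather than from an explicit circuit, and the shot count mirrors the 2500 shots used in Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def swap_test_p0(psi, phi):
    """Ancilla statistics of the SWAP test: p(0) = (1 + |<Phi|Psi>|^2) / 2.
    Computed directly on statevectors, not from an explicit circuit."""
    return 0.5 * (1.0 + abs(np.vdot(phi, psi)) ** 2)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi /= np.linalg.norm(phi)

shots = rng.random(2500) < swap_test_p0(psi, phi)   # 2500 simulated ancilla shots
overlap_est = 2.0 * shots.mean() - 1.0              # invert the relation above
print(overlap_est, abs(np.vdot(phi, psi)) ** 2)     # estimate vs exact
```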
## 3 Results
The algorithmic procedures in this research have been performed using an in-house code developed within the Amazon Braket SDK. All quantum computations were performed using Amazon Braket and have been executed on an IonQ Aria 25-qubit trapped-ion quantum computer which incorporates built-in error mitigation techniques; each observable is evaluated using 2500 shots. Classical simulations were conducted using our internal Hyperion multi-GPU-accelerated quantum simulator [36], with all simulator computations being run on a single NVIDIA DGX A100 node.
Before proceeding to the actual numerical results, let us briefly describe the specific outcomes that we wish to explore using QPU implementations of these adaptive algorithms. It is crucial to emphasise that the primary objective of such adaptive procedures is to yield a wave-function ansatz that accurately represents the ground state of the physical system under study. Our goal in executing such adaptive algorithms on the QPU therefore is to obtain an ordered set of operators (together with corresponding optimal parameters) whose application to the initial state yields a state that exhibits a high fidelity to the true ground state of the physical system under study. To achieve this state preparation, the GGA-VQE algorithm minimises the variational energy of the ansatz wave-function, while the Overlap-GGA-VQE algorithm maximises the overlap (fidelity) of the ansatz with an accurate target state.
The first aim of our QPU implementations of these algorithms therefore is to determine whether the measurements performed on the QPU during the operator selection step enable us to obtain, at each iteration, an optimal operator whose addition to the current ansatz wave-function leads to either the largest decrease in the variational energy (for GGA-VQE) or the largest increase in the fidelity with the target state (for Overlap-GGA-VQE). An obvious strategy to make this determination is to retrieve the ansatz wave-function yielded by the QPU-implemented GGA-VQE or Overlap-GGA-VQE methods, represent this ansatz wave-function using classical methods on an HPC simulator, and measure the sought-after observables. This approach allows an unbiased evaluation of the quality of the ansatz generated by the GGA-VQE procedure on the QPU, and we refer to this approach as 'hybrid' observable evaluation in the sequel. The natural follow-up to this first approach is then to evaluate, where possible, the energy or fidelity of the generated ansatz wave-function directly on the quantum computer, since this measurement will be a strong indicator of the error induced by noise in observable measurements. In the remainder of this section, we will frequently refer to the _fidelity_ of two quantum states \(|\psi\rangle\) and \(|\phi\rangle\), which we define as the overlap squared of these two states, i.e., \(F(|\psi\rangle,|\phi\rangle)=|\langle\psi|\phi\rangle|^{2}\).
### The GGA-VQE algorithm applied to the Ising Model
For our first set of numerical experiments, we apply the GGA-VQE algorithm described in Section 2.3 to the transverse-field Ising model described in Section 2.4. We set the system parameters of the Ising Hamiltonian to \(h=0.5\), \(J=0.2\), which ensures that the two-body interactions in this Ising Hamiltonian play an important role.
Figure 3 illustrates the convergence of the hybrid energy evaluations of the GGA-VQE ansatz wave-function with respect to the number of algorithm iterations. We remind the reader that these hybrid energy evaluations are obtained by first running the GGA-VQE algorithm on the IonQ Aria QPU, retrieving the resulting ansatz wave-function and re-implementing it on the Hyperion HPC simulator, and then evaluating the variational energy on the HPC simulator. For reference, we also plot the corresponding energy curve obtained by executing the GGA-VQE ansatz directly on the HPC Hyperion simulator using \(10^{6}\) samples per measured circuit. In addition, as an indication of the measurement and hardware noise, we also plot the GGA-VQE energies obtained by direct measurement on the QPU.
Figure 4: A convergence plot of the fidelity of the GGA-VQE ansatz wave-function produced by the QPU and re-implemented in the Hyperion HPC simulator (hybrid evaluation approach) with the exact ground state of this Ising model obtained using a diagonalisation procedure on the Hyperion simulator. The figure on the right is a zoomed-in version of the figure on the left to better appreciate the fidelity of the GGA-VQE ansatz wave-function.
Figure 3: Energy convergence of the GGA-VQE algorithm with respect to the number of iterations. The blue reference curve denotes the energy of a classically simulated ansatz. The green and orange curves denote the hybrid and QPU energy evaluations of the GGA-VQE ansatz wave-function respectively. Note that the hybrid evaluation is carried out by retrieving the GGA-VQE ansatz wave-function generated by the QPU, re-implementing it on the Hyperion HPC simulator, and then evaluating the variational energy.
Figure 4 clearly indicates that the QPU-implemented GGA-VQE procedure successfully provides an ansatz wave-function that closely matches the ground state. Moreover, the QPU implementation and the HPC simulator implementation of the GGA-VQE algorithm seem highly consistent _despite_ significant noise in the quantum evaluation of observables, as is noticeable from the QPU energy evaluation curve in Figure 3. Indeed, the greedy, gradient-free operator selection procedure that we have introduced in this study, which relies on a function extrapolation using five noisy evaluations on the QPU, is able to build an ansatz with an energy error below \(2.50\times 10^{-2}\) eV and a fidelity exceeding 98% with the exact ground state (see Figure 4).
To better illustrate the outstanding robustness of the GGA-VQE procedure with respect to QPU noise, we depict in Figure 5 the expected maximal energy drop of each Hermitian generator from the chosen minimal operator pool throughout the iterative procedure. We observe consistent maximal energy drops of approximately \(1.5\times 10^{-2}\) eV for the first 24 iterations, followed by a sharp decrease in the attainable energy drops from iteration 25 onwards. This is consistent with the energy curve displayed in Figure 3, which decreases steadily for the first 24 iterations and then reaches a plateau.
For further confirmation, we examine the energy landscapes associated with certain Hermitian generators from the operator pool, extrapolated using five noisy measurements on the QPU. We compare these noisy QPU-based landscapes with the reference landscapes obtained from the Hyperion HPC simulator. Our results, displayed in Figure 6, indicate a nearly perfect match for an operator that enables an energy drop (\(Z_{0}Y_{1}\)), as well as for an operator that does not improve the ansatz (\(Y_{0}\)) at the first GGA-VQE iteration. This finding explains the remarkable resilience to QPU noise of operator selection in the GGA-VQE procedure, and suggests that the QPU-implemented algorithm can consistently pick the optimal operator and associated parameter, resulting in a gradual reduction of the variational energy of the ansatz wave-function and thus convergence towards the ground state.
Figure 5: Expected energy drop of each Hermitian generator from the operator pool during the GGA-VQE iterative procedure on the QPU. The minimal pool operators, numbered from 1 to 48, are listed in the same order as defined in Equation (18). All energy differences are expressed in eV.
### Overlap-GGA-VQE for a Stretched HF molecule
For our next set of numerical experiments, we consider the application of the Overlap-GGA-VQE algorithm to the approximation of the ground-state eigenfunction of the HF molecule at a bond distance of 2.5 Å. We consider an active space of 8 electrons in 10 spin orbitals in the minimal STO-3G basis set, thus freezing the lowest \(1s\) orbital as doubly occupied. The Hartree-Fock state can therefore be represented as \(|\phi\rangle=|1111111100\rangle\), which requires 10 qubits. The target wave-function for the Overlap-GGA-VQE process is obtained using a QEB-ADAPT-VQE procedure carried out on the Hyperion HPC simulator until convergence at the chemical accuracy level. The resulting QEB-ADAPT-VQE target wave-function, which has an error of about 1.4 mHa, is constructed using four generators from the Qubit Excitation-based pool (see Section 2.2), leading to a total CNOT circuit count of 32. The purpose of applying the Overlap-GGA-VQE algorithm is to obtain a high-fidelity approximation of this target wave-function using fewer CNOT gates.
We employ a subset \(\mathbb{P}\) of the qubit hardware-efficient pool introduced in Section 2.2. More precisely, we define an index set \(P\) for pairs of qubits given by
\[P=\{(4,0),(8,0),(5,1),(9,1),(5,0),(7,0),(7,1)\}.\]
Corresponding to this index set \(P\), we define the sub-pool \(\mathbb{P}\) of qubit hardware-efficient operators as
\[\mathbb{P}=\{X_{\mathbf{p}}=\frac{1}{2}X_{p_{2}}X_{p_{1}}:\mathbf{p}=(p_{1},p_ {2})\in P\}.\]
In other words, \(\mathbb{P}\) consists of a collection of _single excitation_ qubit hardware-efficient operators (recall Equation (7)). Equipped with the operator pool \(\mathbb{P}\), we apply the Overlap-GGA-VQE algorithm to the target QEB-ADAPT-VQE wave-function. It is important to note that, for the current HF system, the initial Hartree-Fock state exhibits no overlap with the QEB-ADAPT-VQE target.
Figure 6: One-dimensional energy landscapes of the operators \(Y_{0}\) and \(Z_{0}Y_{1}\) when applied to the initial state. The orange curve is extrapolated from five noisy circuit evaluations following the method described in Section 2.4. The black curve is the exact energy landscape obtained from an HPC simulation. The energy differences are expressed in eV, and the angles are given in radians.
Figure 7a shows the maximal overlap increases that can be achieved by the application of different operators from the pool on the initial Hartree-Fock state. These overlap values are obtained through extrapolation of the one-dimensional objective function defined in Equation (13) for each operator from the pool \(\mathbb{P}\). The extrapolations require, for each operator, two noisy overlap evaluations on the QPU. It is readily seen, however, that despite the possible hardware and measurement noise on the QPU, the QPU extrapolations are nearly identical to the extrapolations obtained using an HPC simulator implementation of the Overlap-GGA-VQE algorithm displayed in Figure 7b. This numerical result supports the noise robustness of our operator selection approach. Indeed, we see that in both the QPU implementation and the simulator implementation, the Overlap-GGA-VQE algorithm correctly identifies the operators acting on the qubit pair (5, 0) as the ones that should lead to the largest increase in the overlap, and thus the largest increase in fidelity with the target. For this simple example, convergence is reached after a single iteration as, at the second step, no generator from the operator pool can further improve the overlap with the target state.
The final fidelities of the QPU-implemented and simulator-implemented Overlap-GGA-VQE ansatz wave-functions are plotted in Figure 8. We remind the reader that the term 'hybrid fidelity evaluation' refers to the classically recomputed fidelity of the QPU-generated ansatz (see the discussion at the start of this section). This hybrid fidelity evaluation precisely matches the value of the fidelity obtained through the pure HPC simulator implementation of Overlap-GGA-VQE, regardless of the choice of overlap measurement technique.

Figure 7: Overlap difference induced by each operator from the operator pool \(\mathbb{P}\) when applied to the ansatz in the QPU execution (Fig. 7a), and in a classical simulation (Fig. 7b). The steps correspond to the iterations in the Overlap-GGA-VQE procedure. At step 2, none of the operators in the pool can significantly improve the overlap with the target state, indicating that convergence has been achieved.

Figure 8: Ansatz fidelity of the Overlap-GGA-VQE ansatz with the target state. The blue bars correspond to the fidelity of a classically simulated ansatz with the target state. The green and orange bars denote the hybrid and QPU fidelity evaluations respectively. Note that the hybrid evaluation is carried out by retrieving the Overlap-GGA-VQE ansatz wave-function generated by the QPU, re-implementing it on the Hyperion HPC simulator, and then evaluating the fidelity with the target state.
For completeness, we have also plotted the fidelities obtained from the QPU implementation of the Overlap-GGA-VQE procedure through direct measurement on the QPU. In this case, we see that the Swap test exhibits a higher noise level than the compute-uncompute method, which indicates that, at least for this hardware, a deeper circuit involving only 10 qubits is less affected by device noise than a shorter circuit that requires gates and connectivity across 20 qubits. Note, however, that in both cases, the hybrid evaluation of the Overlap-GGA-VQE ansatz is highly accurate, as indicated by the green bars in Figure 8. Indeed, the QPU implementation of the Overlap-GGA-VQE procedure manages to provide an ansatz wave-function that achieves a fidelity of over 99% with a chemically accurate target wave-function while using only 2 CNOT gates.
## 4 Discussion
In this study, we have developed new resource-saving strategies to execute, for the first time, adaptive variational quantum algorithms on a state-of-the-art, 25-qubit trapped-ion, error-mitigated quantum computer. Our purpose in doing so was to explore the suitability of such algorithms for state preparation, which can, for instance, be followed by a more accurate Quantum Phase Estimation (QPE) procedure [37, 5, 38] to evaluate the ground-state energy of our Hamiltonian. Since the probability of success for QPE is directly proportional to the fidelity between the approximate eigenstate and the true eigenstate, accurate, adaptive hybrid algorithms can play an important role in the pre-processing step for quantum phase estimation. Independent of such a pre-processing application, let us also point out that certain interesting studies have demonstrated potential applications of adaptive algorithms to dynamic simulation problems [39, 40].
As a physics application, we have used the novel greedy gradient-free adaptive variational quantum eigensolver (GGA-VQE) introduced in this paper to successfully compute the ground state of an open boundary 25-qubit transverse-field Ising Hamiltonian, achieving a ground state fidelity of over 98%. The GGA-VQE algorithm that we have developed for the Ising model is also highly scalable since each iteration of this method requires a fixed number of circuit measurements, regardless of the number of qubits or the size of the operator pool. Ising models have already been studied using various methods in quantum regimes claimed to surpass the memory capacity of classical computers [41, 42], and these studies, in combination with ours, demonstrate promising results on the potential of useful quantum computation before the era of fault-tolerance.
As an additional application targeted at chemistry, we have combined our greedy approach with the Overlap-ADAPT-VQE algorithm introduced in [28] to compute compact approximations of a target wave-function through an iterative, adaptive overlap maximisation procedure. We have applied this novel Overlap-GGA-VQE algorithm to a stretched 10-qubit hydrogen fluoride (HF) molecular system and shown that the algorithm is able to generate a highly compact approximation of a target approximate ground-state that achieves a fidelity of over 99%. The approximate ground-state for this numerical experiment was generated through a QEB-ADAPT-VQE [29] procedure on a classical simulator, while the wave-function overlaps, required by the Overlap-GGA-VQE procedure, were measured on the QPU using two different methods: the compute-uncompute method and the Swap test.
For both the Ising model and the stretched HF molecule, we have demonstrated that, despite the high level of device noise in observable quantum measurements, our hardware-friendly GGA-VQE procedure can select a sequence of unitary operators and corresponding optimal angles that can be used to construct an accurate approximation of the ground state. Indeed, our greedy operator selection relies on an extrapolation of the associated objective function using a minimal number of noisy quantum measurements, and this extrapolation technique seems very resilient to device noise, as evidenced by the close alignment between the QPU-extrapolated objective function and the HPC simulator-extrapolated objective function. Moreover, because we utilise extrapolated objective functions for the VQE portion of the algorithm, our greedy, gradient-free protocols do not require any multi-dimensional noisy optimisation at all, thus bypassing the main bottleneck of QPU implementations of adaptive VQEs [13].
Let us emphasise that the energy sorting procedure for optimal operator selection that we have developed in this study is easily extendable to multi-operator selection and optimisation at the cost of a higher number of measurements on the quantum device, and some preliminary ideas in this direction have been presented in Section 2.3. While the ground state preparation of the relatively simple Hamiltonians considered in this study could be effectively carried out by appending one locally optimal operator at a time to the current ansatz wave-function, it is likely that the multi-operator generalisations of our energy sorting procedure (see Section 2.3) will be effective in the ground state preparation of strongly correlated systems such as stretched linear chains of hydrogen atoms. Similarly, the extensions of the Ising model GGA-VQE algorithm that we have developed (see Section 5.4) can easily be applied to other spin-chain systems such as the Hubbard model. Further research in both these directions will be the subject of future work.
Finally, let us conclude by remarking that advancements in quantum hardware, encompassing both the increasing number of qubits and their enhanced quality, coupled with practical hybrid executions of adaptive variational quantum algorithms on QPUs such as the ones carried out in this study, pave the way for simulating increasingly accurate ansätze for quantum chemical and many-body physics applications.
## Data availability
Data generated during the study is available upon request from the authors (E-mail: [email protected]).
## Code availability
The code used during the study is available upon request from the authors (E-mail: [email protected]).
## Acknowledgements
This work has been funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant No 810367), project EMC2 (J.-P. P. and Y.M.). Support from the PEPR EPiQ and HQI programs is acknowledged. We thank Amazon Braket (G. Tourpe) for partial funding of the computations on the Aria IonQ machine and the whole team for their help in the setup of our computations on Amazon Braket's SDK (Software Development Kit).
## 5 Appendix
### Quantum circuits for qubit-excitation operators
For the sake of completeness, we present a few key quantum circuits used in the hardware experiments carried out for this study. The circuit for a single-qubit excitation is given in Figure 9 whereas the circuit for a double-qubit excitation is displayed in Figure 10. Note that both qubit excitations correspond to the qubit-excitation based (QEB) pool introduced in Section 2.2, and, as explained in [43], the circuits displayed here are the most hardware-efficient implementations of these operators.
Remark: when displaying quantum circuits, \(H\) denotes the Hadamard gate and not a physical Hamiltonian.
### Periodicity of QEB operator pool and involutory property of hardware-efficient pools
Recall the definition of the qubit excitation-based (QEB) pool given in Section 2.2 and let \(A_{pqrs}\) denote a double-qubit excitation generator between the pairs of qubits \((p,q)\) and \((r,s)\) as defined through Equation (6), i.e.,
\[A_{pqrs}=\frac{1}{8}\left(X_{r}Y_{s}X_{p}X_{q}+Y_{r}X_{s}X_{p}X_{q}+Y_{r}Y_{s}Y_{p}X_{q}+Y_{r}Y_{s}X_{p}Y_{q}-X_{r}X_{s}Y_{p}X_{q}-X_{r}X_{s}X_{p}Y_{q}-Y_{r}X_{s}Y_{p}Y_{q}-X_{r}Y_{s}Y_{p}Y_{q}\right).\]
Figure 10: A quantum circuit performing a generic double-qubit evolution [43].
Figure 9: A quantum circuit performing a generic single-qubit evolution [43].
A direct calculation shows that \(A_{pqrs}\) can be written in the equivalent form
\[A_{pqrs}=i(Q_{p}^{\dagger}Q_{q}^{\dagger}Q_{r}Q_{s}-Q_{r}^{\dagger}Q_{s}^{\dagger} Q_{p}Q_{q}),\]
where for any qubit index \(a\), we define \(Q_{a}=\frac{1}{2}(X_{a}+iY_{a})\).
Using this representation, we can easily show that \(A_{pqrs}|1_{p}1_{q}0_{r}0_{s}\rangle=\imath|0_{p}0_{q}1_{r}1_{s}\rangle\) and \(A_{pqrs}|0_{p}0_{q}1_{r}1_{s}\rangle=-\imath|1_{p}1_{q}0_{r}0_{s}\rangle\), while the action of \(A_{pqrs}\) on all other computational basis states is zero. Consequently, the subspace spanned by \(e_{1}\equiv|1_{p}1_{q}0_{r}0_{s}\rangle\) and \(e_{2}\equiv|0_{p}0_{q}1_{r}1_{s}\rangle\) is an invariant subspace of \(A_{pqrs}\), and in the basis \(\{e_{1},e_{2}\}\) of this invariant subspace, \(A_{pqrs}\) has the representation
\[\begin{pmatrix}0&-\imath\\ \imath&0\end{pmatrix},\]
which is the well-known \(Y\) Pauli matrix. We thus conclude that \(A_{pqrs}\) has eigenvalues \(0,\pm 1\) and satisfies \(A_{pqrs}^{3}=A_{pqrs}\).
A similar demonstration can be carried out for single-qubit generators from the QEB pool and generators from the qubit hardware efficient and minimal hardware efficient pools, which shows that these generators are involutory, i.e., they satisfy \(B^{2}=I\). For the sake of brevity, we do not provide a detailed argument.
The above observation motivates further investigation and leads to the following result.
**Theorem 1**.: _Let \(H\) denote an \(N\)-qubit Hamiltonian and let \(\mathbb{P}\) denote any of the operator pools introduced in Section 2.2. Then define, for any \(N\)-qubit wave-function \(|\phi\rangle\), any Hermitian generator \(B\in\mathbb{P}\) and any \(\theta\in[-\pi,\pi)\) the landscape function_
\[\mathcal{L}(B,\theta,|\phi\rangle)=\left\langle\phi\left|\exp(\imath\theta B)H\exp(-\imath\theta B)\right|\phi\right\rangle.\]
_Then it holds that_
\[\mathcal{L}(B,\theta,|\phi\rangle)=\begin{cases}\left\langle\phi|H|\phi\right\rangle+\big{(}\cos(\theta)-1\big{)}\big{(}\left\langle\phi|\{H,B^{2}\}|\phi\right\rangle-2\left\langle\phi|BHB|\phi\right\rangle\big{)}&\\ \quad+\big{(}1-\cos(\theta)\big{)}^{2}\big{(}\left\langle\phi|B^{2}HB^{2}|\phi\right\rangle-\left\langle\phi|BHB|\phi\right\rangle\big{)}&\text{if }B^{3}=B,\\ \quad+\sin(\theta)\big{(}\cos(\theta)-1\big{)}\left\langle\phi|\imath B[H,B]B|\phi\right\rangle&\\ \quad+\sin(\theta)\left\langle\phi|\imath[B,H]|\phi\right\rangle&\\[2ex] \cos^{2}(\theta)\left\langle\phi|H|\phi\right\rangle+\frac{\sin(2\theta)}{2}\left\langle\phi|\imath[B,H]|\phi\right\rangle+\sin^{2}(\theta)\left\langle\phi|BHB|\phi\right\rangle&\text{if }B^{2}=I,\end{cases}\]
_where \(\{\cdot,\cdot\}\) and \([\cdot,\cdot]\) denote the anti-commutator and commutator respectively._
Proof.: We consider first the case \(B^{3}=B\). For such a Hermitian generator, we can use the Taylor series expansion of the exponential to deduce that
\[\begin{split}\exp(-\imath\theta B)&=\sum_{k=0}^{\infty}\frac{(-\imath\theta B)^{2k}}{(2k)!}+\sum_{k=0}^{\infty}\frac{(-\imath\theta B)^{2k+1}}{(2k+1)!}\\ &=I+(\cos(\theta)-1)B^{2}-\imath\sin(\theta)B.\end{split} \tag{22}\]
Plugging in the expression (22) into the definition of the landscape function \(\mathcal{L}(B,\theta,|\phi\rangle)\) now yields the desired result. The case \(B^{2}=I\) is simply a special case.
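The expansion (22) is easy to verify numerically; the following sketch checks it for a \(4\times 4\) generator with \(B^{3}=B\) built from an embedded \(Y\) block (the matrix and angle are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Check Equation (22): for B with B^3 = B,
# exp(-i theta B) = I + (cos(theta) - 1) B^2 - i sin(theta) B.
Y = np.array([[0, -1j], [1j, 0]])
B = np.zeros((4, 4), dtype=complex)
B[1:3, 1:3] = Y                          # eigenvalues {0, 1, -1, 0} => B^3 = B

theta = 0.73                             # arbitrary angle
lhs = expm(-1j * theta * B)
rhs = np.eye(4) + (np.cos(theta) - 1.0) * (B @ B) - 1j * np.sin(theta) * B
print(np.allclose(lhs, rhs))             # True
```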
### Analytical expressions of GGA-VQE objective functions for the Ising Hamiltonian
Throughout this section, we use the setting and notation of Section 2.4. Our goal now is to demonstrate that for the Ising Hamiltonian defined through Equation (17) and the minimal hardware-efficient pool \(\mathbb{P}\) given by (18), the objective function
\(\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)\) has the following simple structure:
\[\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)=\begin{cases}\Big{\langle}\Psi^{(m-1)}\Big{|}H\Big{|}\Psi^{(m-1)}\Big{\rangle}&\\ \quad+\sin(2\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}hZ_{p}-J(X_{p}Z_{p+1}+Z_{p-1}X_{p}\delta_{p>0})\Big{|}\Psi^{(m-1)}\Big{\rangle}&\text{if}\;\;B=Y_{p},\\ \quad-2\sin^{2}(\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}hX_{p}+J(Z_{p}Z_{p+1}+Z_{p-1}Z_{p}\delta_{p>0})\Big{|}\Psi^{(m-1)}\Big{\rangle}&\\[2ex] \Big{\langle}\Psi^{(m-1)}\Big{|}H\Big{|}\Psi^{(m-1)}\Big{\rangle}&\\ \quad+\sin(2\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}h(Z_{p}Z_{p+1}-Y_{p}Y_{p+1})\Big{|}\Psi^{(m-1)}\Big{\rangle}&\\ \quad-\sin(2\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}J(X_{p+1}+Z_{p}X_{p+1}Z_{p+2}\delta_{p+2<N})\Big{|}\Psi^{(m-1)}\Big{\rangle}&\text{if}\;\;B=Z_{p}Y_{p+1},\\ \quad-2\sin^{2}(\theta)\Big{\langle}\Psi^{(m-1)}\Big{|}hX_{p}+hX_{p+1}+JZ_{p}Z_{p+1}+JZ_{p+1}Z_{p+2}\delta_{p+2<N}\Big{|}\Psi^{(m-1)}\Big{\rangle}.&\end{cases} \tag{23}\]
To show that Equation (23) indeed holds, we first recall the definition of the objective function \(\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)\) which is given by
\[\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)=\langle\Psi^{(m-1)}|\exp(\imath\theta B)H\exp(-\imath\theta B)|\Psi^{(m-1)}\rangle\,, \tag{24}\]
where \(B\in\mathbb{P}\) is any Hermitian generator from the minimal hardware-efficient operator pool, the parameter \(\theta\in[-\pi,\pi)\), and \(|\Psi^{(m-1)}\rangle\) denotes the previous ansatz wave-function.
Next, we recall from Equation (13) that the involutory property of the Hermitian generators from the minimal hardware-efficient pool yields the following simplification of Equation (24):
\[\mathcal{L}(B,\theta,|\Psi^{(m-1)}\rangle)=\cos^{2}(\theta)\left\langle\Psi^{(m-1)}|H|\Psi^{(m-1)}\right\rangle+\frac{\sin(2\theta)}{2}\left\langle\Psi^{(m-1)}|\imath[B,H]|\Psi^{(m-1)}\right\rangle+\sin^{2}(\theta)\left\langle\Psi^{(m-1)}|BHB|\Psi^{(m-1)}\right\rangle. \tag{25}\]
Consequently, in order to arrive at Equation (23), we have to simplify each term involving the Ising Hamiltonian and minimal hardware-efficient generator \(B\) appearing in Equation (25).
To do so, recall that we denote the total number of qubits (i.e., the size of the quantum register) by \(N\in\mathbb{N}\), and fix an index \(p\in\{0,\ldots,N-2\}\). Using now the commutation relations of the Pauli matrices, a direct calculation reveals that
\[[Y_{p},H]=-2\imath hZ_{p}+2\imath J(X_{p}Z_{p+1}+Z_{p-1}X_{p}\delta_{p>0})\]
and
\[[Z_{p}Y_{p+1},H]=2\imath h(Y_{p}Y_{p+1}-Z_{p}Z_{p+1})+2\imath J(X_{p+1}+Z_{p}X_{p+1}Z_{p+2}\delta_{p<N-2}). \tag{26}\]
A similar calculation utilising once again the commutation relations of the Pauli matrices further yields that
\[Y_{p}HY_{p}=H-2hX_{p}-2J(Z_{p}Z_{p+1}+Z_{p-1}Z_{p}\delta_{p>0}),\]
and
\[Z_{p}Y_{p+1}HZ_{p}Y_{p+1}=H-2h(X_{p}+X_{p+1})-2J(Z_{p}Z_{p+1}+Z_{p+1}Z_{p+2}\delta_{p<N-2}). \tag{27}\]
The result now follows by plugging in Equations (26) and (27) into Equation (25).
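The Pauli-algebra identities above can be spot-checked numerically; the sketch below verifies the \([Y_{p},H]\) commutator and the conjugation identity (27) for an interior site of a small chain (the values of \(N\), \(h\), \(J\) and \(p\) are arbitrary):

```python
import numpy as np

# Spot-check of the commutator/conjugation identities used in this appendix.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def op(ps, N):
    """Tensor product with Pauli ps[q] acting on qubit q (identity elsewhere)."""
    out = np.array([[1.0 + 0j]])
    for q in range(N):
        out = np.kron(out, ps.get(q, I2))
    return out

N, h, J, p = 4, 0.5, 0.2, 1              # interior site: delta_{p>0} terms active
H = sum(h * op({q: X}, N) for q in range(N)) \
  + sum(J * op({q: Z, q + 1: Z}, N) for q in range(N - 1))

Yp = op({p: Y}, N)
lhs = Yp @ H - H @ Yp
rhs = -2j * h * op({p: Z}, N) \
      + 2j * J * (op({p: X, p + 1: Z}, N) + op({p - 1: Z, p: X}, N))
print(np.allclose(lhs, rhs))             # True

B = op({p: Z, p + 1: Y}, N)
lhs2 = B @ H @ B
rhs2 = H - 2 * h * (op({p: X}, N) + op({p + 1: X}, N)) \
         - 2 * J * (op({p: Z, p + 1: Z}, N) + op({p + 1: Z, p + 2: Z}, N))
print(np.allclose(lhs2, rhs2))           # True
```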
### Reducing the computational complexity of the energy sorting algorithm for general spin chains
As demonstrated in Section 2.4, the specific structure of the transverse-field Ising Hamiltonian leads to a huge reduction in the computational cost of the energy sorting step of the GGA-VQE algorithm. Indeed, while the energy sorting step a priori requires \(\mathcal{O}(M)\) measurements for a general system Hamiltonian and an operator pool of size \(M\), the number of required measurements reduces to just _five_ in the case of the one-dimensional transverse-field Ising Hamiltonian. The goal of this section is to briefly describe similar reductions in the computational complexity of the energy sorting algorithm for Ising spin-chain Hamiltonians with local magnetic fields and couplings in all three spatial directions, i.e., Hamiltonians of the form
\[H=\sum_{k=0}^{N-1}h_{k}^{x}X_{k}+\sum_{k=0}^{N-1}h_{k}^{z}Z_{k}+\sum_{k=0}^{N-2} J_{k}^{x}X_{k}X_{k+1}+\sum_{k=0}^{N-2}J_{k}^{y}Y_{k}Y_{k+1}+\sum_{k=0}^{N-2}J_{k}^{z}Z_{k}Z _{k+1}. \tag{28}\]
Here, \(h_{k}^{x}\) and \(h_{k}^{z}\) denote constants that model the intensity of the magnetic field along the \(x\) and \(z\) directions while \(J_{k}^{x},J_{k}^{y}\) and \(J_{k}^{z}\) are constants that model the strength of the nearest-neighbour interactions in the \(x,y,\) and \(z\) directions respectively.
Tables 1 and 2 list the terms of interest that appear in the one-dimensional GGA-VQE landscape functions that are used to perform the energy sorting step. Comparing the terms that appear in Tables 1 and 2 with the simpler expressions for the transverse-field Ising Hamiltonian from Section 2.4, we see that the only new terms that arise are of the form \(Z_{p-2}Z_{p-1}X_{p}\) and \(Y_{p-1}X_{p}\). As before, we can simultaneously measure such operators acting on a disjoint set of qubits, a process that will require an additional five quantum circuits at each step. Consequently, applying the GGA-VQE algorithm to general Ising Hamiltonians of the form (28) will require constructing and measuring at most ten quantum circuits, irrespective of the number of qubits and the size of the minimal operator pool.
Finally, let us remark that we expect similar but likely less drastic simplifications to also hold for Hamiltonians arising from other physical models.
|
2310.03812 | Fishnets: Information-Optimal, Scalable Aggregation for Sets and Graphs | Set-based learning is an essential component of modern deep learning and
network science. Graph Neural Networks (GNNs) and their edge-free counterparts
Deepsets have proven remarkably useful on ragged and topologically challenging
datasets. The key to learning informative embeddings for set members is a
specified aggregation function, usually a sum, max, or mean. We propose
Fishnets, an aggregation strategy for learning information-optimal embeddings
for sets of data for both Bayesian inference and graph aggregation. We
demonstrate that i) Fishnets neural summaries can be scaled optimally to an
arbitrary number of data objects, ii) Fishnets aggregations are robust to
changes in data distribution, unlike standard deepsets, iii) Fishnets saturate
Bayesian information content and extend to regimes where MCMC techniques fail
and iv) Fishnets can be used as a drop-in aggregation scheme within GNNs. We
show that by adopting a Fishnets aggregation scheme for message passing, GNNs
can achieve state-of-the-art performance versus architecture size on
ogbn-protein data over existing benchmarks with a fraction of learnable
parameters and faster training time. | T. Lucas Makinen, Justin Alsing, Benjamin D. Wandelt | 2023-10-05T18:01:04Z | http://arxiv.org/abs/2310.03812v2 | # Fishnets: Information-Optimal, Scalable Aggregation for Sets and Graphs
###### Abstract
Set-based learning is an essential component of modern deep learning and network science. Graph Neural Networks (GNNs) and their edge-free counterparts Deepsets have proven remarkably useful on ragged and topologically challenging datasets. The key to learning informative embeddings for set members is a specified aggregation function, usually a sum, max, or mean. We propose Fishnets, an aggregation strategy for learning information-optimal embeddings for sets of data for both Bayesian inference and graph aggregation. We demonstrate that i) Fishnets neural summaries can be scaled optimally to an arbitrary number of data objects, ii) Fishnets aggregations are robust to changes in data distribution, unlike standard deepsets, iii) Fishnets saturate Bayesian information content and extend to regimes where MCMC techniques fail and iv) Fishnets can be used as a drop-in aggregation scheme within GNNs. We show that by adopting a Fishnets aggregation scheme for message passing, GNNs can achieve state-of-the-art performance versus architecture size on ogbn-protein data over existing benchmarks with a fraction of learnable parameters and faster training time.
Machine Learning, ICML
## 1 Introduction
Aggregating information from independent data in an optimal way is a fundamental problem in statistics and machine learning. On one hand, frequentist analyses need optimal estimators for data compression, while on the other Bayesian analyses need small informative summaries for simulation-based inference (SBI) schemes (Cranmer et al., 2020). In a deep learning context graph neural networks (GNNs) rely on aggregation schemes to pool information over large data structures, where each feature might be weakly informative, but at a graph level might contribute a lot of information for predictive or regression tasks (Zhou et al., 2020).
Up until now, graph aggregation schemes have relied on simple, fixed operations such as mean, max, and sum (Kipf and Welling, 2017; Hamilton et al., 2017; Xu et al., 2019), variance, or trainable variants of these aggregators (Battaglia et al., 2018; Li et al., 2020). We introduce a new optimal aggregation scheme grounded in information-theoretic principles. By leveraging the additive structure of the log-likelihood for independent data and the underlying Fisher curvature, we can construct a learned summary space that asymptotically contains maximal information (Vaart, 1998; Coulton and Wandelt, 2023). We show that this formalism captures relevant information in both a Bayesian inference context and for edge aggregation in graphs.
Our approach boasts several advantages. By explicitly learning the score and corresponding inverse-Fisher weights, we are able to construct aggregated summaries that are both asymptotically optimal and robust to changes in data distribution. The result is that we are able to construct optimal summary statistics for independent data for SBI applications, and using the same formalism are able to beat key benchmark GNN learning tasks with far smaller architectures in faster training time than leading networks.
This paper is organised as follows: We first present the Fishnets neural embedding method and review relevant related work in common notation. Next we demonstrate information saturation, robustness, and scalability in a Bayesian context for increasingly difficult problems, and highlight where existing aggregators fall short. Finally, we show that adopting Fishnets aggregation as a drop-in replacement for existing GNN architectures allows networks to outperform standard benchmark architectures with fewer learnable parameters and faster training time.
## 2 Method: Optimal Aggregation of Independent (Heterogeneous) Data
Maximum likelihood estimators (MLEs) are the asymptotically-optimal estimators for predictive tasks. When they are available, they provide an optimally-informative embedding of the data with respect to the parameters of interest, \(\mathbf{\theta}\)(Alsing and Wandelt, 2018).
Many inference problems consist of a set of \(n_{\mathrm{data}}\) data vectors, \(\{\mathbf{d}_{i}\}_{i=1}^{n_{\mathrm{data}}}\), which obey a global model controlled by parameters \(\mathbf{\theta}\) and a possibly arbitrarily deep hierarchy of latent values, \(\eta\). The full data likelihood for the parameters of interest \(\mathbf{\theta}\) is given by the integral over the latents,
\[p(\{\mathbf{d}_{i}\}|\mathbf{\theta})=\int p(\{\mathbf{d}_{i}\}|\mathbf{\theta},\eta)p (\eta|\mathbf{\theta})d\eta. \tag{1}\]
When the data are independently distributed, their log-likelihood takes the form
\[\ln p(\{\mathbf{d}_{i}\}|\mathbf{\theta})=\sum_{i=1}^{n_{\mathrm{data}}}\ln p( \mathbf{d}_{i}|\mathbf{\theta}). \tag{2}\]
A maximum likelihood estimator can then be formed (iteratively) by the Fisher scoring method (Alsing and Wandelt, 2018):
\[\hat{\mathbf{\theta}}^{\mathrm{MLE}}=\mathbf{\theta}_{\mathrm{fid}}+\mathbf{F}^{-1} \mathbf{t}, \tag{3}\]
which requires knowledge of the score, \(\mathbf{t}\), and Fisher matrix, \(\mathbf{F}\). For problems like linear regression where the analytic forms of \(\mathbf{F}\) and \(\mathbf{t}\) are known, Eq. (3) gives the exact MLE for the parameters in a single iteration in the Gaussian approximation, given the dataset. In the case of independent data, both the score and Fisher information are additive.
Taking the gradient of the log-likelihood with respect to the parameters, the score \(\mathbf{t}=\mathbf{\nabla}_{\mathbf{\theta}}\ln p(\{\mathbf{d}_{i}\}|\mathbf{\theta})\) for the full dataset is the sum of the scores of the individual data points:
\[\mathbf{t}=\sum_{i=1}^{n_{\mathrm{data}}}\mathbf{\nabla}_{\mathbf{\theta}}\ln p( \mathbf{d}_{i}|\mathbf{\theta})=\sum_{i=1}^{n_{\mathrm{data}}}\mathbf{t}_{i}( \mathbf{d}_{i}) \tag{4}\]
Taking the gradient again yields the Hessian, whose negative is the (observed) Fisher information matrix (Amari, 2021; Vaart, 1998) for the dataset,

\[\mathbf{F}=-\sum_{i=1}^{n_{\mathrm{data}}}\mathbf{\nabla}_{\mathbf{\theta}}\mathbf{\nabla}_{\mathbf{\theta}}^{T}\ln p(\mathbf{d}_{i}|\mathbf{\theta})=\sum_{i=1}^{n_{\mathrm{data}}}\mathbf{F}_{i}(\mathbf{d}_{i}), \tag{5}\]
which is also comprised of a sum of Fisher matrices of individual data. Once the score and Fisher matrix for a dataset are known, the two can be combined to form a pseudo-maximum likelihood estimate (MLE) for the target parameters following Equation (3). Therefore, constructing optimal embeddings of independent data with respect to specific quantities of interest just requires aggregating the scores and Fishers, and combining them as in Eq. 3. However, in general explicit forms for the likelihood (per data vector) may not be known. In this general case, as we will show in the following section, we can parameterize and learn the score and Fisher information using neural networks.
### Twin Fisher-Score Networks
For many problems, however, the exact forms of the Fisher and score are not known. Here we propose learning these functions with neural networks. Due to the additive structure of Eqs (4) and (5), we can parameterize the _per-datapoint_ score and Fisher with twin neural networks:
\[\hat{\mathbf{t}}_{i}=\mathbf{t}(\mathbf{d}_{i},\mathbf{\sigma}_{i};w _{t});\quad\mathbf{t}_{\mathrm{NN}}=\sum_{i}^{n_{\mathrm{data}}}\hat{\mathbf{ t}}_{i} \tag{6}\] \[\hat{\mathbf{F}}_{i}=\mathbf{F}(\mathbf{d}_{i},\mathbf{\sigma}_{i};w _{F});\quad\mathbf{F}_{\mathrm{NN}}=\sum_{i}^{n_{\mathrm{data}}}\hat{\mathbf{ F}}_{i} \tag{7}\]
where the score and Fisher network are parameterized by weights \(w_{t}\) and \(w_{F}\), respectively. The twin networks output a score and Fisher for each datapoint (see Appendix A for formalism), which are then each summed to obtain a global score and Fisher for the dataset. We can then compute parameter estimates using these aggregated embeddings following Eq. 3:
\[\hat{\mathbf{\theta}}_{\mathrm{NN}}=\mathbf{\theta}_{\mathrm{fid}}+\mathbf{F}_{ \mathrm{NN}}^{-1}\mathbf{t}_{\mathrm{NN}} \tag{8}\]
Provided the embeddings \(\hat{\mathbf{t}}_{i}\) and \(\hat{\mathbf{F}}_{i}\) are learned sufficiently well, the summation formalism can be used to obtain Fisher and score estimates for datasets with heterogeneous structure and arbitrary size. These summaries can be regarded as sufficient statistics, since the score as a function of parameters could in principle be used to reconstruct the likelihood surface up to a constant (Alsing and Wandelt, 2018; Hoffmann and Onnela, 2022).
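A minimal NumPy sketch of this aggregation pipeline is given below, assuming two parameters of interest; the random-weight toy networks stand in for the trained twin MLPs, and positive definiteness of each \(\hat{\mathbf{F}}_{i}\) is enforced through a Cholesky parameterization (an implementation choice assumed here for illustration, not prescribed by the equations above).

```python
import numpy as np

# Toy twin networks for the Fishnets aggregation of Eqs. (6)-(8).
rng = np.random.default_rng(0)
n_params, d_in, d_hidden = 2, 3, 32
W1, W2 = rng.normal(size=(d_in, d_hidden)), rng.normal(size=(d_hidden, n_params))
V1 = rng.normal(size=(d_in, d_hidden))
n_chol = n_params * (n_params + 1) // 2
V2 = rng.normal(size=(d_hidden, n_chol))

def score_net(d):                       # t_i in R^{n_params}, one per datum
    return np.tanh(d @ W1) @ W2

def fisher_net(d):                      # F_i, positive definite via Cholesky factors
    c = np.tanh(d @ V1) @ V2            # (n_data, n_chol) Cholesky entries
    L = np.zeros((d.shape[0], n_params, n_params))
    idx = np.tril_indices(n_params)
    L[:, idx[0], idx[1]] = c
    di = np.arange(n_params)
    L[:, di, di] = np.log1p(np.exp(L[:, di, di]))   # softplus keeps diagonal > 0
    return L @ np.transpose(L, (0, 2, 1))

# Aggregate a set of n_data independent data vectors, e.g. [y_i, x_i, sigma_i^2]
data = rng.normal(size=(500, d_in))
t_total = score_net(data).sum(axis=0)               # Eq. (6)
F_total = fisher_net(data).sum(axis=0)              # Eq. (7)
theta_fid = np.zeros(n_params)
theta_hat = theta_fid + np.linalg.solve(F_total, t_total)   # Eq. (8)
print(theta_hat)
```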
**Loss Function.** The twin networks can then be trained jointly using a negative-log Gaussian loss:
\[\mathcal{L}=\frac{1}{2}(\mathbf{\theta}-\hat{\mathbf{\theta}}_{\mathrm{NN}})^{T} \mathbf{F}_{\mathrm{NN}}(\mathbf{\theta}-\hat{\mathbf{\theta}}_{\mathrm{NN}})-\frac{ 1}{2}\ln\det\mathbf{F}_{\mathrm{NN}}. \tag{9}\]
Minimizing this loss ensures that information is saturated by maximising the aggregated Fisher information, while simultaneously forcing the distance between the embedded MLE and the true parameters to respect the Cramér-Rao bound (Cramér, 1946) as a function of parameters. This loss can also be interpreted as a maximum-likelihood (MLE) loss for the parameters of interest \(\theta\), as opposed to typical mean-square error (MSE) regression losses (see Appendix C for deepsets formalism details).
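For reference, a direct transcription of Eq. (9) for a single dataset might look as follows (in practice the loss would be averaged over a training batch and differentiated through both networks):

```python
import numpy as np

def fishnets_loss(theta, theta_hat, F):
    """Negative-log Gaussian loss of Eq. (9) for a single dataset:
    0.5 (theta - theta_hat)^T F (theta - theta_hat) - 0.5 log det F."""
    r = theta - theta_hat
    _, logdet = np.linalg.slogdet(F)
    return 0.5 * r @ F @ r - 0.5 * logdet

# Example with a two-parameter aggregated Fisher matrix
theta = np.array([1.0, -0.5])
theta_hat = np.array([0.9, -0.45])
F = np.array([[50.0, 5.0], [5.0, 20.0]])
print(fishnets_loss(theta, theta_hat, F))
```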
## 3 Related Work
**Deepsets Mean Aggregation.** A comparable method for learning over sets of data is regression using the Deepsets (DS) formalism (Zaheer et al., 2018). Here an embedding \(f(\textbf{d}_{i};w_{1})\) is learned for each datum, and then aggregated with a fixed permutation-invariant scheme and fed to a global function \(g\):
\[\hat{\mathbf{\theta}}=g\left(\bigoplus_{i=1}^{n_{\mathrm{data}}}f(\textbf{d}_{i};w _{1});\;\;w_{2}\right), \tag{10}\]
The networks are optimised by minimising a squared loss against the true parameters, \(\mathrm{MSE}(\hat{\mathbf{\theta}},\mathbf{\theta})\). When the aggregation is chosen to be the mean, the deepsets formalism is scalable to arbitrary data and becomes equivalent to the Fishnets aggregation formalism _with flat weights across the aggregated data_ (see Appendix C for an in-depth treatment).
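For comparison with the Fishnets sketch above, a minimal mean-aggregated deepsets head following Eq. (10) could read as follows (random toy weights stand in for the trained MLPs \(f\) and \(g\)):

```python
import numpy as np

# Mean-aggregated deepsets head (Eq. 10); W_f, W_g are toy stand-ins for the
# trained embedding network f(.; w_1) and global network g(.; w_2).
rng = np.random.default_rng(0)
W_f = rng.normal(size=(3, 16))
W_g = rng.normal(size=(16, 2))

def deepset_predict(data):
    embeddings = np.tanh(data @ W_f)     # f(d_i; w_1), one row per set member
    pooled = embeddings.mean(axis=0)     # fixed permutation-invariant aggregation
    return pooled @ W_g                  # g(.; w_2)

print(deepset_predict(rng.normal(size=(500, 3))))
```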
**Learned Softmax Aggregation.** Li et al. (2020) present a learnable softmax counterpart to the DS aggregation scheme in the context of edge aggregation in GNNs. Using the above notation, their aggregation scheme reads:
\[\text{SoftmaxAgg}(\cdot)=\sum_{i=1}^{n_{\mathrm{data}}}\frac{\exp\left(\beta \textbf{d}_{i}\right)}{\sum_{l}\exp\left(\beta\textbf{d}_{l}\right)}\cdot \textbf{d}_{i} \tag{11}\]
where \(\beta\) is a learned scalar temperature parameter. They show that adopting this aggregation scheme allows more graph convolution (GCN) layers to be stacked efficiently to deepen GNN models. Many other aggregation frameworks have been studied, including Graph Attention (Velickovic et al., 2018), LSTM units (Hamilton et al., 2017), and scaled multiple aggregators (Corso et al., 2020).
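A minimal NumPy transcription of Eq. (11), applied feature-wise across the set axis, is sketched below; as the comments note, \(\beta\to 0\) recovers mean aggregation while large \(\beta\) approaches max aggregation.

```python
import numpy as np

def softmax_agg(d, beta):
    """SoftmaxAgg of Eq. (11), applied feature-wise across the set axis;
    beta -> 0 recovers the mean, large beta approaches the max."""
    logits = beta * d
    w = np.exp(logits - logits.max(axis=0))   # numerically stabilised weights
    w /= w.sum(axis=0)
    return (w * d).sum(axis=0)

d = np.random.default_rng(0).normal(size=(500, 16))
print(softmax_agg(d, beta=1.0).shape)         # (16,)
```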
## 4 Experiments: Bayesian Information Saturation
Bayesian Simulation Based Inference (SBI) provides a framework in which to perform inference with an intractable likelihood. There have been massive developments in SBI, such as neural ratio estimation (Miller et al., 2021) and density estimation (Alsing et al., 2019; Papamakarios et al., 2019). Key to all of these methods is compressing a large number of data down to small summaries, typically one informative summary per parameter of interest, to preserve information (Alsing and Wandelt, 2018; Charnock et al., 2018; Makinen et al., 2021). ML methods like regression (Jeffrey and Wandelt, 2020) and information-maximising neural networks (Charnock et al., 2018; Makinen et al., 2022, 2021) are very good at learning embeddings for highly structured data like images, and can do so losslessly (Makinen et al., 2021). For unstructured datasets comprised of many independent data, the task of constructing optimal summaries amounts to an aggregation task (Zaheer et al., 2018; Hoffmann and Onnela, 2022; Wagstaff et al., 2019). The Fishnets formalism is an optimal version of this aggregation. What deepsets and "learned" aggregation functions are missing is the explicit construction of the inverse-Fisher weights per datapoint, as well as of the total Fisher information, which is required to turn summaries into unbiased estimators (Alsing and Wandelt, 2018). Explicitly learning the \(\textbf{F}^{-1}\) weights in addition to the score allows us to achieve 1) asymptotic optimality, 2) scalability, and 3) robustness to changes in information content among the data.
In this section we demonstrate the 1) information saturation, 2) scalability, and 3) robustness of the Fishnets aggregation through two examples in the context of SBI, and highlight the shortcomings of existing aggregators.
We first investigate a linear regression scaling problem and introduce a robustness test in which Fishnets outperforms deepset and learned softmax aggregation on test data. We then extend Fishnets to an inference problem with nuisance (latent) parameters and censorship to demonstrate the applicability of network scaling to a regime where MCMC becomes intractable.
### Validation Case: Linear Regression
We use a toy linear regression model to validate our method and demonstrate network scalability. We consider the form \(y=mx+b+\epsilon\), where \(\epsilon\sim\mathcal{N}(0,\sigma)\), and the parameters of interest are the slope and intercept \(\mathbf{\theta}=(m,b)\). This likelihood has an analytically-calculable score and Fisher matrix (see Appendix B.1), which can be used to calculate exact MLE estimates for the parameters \(\mathbf{\theta}=(m,b)\) via Eq. (3). We choose wide Gaussian priors for \(\theta\), and uniform distributions for \(x\in[0,10]\) and \(\sigma\in[1,10]\). For network training, we simulate \(10^{4}\) datasets of size \(n_{\mathrm{data}}=500\) datapoints. We use fully-connected MLPs of size [256, 256, 256] with ELU activations (Clevert et al., 2015) for both score and Fisher networks. Both networks receive the input data \([y_{i},x_{i},\sigma_{i}^{2}]^{T}\). We train networks for 2500 epochs with an Adam optimizer using a step learning-rate decay schedule. We train an ensemble of 10 networks in parallel on the same training data with different initializations. For testing, we generate an additional \(10^{4}\) datasets of size \(n_{\mathrm{data}}=10^{4}\) datapoints to demonstrate scalability.

Figure 1: Fishnets saturate information for datasets 20 times larger than the training set. Residual maximum likelihood estimates for slope (_left_) and intercept (_right_) scatter about the truth for linear regression test datasets of size \(n_{\mathrm{data}}=10^{4}\). The solid pink line is obtained from a weighted average of an ensemble of Fishnets networks, which were trained on datasets of size \(n_{\mathrm{data}}=500\).
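As a sketch of the analytic baseline (the explicit score and Fisher expressions live in Appendix B.1, not reproduced here, but they follow directly from the Gaussian likelihood), one Fisher-scoring step from any fiducial point recovers the exact MLE for this linear model:

```python
import numpy as np

# One Fisher-scoring step (Eq. 3) with the analytic per-datum score (Eq. 4)
# and Fisher (Eq. 5) of the heteroscedastic linear model y = m x + b + eps.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
sigma = rng.uniform(1, 10, n)
m_true, b_true = 2.0, -1.0
y = m_true * x + b_true + sigma * rng.normal(size=n)

m0, b0 = 0.0, 0.0                                  # fiducial expansion point
r = y - (m0 * x + b0)                              # residuals at the fiducial
t = np.array([(r * x / sigma**2).sum(),            # score, summed over the data
              (r / sigma**2).sum()])
F = np.array([[(x**2 / sigma**2).sum(), (x / sigma**2).sum()],
              [(x / sigma**2).sum(), (1.0 / sigma**2).sum()]])   # Fisher matrix
theta_mle = np.array([m0, b0]) + np.linalg.solve(F, t)           # exact MLE
print(theta_mle)   # close to (2.0, -1.0); the Fishnets embedding must match this
```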
**Results.** We display a comparison of test-set performance against the true MLE solution in Figure 1, and slices of the true and predicted score vectors as a function of the input data in Figure 2. The networks are able to recover the exact score and Fisher information matrices (see Figure 2), even when scaled up 20-fold. _This test demonstrates that Fishnets can saturate information (1) on small training sets, enabling scalable predictions on far larger aggregations of data (2)._
### Robustness to Changes in the Underlying Data Distributions
In real-world applications, actual data processed by a network might follow a different distribution than that seen in training. Here we compare three different network formalisms on changing shapes of target data distributions.
We train three networks on the same \(n_{\mathrm{data}}=500\) datasets as before: a sum-aggregated Fishnets network, a mean-aggregated deepset, and a learned softmax-aggregated deepset (no Fisher output and a standard MSE loss against the true parameters, \(\mathrm{MSE}(\hat{\mathbf{\theta}},\mathbf{\theta})\)). Here we initialise Fishnets with [50, 50, 50] hidden units for the score and Fisher networks, and two embeddings of [128, 128, 128] hidden units for both deepset networks, all with swish (Ramachandran et al., 2017) nonlinearities for the data embedding (see Table 1). All networks are initialised with the same seed (see Appendix B.2 for architecture details).
We apply our trained networks to test data \(n_{\mathrm{data}}=850\) with noise variances and \(x\) values drawn from different distributions to the training data: \(\sigma\sim\text{Exp}(\lambda=1.0)\) centred at \(\sigma=3.5\), truncated at \(\sigma=10.0\), and \(x\sim\mathcal{U}(0,3)\). The noise and covariate distributions have the same support as the training data, but have different expectation values and shapes, which can pose a problem for the mean aggregation used in the standard deepsets formalism. We display results in Figure 3.
The heterogeneous Fishnets aggregation allows the network to correctly embed the noisy data drawn from the different distributions, while a significant loss in information can be seen for flat mean aggregation. The learned softmax aggregation improves the width of the residual distribution, but is still significantly wider than the Fishnets solution. We quote numeric results in Table 1.
These robustness tests show that Fishnets successfully learns _per-object_ embeddings (score) and weights (Fisher) within sets, _while being robust to changing shapes of the training distributions of these quantities (3)_. This test also shows that even in a very simple prediction scenario, _common and learned aggregators can suffer from robustness issues_.
### Scalable Inference With Censorship and Nuisance Parameters
As a non-trivial information saturation example we consider a censorship inference problem with latent parameters inspired by epidemiological studies. Consider a serum which, when injected into a patient, decays over time, and whose (heterogeneous) decay rate among people is not well known. A population of patients is injected with the serum and then asked to come back to the lab within \(t_{\mathrm{max}}=10\) days for a measurement of the remaining serum level in their blood, \(s\). We can cast this problem using the following hierarchical model:
\[\mu\sim\mathcal{U}(0.5,10);\ \ \Theta\sim\mathcal{U}(0.1,1.5)\] \[\gamma_{i}\sim\text{Gamma}(\alpha=\mu/\Theta,\beta=1/\Theta)\] \[\tau_{i}\sim\mathcal{U}(0,10);\ \ \lambda_{i}=A\exp(-\tau_{i}/\gamma_{i})\] \[s_{i}\sim\text{Pois}(\lambda_{i}),\]
where the goal is to infer the mean \(\mu\) and scale \(\Theta\) of the decay rate Gamma distribution from the data, \(\{\tau_{i},s_{i}\}\). In the censored case, measurements are rejected if \(s_{i}<s_{\mathrm{min}}\), and collected until \(n_{\mathrm{data}}\) valid samples are collected. The hierarchical model is visualised in a plate diagram in Figure 4. As a ground-truth comparison for the uncensored version
Figure 2: Fishnets achieve the exact form of the score as a function of input data in the linear regression case, indicating information saturation. Slices of true (dark) and network predicted (pink) score vector components as a function of data inputs for the \(n_{\mathrm{data}}=10^{4}\) test set.
of this problem, we sample the above hierarchical model using Hamiltonian Monte-Carlo (HMC). For comparison, we utilize the same Fishnets architecture and small-data training setup as before to predict \((\mu,\Theta)\) from data inputs \([\tau_{i},s_{i}]^{T}\) (see Appendix B.3 for details). Once trained, we generate a new suite of \(n_{\mathrm{data}}=500\) simulations to learn a neural posterior \(p(\hat{\theta}_{\mathrm{NN}}|\theta)\) using a Mixture Density Network, following (Alsing et al., 2019). We then evaluate both the HMC and neural posteriors at the same target data. Finally, using the same network, we repeat the procedure with simulations of size \(n_{\mathrm{data}}=10^{4}\), _where the HMC becomes computationally prohibitive._
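For concreteness, the hierarchical model can be simulated directly; the sketch below is ours, not the paper's code. The serum amplitude \(A\) is not specified in this section, so its value here is a placeholder, and censorship is implemented as in the text by redrawing until \(n_{\mathrm{data}}\) valid samples are collected.

```python
import numpy as np

def simulate(mu, theta, n_data, A=10.0, s_min=None, rng=None):
    """Draw one dataset {tau_i, s_i} from the Gamma-Poisson decay model.

    Gamma(alpha=mu/theta, beta=1/theta) maps to numpy's shape/scale
    convention as shape=mu/theta, scale=theta (mean decay rate mu).
    """
    rng = rng or np.random.default_rng()
    tau, s = np.empty(0), np.empty(0, dtype=int)
    while len(s) < n_data:
        k = n_data - len(s)
        gamma = rng.gamma(shape=mu / theta, scale=theta, size=k)
        t = rng.uniform(0.0, 10.0, size=k)
        si = rng.poisson(A * np.exp(-t / gamma))
        keep = np.ones(k, bool) if s_min is None else (si >= s_min)
        tau, s = np.r_[tau, t[keep]], np.r_[s, si[keep]]
    return tau, s
```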
**Results.** We display inference results in Figure 5. The Fishnets embedding (green) results in slightly inflated contours, indicating a small leakage of information. To demonstrate scaling, we additionally generate another simulation at \(n_{\mathrm{data}}=10^{4}\) using the same random seed. We train another amortised posterior using 5000 simulations at \(n_{\mathrm{data}}=10^{4}\) and pass the data through the same trained Fishnet architecture. The resulting posterior is shown in blue on Figure 5 for comparison.
The summaries obtained from Fishnet compression of the
\begin{table}
\begin{tabular}{l|l|l|l|l} & **network** & **\# params** & \(\mathbf{MSE}(\hat{\mathbf{m}},\mathbf{m_{\mathrm{true}}})\) & \(\mathbf{MSE}(\hat{\mathbf{c}},\mathbf{c_{\mathrm{true}}})\) \\ \hline robustness test & **fishnets** & \(10,855\) & \(\mathbf{0.007\pm 0.017}\) & \(\mathbf{0.046\pm 0.078}\) \\ & deepset & \(87,810\) & \(0.120\pm 0.178\) & \(0.285\pm 0.406\) \\ & softmax & \(87,811\) & \(0.042\pm 0.069\) & \(0.482\pm 0.347\) \\ \end{tabular}
\end{table}
Table 1: Summary of robustness testing for different set-based networks. Fishnets’ Fisher aggregation has an advantage over mean- and learned-softmax deepsets aggregation when test data follow a different distribution than the training suite, and does so with an eighth of the number of learnable parameters.
Figure 4: Gamma population hierarchical Bayesian model diagram. Circles represent random variables, boxes are deterministic quantities, and shaded variables are observed as data. The dashed line represents a possible censorship in measurement. Measurements of data \((t,s)_{i}\) are conducted until \(n_{\mathrm{data}}\) samples are drawn.
Figure 3: Fishnets (pink, panel b) are robust to different noise distributions in test data, shown in panel (a). Panel (b): deepsets (grey) can return biased results for some parameters (_left_) and lossy estimates for others (_right_). Learned softmax aggregation appears to provide lossier and biased parameter estimates.
small data (green) result in posteriors that hug the "true" MCMC contours (black), _indicating information saturation_. Extending the same network to the larger data results in intuitively smaller contours (more information). It should be emphasized that \(n_{\mathrm{data}}=10^{4}\) _is a regime where the MCMC inference is no longer tractable on standard devices_. Fishnets here allows for (1) much faster posterior calculation and (2) optimal inference on larger dataset sizes without any retraining.
As a final demonstration we solve the same problem, this time subject to censorship. In the censored case, the target joint posterior defined by the hierarchical model requires computing an integral for the selection probability as a function of the model hyper-parameters; in general, these selection terms make Bayesian problems with censorship computationally challenging, or intractable in some cases (Qi et al., 2022; Dickey et al., 1987).
We again train the Fishnets architecture on \(10^{4}\) simulations of size \(n_{\mathrm{data}}=500\), subject to censorship below \(s_{\mathrm{min}}\). We obtain posteriors of a similar shape to the uncensored case, but as a consistency check we perform a probability-integral transform (PIT) test on the neural posterior. For each parameter we want the marginal PIT values to be uniformly distributed, showing that the learned posterior behaves as a calibrated continuous distribution. We display these results in Figure 6. We obtain Kolmogorov-Smirnov test (Massey Jr., 1951) p-values of 0.628 and 0.233 for parameters \(\mu\) and \(\Theta\), respectively, indicating that our posterior is well-parameterized and robust.
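The PIT check itself is straightforward to reproduce from posterior draws; a minimal sketch (variable names illustrative), using scipy's Kolmogorov-Smirnov test against the uniform distribution:

```python
import numpy as np
from scipy import stats

def pit_values(posterior_samples, theta_true):
    """Marginal PIT: fraction of posterior samples below the true value.

    posterior_samples: (n_datasets, n_samples) draws from each learned
    posterior; theta_true: (n_datasets,) generating parameters. For a
    well-calibrated posterior the returned values are U(0, 1).
    """
    return np.mean(posterior_samples <= theta_true[:, None], axis=1)

# e.g. for the parameter mu:
# u = pit_values(samples_mu, mu_true)
# ks_stat, p_value = stats.kstest(u, "uniform")
```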
## 5 Graph Neural Network Aggregation
Graphs can be thought of as tuples of sets within connected neighborhoods. Graph neural networks (GNNs) operate by message-passing along edges between nodes. For predicting node- and graph-level properties, an aggregation of these sets of edges \(\{e_{ij}\}\) or nodes \(\{v_{i}\}\) is required to reduce features to fixed-size feature vectors.
Here we compare the Fishnets aggregation scheme as a drop-in replacement for learned softmax aggregators within the graph convolutional network (GCN) scheme presented by Li et al. (2020). We can rewrite our aggregations to occur within neighborhoods of nodes:
\[\text{SoftmaxAgg}(\cdot) =\sum_{i\in\mathcal{N}(v)}\frac{\exp\left(\beta\mathbf{e}_{iv}\right)}{\sum_{l\in\mathcal{N}(v)}\exp\left(\beta\mathbf{e}_{lv}\right)}\cdot\mathbf{e}_{iv}, \tag{12}\] \[\text{FishnetsAgg}(\cdot) =\left(\sum_{i\in\mathcal{N}(v)}\mathbf{F}(\mathbf{e}_{iv})\right)^{-1}\left(\sum_{i\in\mathcal{N}(v)}\mathbf{t}(\mathbf{e}_{iv})\right), \tag{13}\]
where the aggregation occurs in a neighborhood \(\mathcal{N}\) of a node \(v\). The Fishnets aggregation requires a bottleneck hyperparameter, \(n_{p}\), which controls the size of the score embedding \(\mathbf{t}(\mathbf{e}_{iv})\in\mathbb{R}^{n_{p}}\) and Fisher Cholesky factors \(\mathbf{F}_{\mathrm{chol}}\in\mathbb{R}^{n_{p}(n_{p}+1)/2}\). We use a single linear layer before aggregation to obtain score and Fisher components from hidden layer embeddings.
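To make Eq. (13) concrete, the following PyTorch sketch aggregates a single neighborhood. It is a sketch, not the authors' implementation: the softplus on the Cholesky diagonal (keeping each \(\mathbf{F}_i\) positive definite) and the small jitter added before the solve are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fnn

class FishnetsAgg(nn.Module):
    """Fisher-weighted aggregation over one neighborhood, Eq. (13)."""

    def __init__(self, d_hidden: int, n_p: int):
        super().__init__()
        self.n_p = n_p
        self.to_score = nn.Linear(d_hidden, n_p)
        self.to_chol = nn.Linear(d_hidden, n_p * (n_p + 1) // 2)
        self.register_buffer("tril", torch.tril_indices(n_p, n_p))

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (k, d_hidden) embeddings of the k edges e_iv in N(v)
        t = self.to_score(e).sum(dim=0)                      # sum_i t(e_iv)
        L = e.new_zeros(e.shape[0], self.n_p, self.n_p)
        L[:, self.tril[0], self.tril[1]] = self.to_chol(e)   # fill lower triangle
        # softplus-ed diagonal keeps each F_i = L_i L_i^T positive definite (assumption)
        diag = Fnn.softplus(torch.diagonal(L, dim1=1, dim2=2))
        L = torch.tril(L, diagonal=-1) + torch.diag_embed(diag)
        F = (L @ L.transpose(1, 2)).sum(dim=0)               # sum_i F(e_iv)
        F = F + 1e-6 * torch.eye(self.n_p, device=e.device)  # jitter for a stable solve
        return torch.linalg.solve(F, t)                      # F^{-1} t, shape (n_p,)
```

In a full GNN this forward pass would be vectorized over all neighborhoods, e.g. with scatter operations over the edge index.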
Figure 5: The same Fishnets network can be used for inference on datasets much larger than those used in training. The twin Fishnet architecture was trained on \(n_{\mathrm{data}}=500\). We then compress a target dataset and perform density estimation (green) and compare to an MCMC sampler as our true posterior (black dashed). Fishnets nearly saturates the information. We then _use the same network to compress simulations of \(n_{\mathrm{data}}=10^{4}\)_ to obtain the blue contours.
Figure 6: Density estimation posteriors obtained from parameter-Fishnets summary pairs are robust over training data for the censorship test. Each parameter’s PIT test is close to uniform, which shows that the Fishnets summary posterior has successfully captured the underlying Bayesian information from the data.
### Testing Aggregations on ogbn-proteins Benchmark
The ogbn-proteins dataset (Hu et al., 2020, 2021) comprises "association scores" in eight different categories (edges) between a large set of proteins (nodes). The goal of a GNN in this benchmark is to aggregate edge association embeddings to predict protein node properties, parameterized as a 112-class categorization task. Here we expect different node neighborhoods to have a heterogeneous structure across association categories, making Fishnets aggregation ideal for applicability beyond the training set.
We adopt the publicly-available PyTorch Geometric implementation of DeeperGCN (Li et al., 2020; Fey and Lenssen, 2019) and develop a drop-in Fishnets aggregator (in lieu of softmax aggregation) for the architecture.
We test five model architectures. As a benchmark we adopt the out-of-the-box 28-layer GCN network from (Li et al., 2020) with learned softmax aggregations and a hidden size of 64, and a smaller version of this model with hidden size 14 and 28 layers. We construct two shallower Fishnets GNNs, with 16 and 20 layers, each with 64 hidden units, and one small model with 14 hidden units and 14 layers. For each graph convolution aggregation, we adopt a "score" bottleneck of \(n_{p}=10\) for the large models and \(n_{p}=8\) for the small model. We train all networks with a cross-entropy loss over the same dataset and fixed random seed, using an adam optimizer with a fixed learning rate of \(0.001\). We incorporate an early stopping criterion conditioned on the validation data, which dictates an end to training (saturation) when the validation ROC-AUC metric stops increasing for \(\texttt{patience}=250\) epochs.
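The stopping rule amounts to a standard patience loop; a sketch in which train_one_epoch, evaluate_roc_auc, and the loaders are hypothetical placeholders, not library calls:

```python
best_auc, wait, patience = -float("inf"), 0, 250
for epoch in range(max_epochs):
    train_one_epoch(model, optimizer, train_loader)   # hypothetical helpers
    auc = evaluate_roc_auc(model, valid_loader)
    if auc > best_auc:
        best_auc, wait = auc, 0                       # new best: reset the counter
    else:
        wait += 1
        if wait >= patience:                          # validation ROC-AUC has saturated
            break
```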
**Results.** We display representative test ROC-AUC curves over training in Figure 7 and summarize numeric results in Table 2. Fishnets-16 and Fishnets-20 clearly saturate information within 250 epochs, to \(79.63\%\) and \(81.10\%\) accuracy, respectively. The small Fishnets model saturates to \(79.29\%\). The 28-layer GCN reaches the patience criterion at \(79.51\%\) accuracy only after 900 epochs. This small ablation study shows that by incorporating the more information-efficient Fishnets aggregation, we can achieve better or similar results than SOTA GNNs _with a fraction of the trainable parameters and training epochs_ (see Table 2).
### Modelling Uncertain Protein Associations
As a final test, we demonstrate Fishnets aggregation for graphs in a setting with uncertainties on the protein interaction strengths (edges), in order to demonstrate the robustness of the Fishnets approach to changes in the underlying data (noise) distribution on the graph features.
We model noisy "measurements" of the protein graph edge associations using a simple Binomial model: taking the dataset edges \(\mathbf{p}_{ij}=\mathbf{e}_{ij}\in[0,1]\) as the "true" association strengths, we simulate a noisy measurement of those quantities as \(N\) weighted coin tosses per edge, where \(N\) varies between measurements:
\[N\sim\mathcal{U}(20,200) \tag{14}\] \[\textbf{n}_{\text{success}}\sim\text{Binomial}\left(n=N,p=\textbf{p}_{ij}\right)\] (15) \[\hat{\textbf{p}}_{ij}=\textbf{n}_{\text{success}}/N\] (16) \[\textbf{e}_{ij}\leftarrow\left[\hat{\textbf{p}}_{ij},N\right]. \tag{17}\]
Note that in the last step the new graph edge now contains the (noisy) measured associations, as well as \(N\) (which provides a measure of uncertainty on those estimated interaction strengths). The GNN task is now to learn to reweight predictions conditioned on the provided \(N\) coin-toss information, much like feeding in \(\sigma\) in the linear regression case. We train a 28-layer GCN and a 20-layer Fishnets network on this simulated noisy version of the proteins dataset. For the test dataset, we alter the distribution for \(N\) to be a mixture of \(\mathcal{U}(20,50)\) and \(\mathcal{U}(170,200)\), such that we sample the extremes of the training distribution support.
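A minimal simulator for this noise model is given below (our sketch). We treat \(N\) as an integer count per edge, broadcast over the eight association categories; Eq. (14) writes a continuous uniform, so this discretization is an assumption.

```python
import numpy as np

def noisy_edges(p_true, n_lo=20, n_hi=200, rng=None):
    """Eqs. (14)-(17): replace each 'true' edge association p_ij by a
    Binomial coin-toss estimate p_hat, appending the toss count N."""
    rng = rng or np.random.default_rng()
    N = rng.integers(n_lo, n_hi + 1, size=(p_true.shape[0], 1))
    n_success = rng.binomial(N, p_true)       # broadcasts N over the categories
    return np.concatenate([n_success / N, N], axis=1)

def test_N(k, rng):
    """Shifted test-time N: sample from the extremes of the training support."""
    lo = rng.integers(20, 51, size=(k, 1))
    hi = rng.integers(170, 201, size=(k, 1))
    return np.where(rng.random((k, 1)) < 0.5, lo, hi)
```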
**Results.** We display test ROC-AUC curves for both networks in Figure 8, subject to a patience setting of 250 epochs on the validation set. The GCN framework exhibits an early plateau at \(64.71\%\) accuracy, while Fishnets saturates to \(71.98\%\) accuracy. This stark difference in behaviour can be explained by the difference in formalism: The Fishnets aggregation explicitly learns a weighting scheme as a function of measured edge probabilities _and_ the conditional information \(N\), much like the linear regression case where \(\sigma\) was passed as an input. This scheme helps to learn how to deal with edge-case noise artefacts like the noisy edge
Figure 7: Drop-in replacement Fishnets aggregation improves GNN benchmark performance. We show representative test ROC-AUC curves for the benchmark proteins datasets. The dashed purple line shows our best model, Fishnet-20’s saturation point. Fishnets architectures clearly saturates the information more quickly than GCNs with learned softmax aggregations.
test case. Explicitly specifying the inverse-Fisher weighting formalism as an inductive bias (Battaglia et al., 2018) during aggregation can help explain the fast information saturation exhibited in both graph test settings.
## 6 Discussion & Future Work
In this paper we built up an information-theoretic approach to optimal aggregation in the form of Fishnets. Through progressively non-trivial examples, we demonstrated that explicitly parameterizing the score and inverse-Fisher weights of set members results in an aggregation scheme that saturates Bayesian information in non-trivial problems, and also serves as an optimal aggregator for graph neural networks.
The stark improvement in information saturation on the proteins test dataset relative to architecture size and training efficiency indicates that the Fishnets aggregation acts as an information-level inductive bias for GNN aggregation.
Follow-up study is warranted on optimizing hyperparameter choices for graph neural network architectures using Fishnets. We chose to demonstrate improved information capture by using an ablation study of smaller models, but careful (and potentially bigger) network design would almost certainly improve results here and potentially achieve SOTA accuracy on common benchmarks.
## Code Availability
The method presented here was tested across frameworks in TensorFlow (Abadi et al., 2015), JAX (Bradbury et al., 2018), and PyTorch Geometric (Fey and Lenssen, 2019; Paszke et al., 2019). The code will be made public upon acceptance of the paper.
## Acknowledgments
T.L.M. acknowledges the Imperial College London President's Scholarship fund for support of this study, as well as the insightful discussions with Alan Heavens, Boris Leistedt, and Francisco Villaescusa-Navarro. The authors are members of the Simons Collaboration on "Learning the Universe". T.L.M. and J.A. acknowledge support under this collaboration. This work was done within the Aquila Consortium. J.A. is funded by the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101018897 CosmicExplorer). The Flatiron Institute is supported by the Simons Foundation.
|
2301.04182 | schlably: A Python Framework for Deep Reinforcement Learning Based
Scheduling Experiments | Research on deep reinforcement learning (DRL) based production scheduling
(PS) has gained a lot of attention in recent years, primarily due to the high
demand for optimizing scheduling problems in diverse industry settings.
Numerous studies are carried out and published as stand-alone experiments that
often vary only slightly with respect to problem setups and solution
approaches. The programmatic core of these experiments is typically very
similar. Despite this fact, no standardized and resilient framework for
experimentation on PS problems with DRL algorithms could be established so far.
In this paper, we introduce schlably, a Python-based framework that provides
researchers a comprehensive toolset to facilitate the development of PS
solution strategies based on DRL. schlably eliminates the redundant overhead
work that the creation of a sturdy and flexible backbone requires and increases
the comparability and reusability of conducted research work. | Constantin Waubert de Puiseau, Jannik Peters, Christian Dörpelkus, Hasan Tercan, Tobias Meisen | 2023-01-10T19:27:11Z | http://arxiv.org/abs/2301.04182v2 | # _schlably_: A Python Framework for Deep Reinforcement Learning Based Scheduling Experiments
###### Abstract
Research on deep reinforcement learning (DRL) based production scheduling (PS) has gained a lot of attention in recent years, primarily due to the high demand for optimizing scheduling problems in diverse industry settings. Numerous studies are carried out and published as stand-alone experiments that often vary only slightly with respect to problem setups and solution approaches. The programmatic core of these experiments is typically very similar. Despite this fact, no standardized and resilient framework for experimentation on PS problems with DRL algorithms could be established so far. In this paper, we introduce _schlably_, a Python-based framework that provides researchers a comprehensive toolset to facilitate the development of PS solution strategies based on DRL. _schlably_ eliminates the redundant overhead work that the creation of a sturdy and flexible backbone requires and increases the comparability and reusability of conducted research work.
keywords: Production Scheduling, Deep Reinforcement Learning, Python, Framework +
Footnote †: journal: SoftwareX
within one or more of these components, for example by incorporating a new DRL algorithm [10; 11; 12], interaction logic between agent and environment [6; 3], training procedure [13], learning objective [14; 12], or additional problem constraint [14; 15]. Despite the large overlaps, researchers implement their own individual experimentation frameworks, with two consequences: large initial ramp-up efforts when experimenting with new methodologies or custom problem settings, and a scarcity of empirical comparisons to other works. In this paper, we address these shortcomings and present the software framework _schlably_ for developing and evaluating DRL-based PS solutions. _schlably_ provides the following contributions:
* It is modular, so individual changes may be adapted without much overhead.
* It works out of the box with functioning environments, data-generation scripts, agents, logging functions, training, and testing scripts.
* It provides benchmark datasets for different scheduling problem classes and sizes.
* It includes widely recognized benchmark algorithms and results.
* It facilitates the application of algorithms designed for one problem class and size to other problems.
_schlably_ will accelerate the research area of DRL-based PS under real-world conditions by lowering the barrier of entry for researchers from different domains with different perspectives, levels of expertise, and objectives.
## 2 Background and Related Work
_schlably_ started as a code base for experiments on a real-world inspired scheduling problem in the context of a university research project with industrial partners. As such, several requirements became apparent early on that can be summarized in four general **design goals**. First, from the applied industrial perspective, _schlably_ has to offer the integration of DRL methods and heuristics which work out of the box. Second, it also has to cover different scheduling scenarios, e.g. including ones bounded by resource constraints. Third, from the scientific research perspective, _schlably_ should support detailed, comparable evaluations of methods. Lastly, it has to be easy to interact with at the code level, to enable students with limited experience to quickly understand the topic, concepts, and implementation. The implications of these design goals are discussed in this section.
In the following, we present an overview of related published experimental frameworks and compare them to our design goals in _schlably_. In the comparison, we include frameworks dedicated to being used by others [16, 17, 18, 19, 20] and frameworks published in a supplementary fashion along with research papers and projects [21, 22, 23, 24, 25, 26], because in practice both may serve as starting points for additional experiment designs. The frameworks were found via references in scientific publications and a search of "Scheduling Reinforcement Learning" on GitHub. We do not claim the list to be exhaustive but are not aware of any other popular frameworks at the time of writing this paper. Commercial or proprietary scheduling software was excluded because license fees and other accompanying challenges introduce a major barrier. However, we are not aware of any commercial software that provides the tight integration of DRL and PS. Table 2 provides an overview of the frameworks. In addition, we assessed them regarding their fulfillment of our design goals which are formally categorized into the following four groups.
Pre-Implemented Benchmarks. Several frameworks either provide pre-implemented DRL agents and scripts for training or the easy integration of agents from popular DRL libraries, e.g. from StableBaselines [27] or RLlib [28]. Like [20], our goal is to enable and facilitate both the manual extension of basic DRL algorithms, like Deep Q-Networks (DQN) [29] and Proximal Policy Optimization (PPO) [30], and the usage of powerful third-party libraries. Both options are important to empower users to choose the appropriate approach for their respective research interests and are therefore implemented in _schlably_. Additionally, for the sake of comparability, it is crucial to provide common benchmarks in the form of popular priority dispatching rules (PDRs), such as Shortest-Processing-Time-First, and a flexible optimal solver that can handle several scheduling problem types. _schlably_ provides these baselines, while most other frameworks cover only a few PDRs, often missing competitive ones, such as Most-Tasks-Remaining, and even random baselines, where agent actions are sampled from a normal distribution.
Scheduling Instance Generation. Generating new data with varying problem cases is necessary to enable comprehensive training and testing of a DRL agent. Accordingly, a suitable framework must implement a flexible problem instance generator. This generator enables the user to create scheduling problem instances of different popular categories (e.g. the Job Shop Scheduling Problem (JSSP) or the Flexible Job Shop Scheduling Problem (FJSSP) [1])
with any combination of instance variables, like the number of jobs, number of tasks, runtimes, and more. All combinations and values are drawn from uniform distributions. _schlably_ further extends these options by an optional resource constraint, in which each task requires a certain additional resource. These options are already implemented in the data generation and can be processed by the agents provided in _schlably_. Moreover, its design simplifies the integration of additional scheduling problem types to encourage the implementation of individual, more complex, or more specific use cases. Finally, to the best of our knowledge, this is the first framework providing an optional resource constraint. Concretely, the user is able to specify a required tool per operation.
Logging and Evaluation. Logging results and evaluation metrics in a structured manner is key for quick feedback during training runs, but also for identifying patterns in and drawing conclusions from large-scale experiments. Our objective with _schlably_ is to provide extensive logging options that may be turned on and off, and where results and models may be shared to promote collaboration on projects. This design goal is met to a large degree by [20], from which we took much inspiration in this regard. All other frameworks do not address this design goal. For the evaluation of solutions generated by DRL agents, a comprehensive framework should apply the benchmarks mentioned above and provide an overview of the overall performance. Most of the reviewed frameworks lack functionality in this aspect. Moreover, to get a graphical overview and to visually support the tracking of very specific actions in a production schedule, a Gantt chart plotter is useful for human inspection. The Gantt chart should display all metadata of operations, e.g. the runtime or required tool, and has been found to be helpful for debugging and evaluating DRL agents. Many other frameworks, but not all, include a Gantt chart plotter.
Code Usability. As usability is of utmost importance, a framework has to offer easy access through full documentation and must include a README, a user application programming interface (API) manual, and a formal functionality description. Within the reviewed frameworks, only [20] covers all criteria. To be usable with different skill sets, our explicit goal is to enable users to start experimenting with small design decisions by only using the configuration files, but at the same time facilitate substantial logical changes to routines and components by means of their own implementations. For that reason, we would favor comprehensibility over efficiency wherever a trade-off is unavoidable. This requires a careful balance. In our opinion, all other frameworks
overemphasize one side: [20] offers many functional changes through configuration files but at the cost of a comparatively complex software architecture. On the other hand, all other frameworks are much smaller and easier to get an overview of, at the expense of limited functionality. Lastly, a framework with a claim to widespread use should stick to conventional APIs. In the context of DRL, the most commonly used API is the OpenAI Gym [31] API. Only half of the reviewed frameworks adhere to it.
## 3 Software Architecture
This chapter describes our framework with a focus on implementation-specific details. We provide a general overview of the code itself, focusing on currently existing exemplary implementations while also pointing out open interfaces. Furthermore, we describe details regarding the main components of _schlably_ to demonstrate the realization of the design goals introduced in Chapter 2 and to enable users to fit _schlably_ to their needs. The overall structure is illustrated in Figure 1. We divided the code base into six main components, which are described below in detail. Following this component-oriented approach, and in combination with comprehensive code documentation, _schlably_ adheres to the objective of the fourth design goal, which requires easy interaction and usability at the code level.
Data generator. The general data structure of scheduling problems, as used in _schlably_, is represented by so-called instances. A user can generate arbitrarily many instances of a scheduling problem; each instance is a specific configuration and entity. The specific configuration contained within an instance is given by a number of jobs, with a job simply being an encompassing logical container consisting of individual tasks. The data_generator component incorporates the necessary classes to generate such an instance and the individual tasks. From the scheduling problem point of view, it is the centerpiece of the problem formulation and representation. The Task class is a specifically designed data class and its entities are the atomic units of a scheduling problem instance. Such an instance can be created via the SPFactory, which allows the generation of the different types of scheduling problems given via the included Enum. If users would like to introduce a new type of scheduling problem into _schlably_, they have to include their function in this class and add it to the Enum. Finally, the InstanceFactory enables high-level access to the problem factory class and manages the configuration-based creation of batches of instances. Thus, the data_generator component realizes the foundation of the second design
Figure 1: Overview of _schlably_ project and code structure.
goal, which requires the implementation and handling of different scheduling scenarios.
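As a sketch of this structure (the concrete fields of the real Task class differ; see the code documentation), an instance is a collection of Task entities grouped into jobs, created by a factory keyed on the problem-type Enum:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class SP(Enum):                      # scheduling problem types handled by the factory
    JSSP = "jssp"
    FJSSP = "fjssp"

@dataclass
class Task:
    """Atomic unit of an instance; the fields shown here are illustrative."""
    job_index: int                   # which job this task belongs to
    task_index: int                  # position within the job sequence
    machines: List[int]              # eligible machines (a single entry for JSSP)
    runtime: int
    tools: List[int] = field(default_factory=list)  # optional resource constraint
    done: bool = False

# an instance is then simply the list of all tasks of all jobs, e.g. as
# produced by a factory call: instance = sp_factory.generate(SP.JSSP, jobs=6, tasks=6)
```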
Environment. An environment defines the observation space, action space, and reward strategy. It thus represents a simulation of the agent's environment and interaction dynamics, and is the central piece of any DRL approach. All _schlably_ environments are included in the environments component. As examples, we provide a simple scheduling Env as well as a derived version named EnvIndirectAction to showcase the expandability. All environments adhere to the Gym API and are explicitly derived from a base Gym environment. The EnvironmentLoader class enables high-level access to and management of the different environment types and appropriate algorithms, as not all algorithms are feasible for every environment. New environments have to be included in this component and added to the managing EnvironmentLoader. This encapsulated approach, in conjunction with the data_generator component, represents the implementation of the second design goal.
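The following toy environment sketches what Gym conformance looks like in this setting; it is not schlably's actual Env (its observation layout, constraint handling, and reward differ), but it illustrates the job-selection interaction pattern:

```python
import numpy as np
import gym
from gym import spaces

class ToySchedulingEnv(gym.Env):
    """Minimal Gym-API environment: an action selects a job, whose next
    unscheduled task is then placed. State and reward are toy placeholders."""

    def __init__(self, num_jobs=6, tasks_per_job=6, max_runtime=10, seed=0):
        super().__init__()
        self.rng = np.random.default_rng(seed)
        self.num_jobs = num_jobs
        self.tasks_per_job = tasks_per_job
        self.max_runtime = max_runtime
        self.action_space = spaces.Discrete(num_jobs)
        self.observation_space = spaces.Box(0.0, 1.0, shape=(num_jobs,), dtype=np.float32)

    def reset(self):
        self.remaining = self.rng.integers(1, self.tasks_per_job + 1, self.num_jobs)
        return self._obs()

    def step(self, action):
        reward = 0.0
        if self.remaining[action] > 0:
            self.remaining[action] -= 1                   # schedule that job's next task
            reward = -float(self.rng.integers(1, self.max_runtime + 1))  # toy penalty
        done = bool((self.remaining == 0).all())
        return self._obs(), reward, done, {}

    def _obs(self):
        return (self.remaining / self.tasks_per_job).astype(np.float32)
```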
Agent. The agents component combines the heuristic functionalities, the solver, and implementations of DRL algorithms, as well as the train and test functions for the DRL approach. Users can integrate functionalities from other DRL frameworks, like more extensive training procedures, model types, and learning algorithms, via pre-defined interfaces. As such, the agents component realizes the first design goal, to support and simplify the integration of out-of-the-box methods as well as pre-implemented benchmarks.
Visual generator. The component visuals_generator incorporates all classes and scripts used to create visualizations of the problem instances and generated solutions. These functionalities are intentionally isolated, as different scheduling problem environments can still share the same visualization approach. _schlably_, for example, introduces a GanttChartPlotter that enables a user to generate individual Gantt chart images (see Figure 2b) or create a GIF of the scheduling progress. Thus, it is part of the implementation of the third design goal. The module is intended for debugging and visual analysis and can currently only display Gantt charts of problem sizes smaller than 8x8 because of limitations of the used library (the exact limit depends on the number of tasks and processing times). However, we believe that visualizations become less useful for larger problem sizes, because there are too many blocks and colors to gain an overview.
Utils. The utils component aggregates classes and functions which have a supporting character for the main functionalities of _schlably_. Specifically, it includes user interface components (ui_tools), data interface components (file_handler), e.g. to load and save data, and the high-level Logger class. Accordingly, the utils component realizes the third design goal, facilitating logging and evaluations for comparisons.
Code tests. All code tests that ensure the crucial functionality of the described components are collected in the code_tests component. Up to this point, we have included multiple unit tests with a central Runner. These are also intended as an example for users who plan to extend the code base.
## 4 Illustrative Example
To illustrate a typical use case, we consider a scenario in which an ML engineer wants to compare the learning behavior of two PPO agents that interact with the implemented environment. It is also part of our tutorial in the documentation. One agent is trained on 6x6 JSSP instances and receives a reward based on the change in the time to complete all tasks (i.e. the makespan) per step, as proposed in [3]. This setup is also the default setting delivered with the framework. The other one is trained on 3x4 tool-constrained JSSP instances and receives a zero reward per step, with the exception of the last step, where the reward is equal to the overall achieved makespan. Both agents interact with the default environment in the same way: in each step, the agent chooses among the next unscheduled tasks of the job sequences. The chosen task is then integrated into the current schedule by scheduling it at the earliest time possible in accordance with the constraints and without shifting already scheduled tasks. The remaining training parameters are kept constant. The second training requires only minimal manual changes to the base model. These include setting different configuration parameters, generating new data, and changing the reward function in the base environment (see the sketch below); details may be found in the documentation. The integrated interface to Weights&Biases [32] makes it easy to compare the training curves and achieved results, as depicted in Figure 2.
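The two reward strategies can be contrasted in a few lines. This is a sketch with hypothetical helper names (compute_makespan and all_tasks_scheduled are placeholders, not schlably's API), written with a negative sign so that shorter schedules score higher; the sign convention inside schlably may differ:

```python
def dense_reward(self):
    """Default-style reward: the (negative) change in makespan per step [3]."""
    new_makespan = self.compute_makespan()
    r = self.last_makespan - new_makespan
    self.last_makespan = new_makespan
    return r

def sparse_reward(self):
    """Zero at every step except the last, which returns the achieved makespan."""
    if self.all_tasks_scheduled():
        return -self.compute_makespan()
    return 0.0
```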
The described short example reflects several of our design goals. Figure 2c) demonstrates that the agents' performance is automatically compared to many other benchmarks and with respect to different dimensions such as the reward or the gap to the optimal solver. The continuous logging and graphical depiction are visible in Figure 2a) and b). The example also showcases our understanding of high code usability. The experiments could
Figure 2: Comparing agent runs in Weights&Biases (screenshot from the web interface shown on the left-hand side). a) Visualized training curves for interpreting the learning performance of the agent. b) Gantt chart depicting the solution of the trained agent on a selected test instance. c) Table providing evaluation results and comparison of the trained agents and benchmark methods on the test instances.
be defined by changing training parameters (only a few lines in configuration files) and minimal intended changes to the source code. Examples of the most common changes which are intended to be coded are explained in more detailed follow-along tutorials in the provided documentation.
## 5 Impact
_schlably_ is useful for the entire community around PS with DRL. Compared to other frameworks, it is particularly useful to reduce the entry barrier for researchers from the OR or other related domains, who want to empirically explore a new methodology for scheduling problems, and for DRL researchers who want to test a new algorithm on a challenging and impactful problem domain. We believe that the seamless interchangeability of problem settings offered by _schlably_ will also encourage researchers in the domain of PS with DRL to try out methodologies applied to one particular problem setting (e.g. 6x6 JSSP) on different problem settings (e.g. 11x11 tool-constrained JSSP). This has the potential to greatly speed up the transfer of research from academic problems to real-world problems.
In several projects where our test partners and we have used _schlably_, it has significantly increased the throughput of experiments. This is achieved because new methodological ideas can be integrated more quickly and the results of experiments can be compared more easily. _schlably_ facilitates the generation of new problem instances and the training and evaluation of custom DRL agents. Due to the various pre-implementations in the framework, such as training and testing routines, well-known scheduling benchmarks, and visualization of logged results, it is much easier to conduct experimental research in DRL for PS. In addition, collaboration has become more effective because design changes can be compared easily and the results of peers can be viewed online through Weights&Biases. We have further experienced a substantial increase in productivity in research projects where new researchers and university students, who had no prior domain knowledge and little coding experience, had to conduct experiments in the PS domain. This we mainly attribute to the code documentation and modular structure, but also to the fact that _schlably_ is written entirely in Python and therefore runs on all relevant operating systems.
## 6 Discussion and limitations
In its current state, _schlably_ serves as a useful framework for empirical DRL-based PS research. It has reached a maturity level, at which it works out-of-the-box and, to the best of our knowledge, offers the broadest
range of different easy-to-implement design choices compared to any published framework. _schlably_, on the one hand, is intended to be abstract and modular enough to offer different instance generation, training, and testing configurations without many lines of code. On the other hand, its code structure is designed not to be so interwoven as to hinder the extension with fundamentally different features that experts might find desirable. As such, the development required a balancing act and certain compromises, which some may see as limitations. For example, one deliberate choice was made in favor of a class-based problem description as opposed to a vector representation. The class-based description simplifies the search for and usage of certain information about the current state of jobs and increases code readability compared to a vector problem representation. Hence, the choice between readability and computational efficiency was made in favor of the former.
## 7 Conclusions
In this paper, we introduced _schlably_, a software framework for research on DRL-based PS. With the release of the framework, we strive towards two main goals: the first is to lower the entry barrier for researchers, who have little experience with production scheduling, deep reinforcement learning (DRL) and/or coding. The second goal is to encourage researchers already active in the field to apply and test their methods on other problem settings, which is largely facilitated by _schlably_. Both goals aim at promoting the transfer of DRL methods to real-world scheduling applications. In the future, we plan to include more problem settings, such as the dynamic JSSP and stochastic properties of environments like machine breakdowns to get even closer to real-world scenarios.
## 8 Conflict of Interest
We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
## Acknowledgements
This research work was undertaken within the research project AlphaMES funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK). |
2305.09151 | Non-periodic input-driven magnetization dynamics in voltage-controlled
parametric oscillator | Input-driven dynamical systems have attracted attention because their
dynamics can be used as resources for brain-inspired computing. The recent
achievement of human-voice recognition by spintronic oscillator also utilizes
an input-driven magnetization dynamics. Here, we investigate an excitation of
input-driven chaos in magnetization dynamics by voltage controlled magnetic
anisotropy effect. The study focuses on the parametric magnetization
oscillation induced by a microwave voltage and investigates the effect of
random-pulse input on the oscillation behavior. Solving the
Landau-Lifshitz-Gilbert equation, temporal dynamics of the magnetization and
its statistical character are evaluated. In a weak perturbation limit, the
temporal dynamics of the magnetization are mainly determined by the input
signal, which is classified as input-driven synchronization. In a large
perturbation limit, on the other hand, chaotic dynamics are observed, where the
dynamical response is sensitive to the initial state. The existence of chaos is
also identified by the evaluation of the Lyapunov exponent. | Tomohiro Taniguchi | 2023-05-16T04:10:26Z | http://arxiv.org/abs/2305.09151v1 | # Non-periodic input-driven magnetization dynamics in voltage-controlled parametric oscillator
###### Abstract
Input-driven dynamical systems have attracted attention because their dynamics can be used as resources for brain-inspired computing. The recent achievement of human-voice recognition by spintronic oscillator also utilizes an input-driven magnetization dynamics. Here, we investigate an excitation of input-driven chaos in magnetization dynamics by voltage controlled magnetic anisotropy effect. The study focuses on the parametric magnetization oscillation induced by a microwave voltage and investigates the effect of random-pulse input on the oscillation behavior. Solving the Landau-Lifshitz-Gilbert equation, temporal dynamics of the magnetization and its statistical character are evaluated. In a weak perturbation limit, the temporal dynamics of the magnetization are mainly determined by the input signal, which is classified as input-driven synchronization. In a large perturbation limit, on the other hand, chaotic dynamics are observed, where the dynamical response is sensitive to the initial state. The existence of chaos is also identified by the evaluation of the Lyapunov exponent.
keywords: spintronics, chaos, input-driven dynamical system, voltage controlled magnetic anisotropy effect

Journal: Journal of Magnetism and Magnetic Materials
## 1 Introduction
After the successful reports on human-voice recognition by spin-torque oscillator [1], associative memory operation by three-terminal magnetic memory [2], and pattern recognition by an array of spin-Hall oscillators [3] in 2017, the application of spintronics technology to emerging computing has
become an exciting topic in magnetism [4; 5]. The works bridge the research field to the others such as computer science, statistical physics, and nonlinear science. Among them, the input-driven dynamical theory [6] has gained great attention because most models related to emerging computing, such as machine learning and robotics, are input-driven. For example, the human-voice recognition task can be solved using spin-torque oscillator [1] if there is one-to-one correspondence between the input electric voltage, converted from human voice, and the output power originated from nonlinear magnetization dynamics. The correspondence as such is classified as input-driven synchronization [7; 8; 9; 10; 11; 12], where the dynamical output from the oscillator is solely determined by the input data and is independent of the initial state of the magnetization; therefore, by learning the correspondence, the system can recognize the input data. Another example of the input-driven dynamics is chaos, which has a sensitivity to the initial state and has been found in brain activities and artificial neural networks [13; 14]. Contrary to the input-driven synchronization in magnetization dynamics [1; 15; 16; 17; 18; 19; 20], however, the input-driven chaotic dynamics in spintronics devices have not been fully investigated yet [20].
The input-driven magnetization synchronization has been mainly studied in spin-torque oscillators [1; 15; 16; 17; 18; 19; 20], where electric current drives the dynamics. From the viewpoint of energy-saving computing, it would be preferable to drive magnetization dynamics by the voltage controlled magnetic anisotropy (VCMA) effect [21; 22; 23; 24; 25; 26; 27; 28; 29]. The VCMA effect arises from the modification of electron states [24; 25] and/or the induction of magnetic moment [29] near the ferromagnetic/insulator interface by an application of electric voltage, and is expected to provide a low-power writing scheme in magnetoresistive random access memory. A recognition task of a random input signal using the relaxation dynamics of the magnetization caused by the VCMA effect was reported recently [30]. Recall that recognition tasks are solved in terms of input-driven synchronization. In such circumstances, it is of interest to investigate the possibility of inducing input-driven chaos in magnetization dynamics manipulated by the VCMA effect.
In this work, we propose a method to excite input-driven chaotic magnetization dynamics in a parametric oscillator maintained by a microwave VCMA effect. Note that the relaxation dynamics of the magnetization caused by a direct VCMA effect may not be suitable for inducing chaos because the dynamics saturate to a fixed point, while chaos, on the other hand, must be sustained. To overcome the issue, we focus on the parametric magnetization oscillation caused by a microwave VCMA effect, which was recently demonstrated experimentally [31; 32]. Specifically, we study the modulation of the parametric oscillation caused by the injection of an input signal by solving the Landau-Lifshitz-Gilbert (LLG) equation. It is shown that the magnetization dynamics in the presence of a random input signal become sensitive to the initial state, indicating the appearance of input-driven chaos. The appearance of chaos is also investigated by evaluating the Lyapunov exponent.
## 2 Temporal dynamics
Here, we show the temporal dynamics of the magnetization in the presence of time-dependent inputs.
### Parametric oscillation
Figure 1: (a) Schematic illustration of a magnetic multilayer. The unit vector pointing in the magnetization direction in the free layer is denoted as \(\mathbf{m}\). An external magnetic field \(H_{\mathrm{appl}}\) is applied in the \(x\) direction. In the parametric oscillation state, the magnetization rotates around the \(x\) axis, as schematically shown by the yellow arrow. (b) Time evolution of \(m_{x}\) in the presence of a microwave voltage. The horizontal axis represents the ratio of the frequency \(f\) of the voltage with respect to the Larmor frequency \(f_{\mathrm{L}}\). (c) Examples of \(m_{x}\) (red) and \(m_{z}\) (black) in steady states. The solid and dotted lines correspond to the microwave frequency of \(f=2.0f_{\mathrm{L}}\) and \(f=2.5f_{\mathrm{L}}\), respectively.

Figure 1(a) shows a schematic view of a ferromagnetic multilayer consisting of free and reference layers separated by a thin nonmagnetic spacer. The unit vector pointing in the magnetization direction in the free layer is denoted as \(\mathbf{m}\). The \(z\) axis is normal to the film plane. It was experimentally confirmed [31] that the magnetization dynamics driven by the VCMA effect are well described by the macrospin LLG equation,
\[\frac{d{\bf m}}{dt}=-\gamma{\bf m}\times{\bf H}+\alpha{\bf m}\times\frac{d{\bf m} }{dt}, \tag{1}\]
where \(\gamma\) and \(\alpha\) are the gyromagnetic ratio and the Gilbert damping constant, respectively. The magnetic field \({\bf H}\) consists of the in-plane external magnetic field \(H_{\rm appl}\) and the perpendicular magnetic anisotropy field \(H_{\rm K}\) as [31]
\[{\bf H}=H_{\rm appl}{\bf e}_{x}+H_{\rm K}m_{z}{\bf e}_{z}, \tag{2}\]
where \({\bf e}_{i}\) (\(i=x,y,z\)) is the unit vector in the \(i\)-direction and we assume that the external magnetic field points to the \(x\) direction. The values of the parameters are similar to those used in Refs. [31, 32], where \(\gamma=1.764\times 10^{7}\) rad/(Oe s), \(\alpha=0.005\), and \(H_{\rm appl}=720\) Oe. Note that, when \(H_{\rm appl}\) and \(H_{\rm K}\) are constants, the magnetization dynamics described by Eq. (1) are relaxation dynamics towards the minima of the energy density \(E=-M\int d{\bf m}\cdot{\bf H}\), i.e., the magnetization saturates to a fixed point. Therefore, to excite sustainable dynamics such as an oscillation or chaos, \(H_{\rm appl}\) and/or \(H_{\rm K}\) should be time-dependent.
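To make the setup above concrete, the following minimal Python sketch integrates the macrospin LLG equation (1) with the field of Eq. (2), using the parameter values just quoted. It is an illustration only: the fixed-step RK4 scheme, the renormalization of \(\mathbf{m}\), and all function names are our own choices and not taken from the paper.

```python
import numpy as np

GAMMA = 1.764e7   # gyromagnetic ratio [rad/(Oe s)]
ALPHA = 0.005     # Gilbert damping constant
H_APPL = 720.0    # in-plane external field [Oe]

def llg_rhs(m, h_k):
    """Explicit (Landau-Lifshitz) form of Eq. (1) for the field of Eq. (2)."""
    H = np.array([H_APPL, 0.0, h_k * m[2]])   # Eq. (2), with anisotropy field h_k
    mxH = np.cross(m, H)
    # Solving dm/dt = -gamma m x H + alpha m x dm/dt for dm/dt gives:
    return -GAMMA / (1.0 + ALPHA**2) * (mxH + ALPHA * np.cross(m, mxH))

def integrate_llg(m0, h_k_of_t, t_end, dt=1e-12):
    """Fixed-step RK4 integration; dt = 1 ps, as in the paper."""
    m, t = np.array(m0, float), 0.0
    traj = [m.copy()]
    while t < t_end:
        k1 = llg_rhs(m, h_k_of_t(t))
        k2 = llg_rhs(m + 0.5 * dt * k1, h_k_of_t(t + 0.5 * dt))
        k3 = llg_rhs(m + 0.5 * dt * k2, h_k_of_t(t + 0.5 * dt))
        k4 = llg_rhs(m + dt * k3, h_k_of_t(t + dt))
        m = m + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        m /= np.linalg.norm(m)   # re-normalize to enforce |m| = 1
        t += dt
        traj.append(m.copy())
    return np.array(traj)
```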
Let us first show the parametric oscillation of the magnetization [31, 32]. Before applying voltage, the magnetic anisotropy field \(H_{\rm K}\) has a value determined by the competition between the shape and interfacial magnetic anisotropy fields [33, 34, 35]. Next, both direct and microwave voltages are applied, which turn the magnetic anisotropy field into \(H_{\rm K}=H_{\rm Kd}+H_{\rm Ka}\sin(2\pi ft)\) via the VCMA effect, where \(H_{\rm Ka}\) and \(f\) are the amplitude and frequency of the microwave component of the VCMA field. For simplicity, we assume that the direct component \(H_{\rm Kd}\) of \(H_{\rm K}\) in the presence of the VCMA effect is zero [32], while \(H_{\rm Ka}=100\) Oe. Note that the value of \(H_{\rm Ka}/H_{\rm appl}\) should be larger than \(2\alpha\) to excite a sustainable oscillation [32]. Figure 1(b) shows the time evolution of \(m_{x}\) for various \(f\). The magnetization basically saturates to the fixed point \(m_{x}=+1\) due to the relaxation to the direction of the external magnetic field. An exception occurs when the input frequency \(f\) is close to \(2f_{\rm L}\), where \(f_{\rm L}=\gamma H_{\rm appl}/(2\pi)\) is the Larmor precession frequency. In this case, \(m_{x}\) initially oscillates around \(m_{x}=0\) and finally tends to \(m_{x}\simeq 0\). Figure 1(c) summarizes the time evolution of \(m_{x}\) (red) and \(m_{z}\) (black) for \(f=2.0f_{\rm L}\) (solid) and \(2.5f_{\rm L}\) (dotted). A steady precession is excited for \(f=2f_{\rm L}\), where the magnetization oscillates almost in the \(yz\) plane (\(m_{x}\simeq 0\)); see also Appendix A showing the spatial trajectory of the oscillation. Since the input frequency is two times larger than the Larmor precession frequency, the oscillation is classified as a parametric oscillation.
### Input-driven dynamics
Next, let us consider the input-driven dynamics. The microwave voltage inducing the parametric oscillation is itself one kind of input signal. In fact, it causes a synchronized motion of the magnetization with respect to the microwave voltage, where the relative phase between them saturates to one of two stable values [32]; see also Appendix A. Multistability and chaotic behavior were also found very recently [36]. Such periodic input-driven dynamics have been studied for a long time [37]. Note, however, that the input signal used in emerging computing is often non-periodic, as in the case of human voice. A main focus of recent input-driven dynamical theory [6] is to study whether the dynamical response caused by a non-periodic input is solely determined by the input signal or depends on the initial state of the physical system. The former case is input-driven synchronization. In the latter case, the dynamics may correspond to input-driven chaos.
Figure 2: (a) Time evolution of the difference of two solutions of Eq. (1) with different initial conditions. (b) Examples of the uniformly distributed random input signal \(r_{k}\). (c) Time evolution of \(m_{x}\). The random input signal is injected from \(t=5.0\)\(\mu\)s. The strength and the pulse width of the random input signal are \(\nu=0.8\) and \(t_{\rm p}=2.0\) ns, respectively.

Therefore, there are two requirements for studying the input-driven dynamics. First, it is necessary to compare the solutions of Eq. (1) with different initial conditions. Second, a non-periodic input signal should be added to the VCMA effect. For the first requirement, we prepare natural initial conditions in the absence of the VCMA effect from the thermal equilibrium distribution [12]; see Appendix B. We solve the LLG equations for these initial conditions with \(H_{\rm K}=H_{\rm Ka}\sin(2\pi ft)\) and \(f=2f_{\rm L}\) from \(t=0\) to \(t=5.0\)\(\mu\)s, where the non-periodic input is not yet injected. For convenience, let us denote two solutions of Eq. (1) with slightly different initial conditions as \({\bf m}_{1}\) and \({\bf m}_{2}\). Figure 2(a) shows the time evolution of their difference, \(|{\bf m}_{1}-{\bf m}_{2}|=\sqrt{(m_{1x}-m_{2x})^{2}+(m_{1y}-m_{2y})^{2}+(m_{1z}-m_{2z})^{2}}\), in the presence of a microwave voltage. The difference decreases with increasing time because the microwave voltage tends to fix the phase of the magnetization oscillation [32]. Simultaneously, we should note that a tiny difference still remains because the phase fixing by the microwave voltage is achieved only in the limit of \(t\rightarrow\infty\). Next, for the second requirement, we add a uniformly distributed random-pulse number \(r_{k}\) (\(-1\leq r_{k}\leq 1\)) as the input signal, of the kind used in recognition tasks of physical reservoir computing [5; 20]. The suffix \(k\) represents the order of the input signal. Thus, from \(t=5.0\)\(\mu\)s, the magnetic anisotropy field becomes
\[H_{\rm K}=H_{\rm Ka}\left(1+\nu r_{k}\right)\sin(2\pi ft), \tag{3}\]
where the frequency \(f\) is fixed to \(2f_{\rm L}\). The dimensionless parameter \(\nu\) determines the modulation of VCMA effect by the input signal. Figure 2(b) shows an example of the random input signal \(r_{k}\), where the pulse width is 2.0 ns. The input signal modulates the magnetic anisotropy field and induces complex dynamics of the magnetization, as shown in Fig. 2(c), where \(\nu\) is 0.8.
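A sketch of how the drive of Eq. (3) could be generated numerically is given below, reusing the constants of the previous sketch. The pulse bookkeeping (the function `make_h_k`, the random seed, the number of pulses) is our own illustrative choice, not taken from the paper.

```python
import numpy as np

GAMMA, H_APPL, H_KA = 1.764e7, 720.0, 100.0   # as in the sketch above
F_L = GAMMA * H_APPL / (2 * np.pi)            # Larmor frequency f_L

def make_h_k(nu, t_p=2.0e-9, t_start=5.0e-6, n_pulses=10**4, seed=0):
    """Time-dependent anisotropy field of Eq. (3): microwave drive at f = 2 f_L,
    amplitude-modulated by random pulses r_k of width t_p from t = t_start."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(-1.0, 1.0, n_pulses)      # uniformly distributed, -1 <= r_k <= 1
    def h_k_of_t(t):
        if t < t_start:
            mod = 1.0                          # pure parametric drive before injection
        else:
            k = min(int((t - t_start) / t_p), n_pulses - 1)
            mod = 1.0 + nu * r[k]              # modulation by the k-th pulse
        return H_KA * mod * np.sin(2 * np.pi * (2 * F_L) * t)
    return h_k_of_t
```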
Figure 3: Difference \(|{\bf m}_{1}-{\bf m}_{2}|\) of the solutions of the LLG equation with slightly different initial conditions for (a) \(\nu=0.2\) and (b) \(\nu=0.8\). The insets show temporal dynamics of \(m_{1x}\) and \(m_{2x}\).

Now let us investigate the sensitivity of the magnetization dynamics with respect to the initial state. Figure 3(a) shows the temporal difference between two solutions of Eq. (1) with different initial conditions, where \(\nu=0.2\). As mentioned, for \(t\leq 5.0\)\(\mu\)s, only the microwave voltage is applied, and the difference tends to be zero. There is, however, still a tiny difference, as shown in Fig. 2(a). This difference can be regarded as the difference given to the initial state for the dynamics in the presence of the random input signal. Note that, even after the injection of the random input signal from \(t=5.0\)\(\mu\)s, the difference remains negligible in this weak (\(\nu=0.2\)) perturbation limit. The result indicates that the synchronization caused by the microwave VCMA effect is maintained. The conclusion can be verified from a different viewpoint shown in the inset of Fig. 3(a), where the two solutions of the LLG equation almost overlap. However, when the strength of the random input signal becomes as large as \(\nu=0.8\), the tiny difference at \(t=5\)\(\mu\)s is enlarged due to the excitation by the random input, as shown in Fig. 3(b). Recall that the LLG equation conserves the norm of the magnetization, \(|{\bf m}|=1\); therefore, the maximum value of the difference between two solutions is 2, at which the two magnetizations point in opposite directions. Therefore, the difference shown in Fig. 3(b), which is larger than 1, is regarded as non-negligible. The temporal dynamics of the two solutions shown in the inset of the figure also indicate that the synchronization caused by the microwave VCMA effect is broken. These results indicate that, although the difference of the two solutions at \(t=5\)\(\mu\)s is negligibly small, as shown in Fig. 2(a), it is expanded by the injection of the random-pulse input signal. In other words, the dynamics are sensitive to the difference at \(t=5\)\(\mu\)s. Such a sensitivity implies that the dynamics in Fig. 3(b) are chaotic.
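Combining the two sketches above, the divergence test of Fig. 3 can be reproduced schematically as follows. This is a usage example with illustrative parameters: the initial states, the offset size, and the short integration window are our own choices (and the random pulses are injected from \(t=0\) here to keep the run time small, rather than from \(t=5\)\(\mu\)s as in the paper).

```python
import numpy as np

h_k = make_h_k(nu=0.8, t_start=0.0)          # strong random modulation from t = 0
m0a = np.array([0.0, 0.1, np.sqrt(1.0 - 0.01)])
m0b = m0a + np.array([0.0, 1.0e-5, 0.0])     # tiny initial offset
m0b /= np.linalg.norm(m0b)

traj_a = integrate_llg(m0a, h_k, t_end=1.0e-7)
traj_b = integrate_llg(m0b, h_k, t_end=1.0e-7)
dist = np.linalg.norm(traj_a - traj_b, axis=1)   # |m1 - m2| at each time step
```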
### Validity of parameters
We note that the values of the parameters used in this work are in a reasonable range realized in experiments. The perpendicular magnetic anisotropy energy density, \(K\), consists of the bulk magnetic anisotropy energy density \(K_{\rm V}\), the interfacial magnetic anisotropy energy \(K_{\rm i}\), and the contribution from the VCMA effect, combined as \(Kd=K_{\rm V}d+K_{\rm i}-\eta\mathscr{E}\), where \(d\) is the thickness of the free layer. The electric field \(\mathscr{E}\) relates to the voltage \(V\) via \(\mathscr{E}=V/d_{\rm I}\), where \(d_{\rm I}\) is the thickness of the insulating barrier. In typical magnetic multilayers, where the free layer and insulating barrier are CoFeB and MgO, respectively, \(K_{\rm i}\) is the dominant contribution to \(K\), and its value increases with increasing Fe composition [33]. It can reach the order of 1.0 mJ/m\({}^{2}\), which corresponds to, typically, on the order of 1 T in terms of the magnetic field \(2K_{\rm i}/(Md)\), where \(M\) is the saturation magnetization and is about 1000 emu/cm\({}^{3}\). On the other hand, the VCMA efficiency \(\eta\) reaches 300 fJ/(Vm) [38; 39]. Given typical values of the thickness of the insulating barrier (about 2.5 nm) and applied voltage (0.5 V at maximum) [40], the tunable range of the magnetic anisotropy field by voltage is on the order of 1.0 kOe at maximum. Combining these values, it is possible to generate an oscillating component of the magnetic anisotropy field on the order of 100 Oe. It should also be noted that a series of random-pulse input signals with pulse widths of nanoseconds was applied to magnetic multilayers in experiments on physical reservoir computing [15; 16]. Therefore, the proposal made here can be examined experimentally.
### Comment on LLB equation
The results shown in this work are derived by solving the LLG equation. There is another equation of motion, the Landau-Lifshitz-Bloch (LLB) equation, describing the magnetization dynamics. Here, let us briefly mention their differences.
The LLG equation assumes the conservation of the magnetization magnitude, i.e., \(|{\bf m}|=1\), which is valid at temperatures sufficiently below the Curie temperature. The relaxation of the magnetization is characterized by the dimensionless damping parameter \(\alpha\). Note that the number of independent variables in the LLG equation is two, although the vector \(\mathbf{m}\) has three components in Cartesian coordinates. This is because the condition \(|\mathbf{m}|=1\) acts as a constraint and reduces the number of independent variables. On the other hand, the LLB equation does not conserve the magnetization magnitude, and is valid at high temperature. There are two parameters, the longitudinal and transverse relaxation times, characterizing the magnetization relaxation. The number of independent variables is three in the LLB equation.
We should note that chaos appears in a high-dimensional system. In fact, chaos is prohibited in a dynamical system whose dimension is less than or equal to two, according to the Poincaré-Bendixson theorem. Therefore, chaos might be more easily excited in a system described by the LLB equation than in one described by the LLG equation. However, since the number of parameters describing the relaxation is different between the two equations, it is difficult to compare chaos in them on an equal footing. We therefore leave chaos in the LLB equation for future study.
## 3 Statistical analysis of Lyapunov exponent
In Sec. 2, we studied the existence of chaos through the temporal dynamics. To identify chaos from a different perspective, here we evaluate the Lyapunov exponent.
Figure 4: Lyapunov exponent as a function of the dimensionless input strength \(\nu\).

The Lyapunov exponent is an expansion rate of the difference between two solutions of an equation of motion with slightly different initial conditions. The Lyapunov exponent is negative when the solution saturates to a fixed point. Input-driven synchronization is an example of dynamics with a negative Lyapunov exponent because the temporal dynamics are solely determined by the input signal and independent of the initial condition. When the Lyapunov exponent is zero, the difference remains constant. An example of dynamics corresponding to a zero Lyapunov exponent is a limit-cycle oscillation. The corresponding dynamics thus depend on the initial state but are not chaotic. When the Lyapunov exponent is positive, the difference is expanded and thus the dynamics are sensitive to the initial state. A positive Lyapunov exponent indicates the existence of chaos. Note that the sensitivity to the initial state is a necessary condition for chaos but not a sufficient one because dynamics with a zero Lyapunov exponent also depend on the initial state. The evaluation of the Lyapunov exponent thus becomes a measure of chaos because its sign provides evidence of chaos. Here, we evaluate the Lyapunov exponent by the Shimada-Nagashima method [41], where the exponent is defined as
\[\Lambda=\lim_{N\rightarrow\infty}\frac{1}{N\Delta t}\sum_{i=1}^{N}\ln\frac{ \mathscr{D}}{\epsilon}, \tag{4}\]
where \(\Delta t\) is the time increment of the LLG equation and is 1 ps in this work. In the Shimada-Nagashima method, the solution of an equation of motion at a certain time \(t_{0}\) is shifted by a tiny distance \(\epsilon\) in phase space. Then, the original and shifted solutions are evolved from \(t=t_{0}\) to \(t=t_{0}+\Delta t\) by the equation of motion. The distance between these solutions at \(t=t_{0}+\Delta t\) is \(\mathscr{D}\). If \(\mathscr{D}/\epsilon<(>)1\), the difference given at the time \(t_{0}\) shrinks (expands), and thus the temporal Lyapunov exponent is negative (positive). The Lyapunov exponent is a long-time average of such temporal Lyapunov exponents, as implied by Eq. (4); see also Appendix C for details. Figure 4 summarizes the Lyapunov exponent as a function of the strength of the input signal, \(\nu\). For small \(\nu\), the Lyapunov exponent is negative, indicating that the dynamical state of the magnetization is determined by the input signal and is insensitive to the initial state. The Lyapunov exponent changes its sign around \(\nu=0.5\) and becomes positive for large \(\nu\), indicating that the dynamics become sensitive to the initial state. The positive Lyapunov exponents provide further evidence of the appearance of input-driven chaos in the parametric oscillator.
## 4 Conclusion
In conclusion, the input-driven magnetization dynamics in the parametric oscillator were studied by solving the LLG equation. The microwave voltage induces a sustainable oscillation of the magnetization around an external magnetic field through the VCMA effect. Adding a non-periodic input signal changes the dynamical behavior, depending on its magnitude. In a weak perturbation limit, the temporal dynamics of the magnetization are determined by the input signal and are insensitive to the initial state. On the other hand, in a large perturbation limit, the dynamics become sensitive to the initial state. Such chaotic behavior was revealed by comparing two solutions of the LLG equation with different initial conditions. The evaluation of the Lyapunov exponent also identified the appearance of chaos in the magnetization dynamics.
The existence of chaos in input-driven spintronics systems will be of interest for emerging computing technologies. For example, it has been empirically shown that the computing performance of physical reservoir computing is maximized at the edge of chaos [42; 43], although this does not appear to be a general conclusion [5]. Therefore, a tunability of the dynamical state in physical systems is required for an enhancement of the computing capability. The result in Fig. 4 shows, for example, that the dynamical state of spintronics devices can be tuned between input-driven synchronization and chaos by tuning the input strength. As emphasized in Sec. 2.3, the values of the parameters used in this work are in a reasonable range available in experiments, and therefore, the results in this work will provide a direction to design emerging computing devices based on spintronics technologies. The input-driven chaotic magnetization dynamics might also have further applications because chaos was found in brain activities [14] and in theoretical models emulating the neural dynamics of squid [13]. Developing the present results towards brain-inspired computing will therefore be an interesting future work.
## Acknowledgments
The work is supported by JSPS KAKENHI Grant Number 20H05655.
## Appendix A Parametric oscillation by microwave voltage
In the main text, two time-dependent inputs are added to the magnetic anisotropy field. One is a microwave voltage and the other is a sequence of uniformly distributed random numbers. The former induces a parametric oscillation [31]. Figure A.5(a) shows the spatial trajectory of the magnetization oscillation in a steady state. As mentioned in the main text, the magnetization oscillates around the \(x\) axis. The solid lines in Fig. A.5(b) show examples of the magnetization oscillation with different initial conditions, whereas the dotted line represents the oscillation of the microwave voltage, \(\sin(4\pi f_{\mathrm{L}}t)\). It indicates that the oscillation frequency of the magnetization is half of the microwave frequency.
The microwave voltage fixes the phase of the magnetization with respect to the voltage oscillation. There is more than one solution for the magnetization phase [32]. The phase depends on the initial conditions, as implied in Fig. A.5(b); see also Appendix B below. When studying chaos in the main text, we choose solutions of the LLG equation with the same phase because chaos is characterized by sensitivity to the initial state.
## Appendix B Preparation of initial state
Chaotic dynamics are sensitive to the initial state. Therefore, to identify the existence of chaos, it is necessary to study the dependence of the temporal dynamics on the initial state. We prepare natural initial states by solving the LLG equation in the absence of the input signal. The value of \(H_{\rm K}\) is that in the absence of external voltage and is 6.28 kOe [31]. Also note that, at zero temperature, the solution of the LLG equation saturates to the minimum energy state, \(\sin\theta=H_{\rm appl}/H_{\rm K}\), where \(\theta\) relates to \(m_{z}\) via \(m_{z}=\cos\theta\). To obtain natural distribution of the initial state [12], we add a torque, \(-\gamma{\bf m}\times{\bf h}\), due to thermal fluctuation to the right-hand side of Eq. (1). Here, the components of \({\bf h}\) satisfy the fluctuation-dissipation theorem [44],
\[\langle h_{k}(t)h_{\ell}(t^{\prime})\rangle=\frac{2\alpha k_{\rm B}T}{\gamma MV }\delta_{k\ell}\delta(t-t^{\prime}),\] (B.1)
where the saturation magnetization \(M\) is assumed to be 955 emu/cm\({}^{3}\)[31]. The temperature \(T\) is 300 K, while the volume is \(V=\pi\times 50\times 50\times 1.1\) nm\({}^{3}\), which is typical of VCMA experiments. The thermal fluctuation excites a small-amplitude oscillation of the magnetization around the energetically minimum state at the ferromagnetic resonance frequency. We sample the instantaneous directions of the oscillating magnetization and use them as the natural initial states.
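In discrete time, the fluctuation-dissipation relation (B.1) fixes the standard deviation of each Cartesian component of \(\mathbf{h}\) per integration step. A minimal sketch follows, assuming CGS units and a Gaussian white-noise realization; the constant and function names are our own.

```python
import numpy as np

KB = 1.380649e-16          # Boltzmann constant [erg/K]
GAMMA, ALPHA = 1.764e7, 0.005
M_S, TEMP = 955.0, 300.0   # saturation magnetization [emu/cm^3], temperature [K]
VOL = np.pi * (50e-7)**2 * 1.1e-7   # disk of radius 50 nm, thickness 1.1 nm [cm^3]

def thermal_field(dt, rng):
    """One realization of the thermal field h over a step dt, in Oe.
    Per Eq. (B.1), Var(h_i) = 2 alpha k_B T / (gamma M V dt)."""
    sigma = np.sqrt(2.0 * ALPHA * KB * TEMP / (GAMMA * M_S * VOL * dt))
    return rng.normal(0.0, sigma, size=3)
```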
Figure A.5: (a) Spatial trajectory of the parametric oscillation induced by a microwave voltage. (b) Temporal evolution of \(m_{z}\) with different initial conditions. The dotted line represents the oscillation of the microwave voltage for comparison.

Figure B.6(a) shows the spatial distribution of the initial states, where we prepared 60 samples. Figure B.6(b) summarizes the values of \(\mathbf{m}\) for these samples. For example, the dynamics shown in Fig. 3 in the main text are derived by using the sample numbers 1 and 2 as the initial states, where the solutions of the magnetization in both samples have the same phase when the dynamics are driven by a microwave voltage. On the other hand, in Fig. A.5(b), the red and blue lines correspond to sample numbers 1 and 15. They are unsuitable for studying chaos because the dynamical states at which the random input signal is injected are greatly different.
## Appendix C Evaluation method of Lyapunov exponent
The Lyapunov exponent is evaluated by Shimada-Nagashima method [41]. As written in the main text, we add the random input signal from \(t=5\)\(\mu\)s. Let us denote the solution of the LLG equation at this time as \(\mathbf{m}(t)\). We introduce the zenith and azimuth angles, \(\theta\) and \(\varphi\), as \(\mathbf{m}=(m_{x},m_{y},m_{z})=(\sin\theta\cos\varphi,\sin\theta\sin\varphi, \cos\theta)\), i.e., \(\varphi=\tan^{-1}(m_{y}/m_{x})\) and \(\theta=\cos^{-1}m_{z}\). Then, we also introduce \(\mathbf{m}^{(1)}(t)=(\sin\theta^{(1)}\cos\varphi^{(1)},\sin\theta^{(1)}\sin \varphi^{(1)},\cos\theta^{(1)})\). Here, \(\theta^{(1)}\) and \(\varphi^{(1)}\) satisfy \(\epsilon=\sqrt{[\theta-\theta^{(1)}]^{2}+[\varphi-\varphi^{(1)}]^{2}}\), where \(\epsilon=1.0\times 10^{-5}\) is a fixed value. For convenience, let us introduce a notation,
\[\mathcal{D}[\mathbf{m}(t),\mathbf{m}^{(1)}(t)]=\sqrt{\left[\theta(t)-\theta^{ (1)}(t)\right]^{2}+\left[\varphi(t)-\varphi^{(1)}(t)\right]^{2}}\] (C.1)
Figure B.6: (a) Spatial distribution of the initial states prepared by solving the LLG equation with thermal fluctuation. (b) The samples of \(m_{x}\), \(m_{y}\), and \(m_{z}\) corresponding to the small-amplitude oscillation of the magnetization around the energetically minimum state excited by thermal fluctuation.
Solving the LLG equations of \(\mathbf{m}(t)\) and \(\mathbf{m}^{(1)}(t)\), we obtain \(\mathbf{m}(t+\Delta t)\) and \(\mathbf{m}^{(1)}(t+\Delta t)\). From them, we evaluate
\[\mathcal{D}[\mathbf{m}(t+\Delta t),\mathbf{m}^{(1)}(t+\Delta t)]=\sqrt{\left[ \theta(t+\Delta t)-\theta^{(1)}(t+\Delta t)\right]^{2}+\left[\varphi(t+\Delta t )-\varphi^{(1)}(t+\Delta t)\right]^{2}}.\] (C.2)
Then, a temporal Lyapunov exponent at \(t+\Delta t\) is given by
\[\Lambda^{(1)}=\frac{1}{\Delta t}\ln\frac{\mathscr{D}^{(1)}}{\epsilon},\] (C.3)
where \(\mathscr{D}^{(1)}=\mathcal{D}[\mathbf{m}(t+\Delta t),\mathbf{m}^{(1)}(t+ \Delta t)]\).
Next, we introduce \(\mathbf{m}^{(2)}(t+\Delta t)=(\sin\theta^{(2)}\cos\varphi^{(2)},\sin\theta^{( 2)}\sin\varphi^{(2)},\cos\theta^{(2)})\), where \(\theta^{(2)}\) and \(\varphi^{(2)}\) are defined as
\[\theta^{(2)}(t+\Delta t)=\theta(t+\Delta t)+\epsilon\frac{\theta ^{(1)}(t+\Delta t)-\theta(t+\Delta t)}{\mathcal{D}[\mathbf{m}(t+\Delta t), \mathbf{m}^{(1)}(t+\Delta t)]},\] (C.4) \[\varphi^{(2)}(t+\Delta t)=\varphi(t+\Delta t)+\epsilon\frac{ \varphi^{(1)}(t+\Delta t)-\varphi(t+\Delta t)}{\mathcal{D}[\mathbf{m}(t+ \Delta t),\mathbf{m}^{(1)}(t+\Delta t)]}.\] (C.5)
According to these definitions, we notice that
\[\mathcal{D}[\mathbf{m}(t+\Delta t),\mathbf{m}^{(2)}(t+\Delta t)]=\epsilon.\] (C.6)
In other words, \(\mathbf{m}^{(2)}(t+\Delta t)\) is defined by moving \(\mathbf{m}(t+\Delta t)\) to the direction of \(\mathbf{m}^{(1)}(t+\Delta t)\) with a distance \(\epsilon\) in the \((\theta,\varphi)\) phase space. Then, we solve the LLG equations for \(\mathbf{m}(t+\Delta t)\) and \(\mathbf{m}^{(2)}(t+\Delta t)\) and obtain \(\mathbf{m}(t+2\Delta t)\) and \(\mathbf{m}^{(2)}(t+2\Delta t)\). The temporal Lyapunov exponent at \(t+2\Delta t\) is
\[\Lambda^{(2)}=\frac{1}{\Delta t}\ln\frac{\mathscr{D}^{(2)}}{\epsilon},\] (C.7)
where \(\mathscr{D}^{(2)}=\mathcal{D}[\mathbf{m}(t+2\Delta t),\mathbf{m}^{(1)}(t+2 \Delta t)]\).
These procedures are generalized. At \(t+n\Delta t\), we have \(\mathbf{m}(t+n\Delta t)=(\sin\theta(t+n\Delta t)\cos\varphi(t+n\Delta t),\sin\theta(t+n\Delta t)\sin\varphi(t+n\Delta t),\cos\theta(t+n\Delta t))\) and \(\mathbf{m}^{(n)}(t+n\Delta t)=(\sin\theta^{(n)}(t+n\Delta t)\cos\varphi^{(n)}(t+n\Delta t),\sin\theta^{(n)}(t+n\Delta t)\sin\varphi^{(n)}(t+n\Delta t),\cos\theta^{(n)}(t+n\Delta t))\). Then, we define \(\mathbf{m}^{(n+1)}(t+n\Delta t)=(\sin\theta^{(n+1)}(t+n\Delta t)\cos\varphi^{(n+1)}(t+n\Delta t),\sin\theta^{(n+1)}(t+n\Delta t)\sin\varphi^{(n+1)}(t+n\Delta t),\cos\theta^{(n+1)}(t+n\Delta t))\) by moving \(\mathbf{m}(t+n\Delta t)\) to the direction of \(\mathbf{m}^{(n)}(t+n\Delta t)\) with a distance
\(\epsilon\) in the phase space as
\[\theta^{(n+1)}(t+n\Delta t)=\theta(t+n\Delta t)+\epsilon\frac{\theta^ {(n)}(t+n\Delta t)-\theta(t+n\Delta t)}{\mathcal{D}[\mathbf{m}(t+n\Delta t), \mathbf{m}^{(n)}(t+n\Delta t)]},\] (C.8) \[\varphi^{(n+1)}(t+n\Delta t)=\varphi(t+n\Delta t)+\epsilon\frac{ \varphi^{(n)}(t+n\Delta t)-\varphi(t+n\Delta t)}{\mathcal{D}[\mathbf{m}(t+n \Delta t),\mathbf{m}^{(n)}(t+n\Delta t)]}.\] (C.9)
Note that \(\mathcal{D}[\mathbf{m}(t+n\Delta t),\mathbf{m}^{(n+1)}(t+n\Delta t)]=\epsilon\). Then, solving the LLG equations of \(\mathbf{m}(t+n\Delta t)\) and \(\mathbf{m}^{(n+1)}(t+n\Delta t)\), we obtain \(\mathbf{m}(t+(n+1)\Delta t)\) and \(\mathbf{m}^{(n+1)}(t+(n+1)\Delta t)\). A temporal Lyapunov exponent at \(t+(n+1)\Delta t\) is
\[\Lambda^{(n+1)}=\frac{1}{\Delta t}\ln\frac{\mathscr{D}^{(n+1)}}{\epsilon},\] (C.10)
where \(\mathscr{D}^{(n+1)}=\mathcal{D}[\mathbf{m}(t+(n+1)\Delta t),\mathbf{m}^{(n+1) }(t+(n+1)\Delta t)]\). The Lyapunov exponent is defined as a long-time average of the temporal Lyapunov exponent as
\[\Lambda=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\Lambda^{(i)}.\] (C.11)
Note that \(\mathbf{m}^{(1)}(t)\) given at the initial time can point in an arbitrary direction, although it should satisfy the condition \(\mathcal{D}[\mathbf{m}(t),\mathbf{m}^{(1)}(t)]=\epsilon\). The Shimada-Nagashima method assumes that, even if the initial perturbation points in an arbitrary direction, the difference of the two solutions will align with the most strongly expanding direction as the procedure is repeated. Since the random input signals are injected from \(t=5.0\)\(\mu\)s to \(t=15.0\)\(\mu\)s and the time increment is 1 ps, we evaluate \((15.0-5.0)\,\mu\mathrm{s}/1\,\mathrm{ps}=10^{7}\) temporal Lyapunov exponents and take their average.
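The renormalization loop of Eqs. (C.1)-(C.11) can be condensed into a short sketch, reusing `llg_rhs` from the first code block above. All helper names are ours, and the sketch assumes the azimuth difference stays small enough that no \(2\pi\) wrap-around of \(\varphi\) occurs; it is an illustration, not the authors' code.

```python
import numpy as np

EPS = 1.0e-5   # fixed perturbation size epsilon

def to_angles(m):
    return np.array([np.arccos(m[2]), np.arctan2(m[1], m[0])])   # (theta, phi)

def to_vector(a):
    th, ph = a
    return np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])

def rk4_step(m, h_k_of_t, t, dt):
    """One RK4 step of Eq. (1), same scheme as integrate_llg above."""
    k1 = llg_rhs(m, h_k_of_t(t))
    k2 = llg_rhs(m + 0.5*dt*k1, h_k_of_t(t + 0.5*dt))
    k3 = llg_rhs(m + 0.5*dt*k2, h_k_of_t(t + 0.5*dt))
    k4 = llg_rhs(m + dt*k3, h_k_of_t(t + dt))
    m = m + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return m / np.linalg.norm(m)

def lyapunov(m0, h_k_of_t, t0=5.0e-6, n_steps=10**5, dt=1.0e-12):
    """Shimada-Nagashima estimate of the Lyapunov exponent, Eqs. (C.1)-(C.11)."""
    m = np.array(m0, float)
    a_pert = to_angles(m) + EPS * np.array([1.0, 0.0])   # initial offset, |.| = EPS
    total, t = 0.0, t0
    for _ in range(n_steps):
        m = rk4_step(m, h_k_of_t, t, dt)
        m_pert = rk4_step(to_vector(a_pert), h_k_of_t, t, dt)
        t += dt
        a, a_p = to_angles(m), to_angles(m_pert)
        d = np.linalg.norm(a_p - a)            # distance D at t + dt, Eq. (C.2)
        total += np.log(d / EPS)               # temporal exponent, Eq. (C.10)
        a_pert = a + (EPS / d) * (a_p - a)     # renormalize, Eqs. (C.8)-(C.9)
    return total / (n_steps * dt)              # long-time average, Eq. (C.11)
```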
|
2306.09401 | SMEFT Restrictions On Exclusive $b \to u \ell ν$ Decays | Exclusive semileptonic $b$ hadron decays ($b \to u \ell \nu$) serve as a
sandbox for probing strong and electroweak interactions and for extracting the
CKM element $V_{ub}$. Instead, this work investigates their underexplored
potential to reveal new short-distance physics. Utilizing SMEFT as a conduit to
chart territory beyond the SM, we demonstrate that substantive new physics
contributions in $b \to u \ell \nu$ are necessarily linked to correlated
effects in rare neutral-current $b$ decays, neutral $B$ meson mixing or
high-mass Drell-Yan tails. We find that measurements of the latter processes
strongly restrict the allowed deviations in the former. A complete set of
tree-level mediators, originating from a perturbative ultraviolet model and
matching at dimension 6, is thoroughly explored to support this assertion. As a
showcase application, we examine the feasibility of a new physics
interpretation of the recent tension in exclusive $|V_{ub}|$ extraction from $B
\to V \ell \nu$ where $V=(\rho,\omega)$. | Admir Greljo, Jakub Salko, Aleks Smolkovič, Peter Stangl | 2023-06-15T18:00:00Z | http://arxiv.org/abs/2306.09401v1 | # SMEFT Restrictions On Exclusive \(b\to u\ell\nu\) Decays
###### Abstract
Exclusive semileptonic \(b\) hadron decays (\(b\to u\ell\nu\)) serve as a sandbox for probing strong and electroweak interactions and for extracting the CKM element \(V_{ub}\). Instead, this work investigates their underexplored potential to reveal new short-distance physics. Utilizing SMEFT as a conduit to chart territory beyond the SM, we demonstrate that substantive new physics contributions in \(b\to u\ell\nu\) are necessarily linked to correlated effects in rare neutral-current \(b\) decays, neutral \(B\) meson mixing or high-mass Drell-Yan tails. We find that measurements of the latter processes strongly restrict the allowed deviations in the former. A complete set of tree-level mediators, originating from a perturbative ultraviolet model and matching at dimension 6, is thoroughly explored to support this assertion. As a showcase application, we examine the feasibility of a new physics interpretation of the recent tension in exclusive \(|V_{ub}|\) extraction from \(B\to V\ell\nu\) where \(V=(\rho,\omega)\).
Keywords:\(b\) decays, SMEFT, Drell-Yan, Global likelihood
## 1 Introduction
The study of \(b\) hadron decays has attracted increasing attention in recent years and promises to remain an active research area in this decade. The LHCb experiment, presently underway at the Large Hadron Collider (LHC) at CERN, stands as a pivotal complement to the B-factory studies conducted at the late BaBar and Belle experiments. As we advance into the decade, the Belle II experiment at Super KEKB is rapidly gaining ground, with its luminosity nearing that of the BaBar dataset. Forecasts suggest that Belle II will outstrip its predecessor, the original Belle experiment, in the forthcoming years. The projected data sets from Belle II [1] and LHCb [2] have the potential to reshape our understanding of \(b\) physics.
The theoretical framework of weak decays is constructed on the foundation of effective field theory, which advocates the factorization of long- and short-distance contributions. Long-distance contributions are encapsulated by form factors, a domain wherein recent advancements in lattice QCD have been particularly substantial [3; 4; 5; 6; 7; 8]. Short-distance contributions, conversely, are expressed via the Wilson coefficients (WCs) of the Weak Effective Theory (WET) [9; 10; 11; 12] and can be calculated perturbatively in the Standard Model (SM). Any deviation in the WCs from their SM predictions would flag the presence of short-distance new physics (NP). When the NP scale surpasses the electroweak (EW) scale, an encompassing model-independent interpretation can be depicted through the Standard Model Effective Field Theory (SMEFT) [13; 14; 15; 16; 17; 18].
This succession of effective theories offers a methodical blueprint to describe the variety of possible short-distance physics beyond the SM (BSM). By hypothesizing a weakly-coupled ultraviolet (UV) theory as the subsequent layer of physics, we can facilitate perturbative matching calculations to the SMEFT [19; 20; 21; 22]. The renormalization group (RG) evolution from the NP scale down to the hadron scale [23; 24; 25; 26; 12], encompassing an intermediary SMEFT to WET matching [27; 11; 28], yields reliable predictions. This framework permits the systematic organization of BSM effects, which may be observable in a given weak hadron decay at each order in the EFT and loop expansion. In practice, our interest is often focused on the few leading orders which are capable of producing a substantial effect. Such a comprehensive classification study for a given weak decay enables us to identify other observables correlated at both low and high energies, thereby proposing a robust test for NP.
In accordance with this perspective, the principal objective of this paper is to explore potential BSM effects in exclusive \(b\) hadron decays, particularly those undergoing the quark-level transition \(b\to u\ell\nu\). Within the SM, these transitions are classified as tree-level weak decays, facilitated through the exchange of a \(W\) boson, and regulated by the Cabibbo-Kobayashi-Maskawa (CKM) matrix element \(V_{ub}\). Traditionally, these decays serve as a sandbox for probing strong and EW interactions and for extracting the absolute value of the CKM element \(|V_{ub}|\). In the course of this study, we venture into the relatively unexamined potential of these decays to reveal insights into new short-distance physics. While model-independent constraints on new physics from \(b\to u\ell\nu\) processes have been studied in the WET [29; 30; 31; 32; 33], employing the SMEFT allows us to consider crucial correlations imposed by
phenomena such as flavor-changing neutral currents, along with other observables like the tails of high-mass Drell-Yan distributions.
The extraction of the CKM matrix elements \(|V_{qb}|\) (where \(q=u,c\)) has exhibited slight discrepancies between determinations derived from exclusive and inclusive decays [34; 35; 36; 37; 38; 39], and the NP explanation was found to be challenging [40; 41; 42; 43; 44]. A recent reconsideration of the \(B\to\pi\) form factors [45] reveals a \(|V_{ub}|\) value congruent with the most recent inclusive determination from Belle [46]; see also [47]. While this puzzle seems to be settled, a tension remains in the \(|V_{ub}|\) determination from the exclusive \(B\to\{\rho,\omega\}\ell\nu\) decays [48]. As an ancillary application of our study, we offer insights into the feasibility of a NP interpretation.
The inclusive \(B\to X_{u}\ell\nu\) decays have sizable uncertainties, partially attributed to the subtraction of large \(B\to X_{c}\ell\nu\) backgrounds. Furthermore, the requisite background suppression cuts challenge the theoretical description based on the heavy quark expansion [49; 50; 51]. Consequently, the inclusive decays will not be further addressed, and the scope of our work is limited to the exclusive \(b\to u\ell\nu\) decays.
The structure of this paper is organized as follows: In Section 2, we undertake a comprehensive analysis of \(b\to u\ell\nu\) decays within the framework of the WET. Progressing to Section 3, we transition to the SMEFT and execute a global analysis that also takes into consideration correlations with other data sets. Subsequently, in Section 4, we enumerate all tree-level mediator models that match onto the SMEFT scenarios and probe the implied correlations in detail. The paper reaches its conclusion in Section 5. Furthermore, Appendix A provides additional details on the determination of \(|V_{ub}|\) in the WET, while Appendix B delves into tree-level models.
## 2 WET analysis of \(b\to u\ell\nu\) decays
This section presents the theoretical framework, the WET, to describe the effects of NP at short distances in \(b\to u\ell\nu\) decays (Subsection 2.1). We detail the relevant set of operators and their contributions to observables. Furthermore, we discuss the implementation of the experimental data and theoretical predictions within the flavio framework [52]. Finally, in Subsection 2.2, we offer a comprehensive interpretation of the data in the context of the WET. This serves as the starting point for the SMEFT analysis in Section 3.
### Setup
Here we focus on fully leptonic and semileptonic exclusive \(B\) meson decays with the underlying \(b\to ul\nu\) transition. We employ the following weak effective Hamiltonian,
\[\mathcal{H}_{\text{eff}}=\mathcal{H}_{\text{eff}}^{\text{SM}}+\frac{4G_{F}}{ \sqrt{2}}V_{ub}\sum_{i,l}C_{i}^{(l)}O_{i}^{(l)}+\text{h.c.}\,, \tag{1}\]
where \(O_{i}^{(l)}\) are local effective operators, \(C_{i}^{(l)}\) are WCs encoding contributions of short-distance NP, \(V_{ub}\) is the CKM matrix element, \(G_{F}\) is the Fermi constant, and \(l\) represents the lepton flavor (\(l=e,\mu,\tau\)).1 We consider the following set of local operators at mass
dimension 6:
\[O^{(l)}_{V_{L}} =(\bar{u}_{L}\gamma^{\mu}b_{L})(\bar{l}_{L}\gamma_{\mu}\nu_{lL})\,, O^{(l)}_{V_{R}} =(\bar{u}_{R}\gamma^{\mu}b_{R})(\bar{l}_{L}\gamma_{\mu}\nu_{lL})\,, \tag{2}\] \[O^{(l)}_{S_{L}} =(\bar{u}_{R}b_{L})(\bar{l}_{R}\nu_{lL})\,, O^{(l)}_{S_{R}} =(\bar{u}_{L}b_{R})(\bar{l}_{R}\nu_{lL})\,,\] (3) \[O^{(l)}_{T} =(\bar{u}_{R}\sigma^{\mu\nu}b_{L})(\bar{l}_{R}\sigma_{\mu\nu}\nu_ {lL})\,. \tag{4}\]
In the SM, only the left-handed vector operator is generated through a tree-level exchange of the \(W\) boson, leading to \(C^{(l)\rm SM}_{V_{L}}=1\) using the same normalization as in Eq. (1).2 On the other hand, short-distance NP can, in general, generate the full set of operators in Eqs. (2) - (4). Various low-energy observables, such as the branching ratios of leptonic and semileptonic exclusive \(B\) decays, are highly sensitive probes of various combinations of the corresponding WCs, as implied by Lorentz symmetry and invariance of QCD under parity.
Footnote 2: The EW corrections are included as \(C^{(l)\rm SM}_{V_{L}}=1+\frac{\alpha}{\pi}\log\left(\frac{m_{Z}}{\mu_{b}}\right)\) with \(\mu_{b}=4.8\) GeV.
The fully leptonic decays \(B\to l\nu_{l}\) are sensitive to axial, \(C^{(l)}_{A}\equiv C^{(l)}_{V_{R}}-C^{(l)}_{V_{L}}\), and pseudoscalar, \(C^{(l)}_{P}\equiv C^{(l)}_{S_{R}}-C^{(l)}_{S_{L}}\), combinations of the WCs, via
\[\frac{\text{BR}(B\to l\nu_{l})}{\text{BR}(B\to l\nu_{l})_{\text{SM}}}=\left|1 -(C^{(l)}_{V_{R}}-C^{(l)}_{V_{L}})+\frac{m_{B}^{2}}{m_{l}(m_{b}+m_{u})}(C^{(l )}_{S_{R}}-C^{(l)}_{S_{L}})\right|^{2}\,. \tag{5}\]
Note that the pseudoscalar contribution to the branching ratios of the leptonic \(B\) decays is helicity enhanced compared to the axial contribution, rendering them highly efficient probes of the pseudoscalar operator.
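For orientation, the ratio of Eq. (5) is simple enough to evaluate directly. A minimal Python sketch follows; the mass values are indicative PDG-like numbers and the function name is our own choice.

```python
# Sketch of Eq. (5): BR(B -> l nu) / BR(B -> l nu)_SM for given Wilson coefficients.
M_B, M_BQ, M_U = 5.279, 4.18, 0.002   # B meson, b- and u-quark masses [GeV]

def br_ratio_Blnu(m_l, c_vl=0.0, c_vr=0.0, c_sl=0.0, c_sr=0.0):
    """NP/SM ratio of Eq. (5); coefficients may be complex."""
    amp = 1.0 - (c_vr - c_vl) + M_B**2 / (m_l * (M_BQ + M_U)) * (c_sr - c_sl)
    return abs(amp)**2

# Helicity enhancement at work: the pseudoscalar term enters with the large
# prefactor m_B^2 / (m_l (m_b + m_u)) ~ 63 for muons, so C_SR = 5e-4 already
# shifts BR(B -> mu nu) by about 6%.
print(br_ratio_Blnu(m_l=0.1057, c_sr=5e-4))
```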
The semileptonic decay modes of \(B\) mesons into a pseudoscalar meson \(P\) are sensitive to vectorial, \(C^{(l)}_{V}\equiv C^{(l)}_{V_{R}}+C^{(l)}_{V_{L}}\), scalar \(C^{(l)}_{S}\equiv C^{(l)}_{S_{R}}+C^{(l)}_{S_{L}}\), and tensor \(C^{(l)}_{T}\) WCs. The differential decay width (relative to the SM one) can be written as [53; 54]
\[\begin{split}\frac{d\Gamma(B\to Pl\nu)/dq^{2}}{d\Gamma(B\to Pl \nu)^{\text{SM}}/dq^{2}}=&\left|1+(C^{(l)}_{V_{R}}+C^{(l)}_{V_{L} })\right|^{2}\left[\left(1+\frac{m_{l}^{2}}{2q^{2}}\right)H^{s\,2}_{V,0}+\frac {3}{2}\frac{m_{l}^{2}}{q^{2}}\,H^{s\,2}_{V,t}\right]\\ &+\frac{3}{2}|C^{(l)}_{S_{R}}+C^{(l)}_{S_{L}}|^{2}\,H^{s\,2}_{S}+ 8|C^{l}_{T}|^{2}\left(1+\frac{2m_{l}^{2}}{q^{2}}\right)\,H^{s\,2}_{T}\\ &+3\text{Re}[(1+(C^{(l)}_{V_{R}}+C^{(l)}_{V_{L}}))(C^{(l)*}_{S_{R }}+C^{(l)*}_{S_{L}})]\frac{m_{l}}{\sqrt{q^{2}}}\,H^{s}_{S}H^{s}_{V,t}\\ &-12\text{Re}[(1+(C^{(l)}_{V_{R}}+C^{(l)}_{V_{L}}))C^{(l)*}_{T}] \frac{m_{l}}{\sqrt{q^{2}}}\,H^{s}_{T}H^{s}_{V,0}\,,\end{split} \tag{6}\]
where \(q^{2}\) is the momentum transfer squared, and \(H^{s\,2}_{V,0},H^{s\,2}_{V,t},H^{s\,2}_{S},H^{s\,2}_{T}\) are hadronic matrix elements, parameterized by the three hadronic form factors \(f_{+,0,T}(q^{2})\) (see Ref. [54] for explicit expressions). We use the latest determination of the \(B\to\pi\) form factors from Ref. [45] where a combined fit to light-cone sum rules (LCSR) and lattice QCD data was performed.
The semileptonic decay modes of \(B\) mesons into vector meson \(V\) are rich in structure, with the differential decay width (relative to the SM one) given as [53; 54]
\[\frac{d\Gamma(B\to Vl\nu)/dq^{2}}{d\Gamma(B\to Vl\nu)^{\rm SM}/dq^{2}}= \left(|1+C_{V_{L}}^{(l)}|^{2}+|C_{V_{R}}^{(l)}|^{2}\right)\left[ \left(1+\frac{m_{l}^{2}}{2q^{2}}\right)\left(H_{V,+}^{2}+H_{V,-}^{2}+H_{V,0}^{ 2}\right)+\frac{3}{2}\frac{m_{l}^{2}}{q^{2}}\,H_{V,t}^{2}\right]\] \[-2\text{Re}[(1+C_{V_{L}}^{(l)})C_{V_{R}}^{(l)*}]\left[\left(1+ \frac{m_{l}^{2}}{2q^{2}}\right)\left(H_{V,0}^{2}+2H_{V,+}H_{V,-}\right)+\frac{ 3}{2}\frac{m_{l}^{2}}{q^{2}}\,H_{V,t}^{2}\right]\] \[+\frac{3}{2}|C_{S_{R}}^{(l)}-C_{S_{L}}^{(l)}|^{2}\,H_{S}^{2}+8|C_ {T}^{l}|^{2}\left(1+\frac{2m_{l}^{2}}{q^{2}}\right)\left(H_{T,+}^{2}+H_{T,-}^{ 2}+H_{T,0}^{2}\right)\] \[+3\text{Re}[(1-(C_{V_{R}}^{(l)}-C_{V_{L}}^{(l)}))(C_{S_{R}}^{(l)* }-C_{S_{L}}^{(l)*})]\frac{m_{l}}{\sqrt{q^{2}}}\,H_{S}H_{V,t}\] \[-12\text{Re}[(1+C_{V_{L}}^{(l)})C_{T}^{(l)*}]\frac{m_{l}}{\sqrt{q^ {2}}}\left(H_{T,0}H_{V,0}+H_{T,+}H_{V,+}-H_{T,-}H_{V,-}\right)\] \[+12\text{Re}[C_{V_{R}}^{(l)}C_{T}^{(l)*}]\frac{m_{l}}{\sqrt{q^{2}} }\left(H_{T,0}H_{V,0}+H_{T,+}H_{V,-}-H_{T,-}H_{V,+}\right)\,, \tag{7}\]
where again, the explicit expressions of the hadronic matrix elements can be found in Ref. [54], parameterized in terms of the form factors \(A_{0,1,2}(q^{2})\), \(V(q^{2})\) and \(T_{1,2,3}(q^{2})\). The form factors for the \(B\to\omega\) and \(B\to\rho\) transitions have so far been determined only with light-cone sum rules [55].
In the following, we contrast the WET with the available experimental data on the leptonic and semileptonic \(b\to ul\nu\) decays, focusing mostly on channels with light leptons. We assume exact isospin symmetry, and the spectra are to be understood as CP-averaged. Currently, the best determination of \(\text{BR}(B\to e\nu)\) and \(\text{BR}(B\to\mu\nu)\) comes from Belle [56; 57], whereas for \(\text{BR}(B\to\tau\nu)\) we use the latest PDG average [58] of Belle [59; 60] and BaBar [61; 62] measurements,
\[\text{BR}(B\to e\nu)^{\rm exp} <9.8\times 10^{-7}\text{ at 90\% CL}\,, \tag{8}\] \[\text{BR}(B\to\mu\nu)^{\rm exp} =(5.3\pm 2.0\pm 0.9)\times 10^{-7}\,,\] \[\text{BR}(B\to\tau\nu)^{\rm exp} =(1.09\pm 0.24)\times 10^{-4}\,.\]
It is important to note that the obtained results are based on the assumption that backgrounds from semileptonic decays are NP-free. This assumption adds complexity to the global fit process, which will be discussed in detail later in this article.
As for the semileptonic transitions, we use the latest available data on the differential branching ratios of \(B\to\pi l\nu\), \(B\to\omega l\nu\) and \(B\to\rho l\nu\) from Belle [63; 64] and BaBar [65; 66; 67], as combined in the latest HFLAV averages [34; 48]. Unfortunately, the data is only reported as a combination of the electron and muon channels, hence for the majority of the following discussion (unless stated otherwise), we will assume lepton flavor universality (LFU) in light leptons, writing \(\ell\equiv e,\mu\). From the Belle analysis of \(B\to D\ell\nu\)[68], we expect a similar experimental sensitivity to electrons and muons in the decay modes discussed here. Hence we assume \(C^{(\ell)}=1/2(C^{(e)}+C^{(\mu)})\). There is also an upper limit on \(B\to\pi\tau\nu\) from Belle [69], which does not play a significant role in the scenarios studied here at the current level of experimental precision.
The predictions and measurements discussed above are available in the open source python package flavio[52] and part of the global SMEFT likelihood smelli[70; 71]. As part of performing the study discussed here, we have updated the measurements of \(B\to\mu\nu\), \(B\to\pi\ell\nu\), \(B\to\omega\ell\nu\) and \(B\to\rho\ell\nu\) in the database of measurements. In our numerical analysis, we take into account the new physics dependence of the theory uncertainties and their correlations, as described in Ref. [72].
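For readers who want to reproduce such predictions, a hedged sketch of the intended workflow with flavio and the wilson package is shown below. The observable string and the Wilson-coefficient key follow the WCxf/flavio conventions as we understand them and should be checked against the package documentation; they are assumptions, not taken verbatim from this paper.

```python
import flavio
from wilson import Wilson

# NP point: a small left-handed vector coefficient in the b -> u mu nu sector.
# The key 'CVL_bumunumu' is our best guess at the flavio WET-basis name; verify
# it against the flavio/WCxf documentation before use.
w = Wilson({'CVL_bumunumu': 0.05}, scale=4.8, eft='WET', basis='flavio')

print(flavio.sm_prediction('BR(B+->munu)'))       # SM prediction
print(flavio.np_prediction('BR(B+->munu)', w))    # prediction at the NP point
```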
### Results
Here we perform a comprehensive analysis of the constraining power of the above processes on the model-independent parameterization of short-distance NP effects by means of the WET Hamiltonian in Eq. (1). In the main results of this paper, we fix \(|V_{ub}|\) in Eq. (1) to \(|V_{ub}|=3.73\times 10^{-3}\), a value which is compatible with the recent global fits of the CKM matrix parameters [34]. We dedicate Appendix A to the issue of extracting \(|V_{ub}|\) in an EFT setting.
Firstly, we focus on the left- and right-handed vector operators in Eq. (2). On the upper left plot in Figure 1, we show the constraints in the \((C_{V_{L}}^{(\ell)},C_{V_{R}}^{(\ell)})\) plane, assuming real WCs and LFU in light leptons (see the discussion in the previous section). As anticipated from Eqs. (5) and (6), the branching ratios of the fully leptonic \(B\to\mu\nu\) and semileptonic \(B\to\pi\ell\nu\) decay modes are sensitive to perpendicular directions in this plane, the former to the axial direction and the latter to the vectorial direction. The constraint from \(B\to e\nu\) is not competitive in this plane, and we do not show it. On the other hand, the branching ratios of the semileptonic decay modes to final states with vector mesons are sensitive to both directions; see Eq. (7). As already anticipated in Ref. [48] and further demonstrated in Ref. [33], we observe a tension in the global fit at the level of \(\sim 2\sigma\) with respect to the SM point. The tension is a consequence of the measured differential spectra of \(B\to\omega\ell\nu\) and \(B\to\rho\ell\nu\) being consistently below the SM predictions (see e.g. [48]). A negative interference with the SM contribution is needed in order to explain the data, hence \(C_{V_{L}}^{(\ell)}<0\) is preferred. Notice that two degenerate best-fit regions are found, the first one being close to the SM point, where NP is a small correction, while the second one represents the region in which NP is almost canceling the SM contribution. Note, however, that the best-fit regions are in slight tension with the current constraint from \(B\to\mu\nu\). This constraint, however, comes with the caveat that the semileptonic \(B\to\rho\ell\nu\) and \(B\to\omega\ell\nu\) represent a part of the background events in the analysis of \(B\to\mu\nu\). These, in turn, depend on the WCs considered here. To account for this properly, a WET analysis would have to be performed already at the level of the experimental analysis. At this stage, it is hard to quantify this effect properly. However, moving to the best-fit point would decrease these backgrounds and potentially slightly increase the significance of the signal. This would further worsen the disagreement between the region preferred by \(B\to V\ell\nu\) and that preferred by \(B\to\mu\nu\).
The \(C_{V_{L}}^{(\ell)}=-C_{V_{R}}^{(\ell)}\) direction is not sensitive to \(B\to\pi\ell\nu\) decays and is preferred by the global fit in the \((C_{V_{L}}^{(\ell)},C_{V_{R}}^{(\ell)})\) plane as a solution to a mild \(B\to\{\rho,\omega\}\ell\nu\) discrepancy. In the upper right plot of Figure 1, we show the contours in the complex plane of this direction. Although a sizable imaginary part of the WCs is allowed by the global fit, a negative \(\text{Re}\,C^{(\ell)}_{V_{L}}=-\text{Re}\,C^{(\ell)}_{V_{R}}\) is needed so as to interfere destructively with the SM, as discussed above. For this reason, in the following, we will only assume NP that is aligned with the phase of the SM contributions.
Finally, in the bottom plot of Figure 1 we show the dependence of \(\Delta\chi^{2}\equiv\chi^{2}-\chi^{2}_{\text{min.}}\) on the assumed direction of real \(C^{(\ell)}_{V_{L}}=-C^{(\ell)}_{V_{R}}\). The constraint from \(B\to V\ell\nu\) again showcases the \(\sim 2.5\sigma\) tension with respect to the SM, whereas the constraint from \(B\to\mu\nu\) is in slight tension with the preferred region. Furthermore, we consider two well-motivated assumptions about the WCs for the \(\tau\) lepton flavor - either lepton flavor universality only in \(C_{V_{R}}^{(\tau)}=C_{V_{R}}^{(\ell)}\), or lepton flavor universality in both \(C_{V_{L}}^{(\tau)}=C_{V_{L}}^{(\ell)}\) and \(C_{V_{R}}^{(\tau)}=C_{V_{R}}^{(\ell)}\) (see Section 3 for the reasoning behind these assumptions). The constraints from \(B\to\tau\nu\) further challenge the region preferred by \(B\to V\ell\nu\) and will play an important role in the scenarios considered in Section 3.
We now turn our attention to the scalar operators defined in Eq. (3). In Figure 2, we show the constraints in the plane of \((C_{S_{L}},C_{S_{R}})\), assuming NP in either electrons (left plot) or in muons (right plot). As anticipated from Eqs. (5) and (6), the fully leptonic decay modes are sensitive to the pseudoscalar direction, whereas the semileptonic decay modes to a pseudoscalar final state meson are sensitive to the perpendicular (scalar) direction. The constraints from \(B\to V\ell\nu\) are not competitive in this plane, and hence we omit them in the plots.3 Note that here we relax the assumption of lepton flavor universality, which in the case of \(B\to\pi\ell\nu\) amounts to assuming NP only in one of the lepton channels, whereas still accounting for the SM prediction for the other channel. Due to the chirality-enhanced sensitivity of \(B\to\ell\nu\), the pseudoscalar direction is much better constrained compared to the scalar one. Furthermore, the branching ratio of \(B\to\mu\nu\) is significantly better constrained than \(B\to e\nu\), rendering the constraints in the muon scenario more stringent than in the electron scenario. In the muon scenario, we again observe two minima, one of which amounts to a significant cancellation between (large) NP and the SM contributions to \(B\to\mu\nu\). Finally, we comment that in this plane, there is less ambiguity expected in the contours from \(B\to\ell\nu\) due to the dependence of the backgrounds on the WCs, i.e., \(B\to\pi\ell\nu\) is blind to the direction to which \(B\to\ell\nu\) is sensitive, whereas \(B\to V\ell\nu\) are only marginally sensitive to the scalar WCs at the current experimental precision.

Figure 2: 2D contours in the scenario \((C_{S_{L}},C_{S_{R}})\), assuming NP either in electrons (left) or in muons (right).
Next, in Table 1, we collect the \(2\sigma\) bounds on the WET WCs in the \(b\to u\ell\nu\) sector, as organized by the Lorentz structure and parity of their respective operators. These results were obtained by considering one nonzero combination of WET WCs at a time. In all cases, except for the pseudoscalar coefficient, we assume LFU. In each case, a single process (we group the vector final states into \(V=(\rho,\omega)\)) dominates the global fit, as can be seen by comparing the \(2\sigma\) regions from the processes, pointed out in each column of the table, with the global fit underneath. The constraint on the vector operator is completely dominated by measurements of the \(B\to\pi\ell\nu\) branching ratios, as already seen in Figure 1. The perpendicular, axial-vector direction is, on the other hand, most constrained by \(B\to V\ell\nu\), where \(V=(\rho,\omega)\), and shows a slight tension with the SM prediction at the level of \(\sim 2.5\sigma\). As can already be expected from Figure 2, the scalar Wilson coefficient is most constrained from measurements of \(B\to\pi\ell\nu\), whereas the perpendicular, pseudoscalar direction is best constrained from fully leptonic modes \(B\to\ell\nu\), with the constraint on the pseudoscalar coefficient being much tighter due to the chirality-enhanced sensitivity of fully leptonic modes. Lastly, the current best bound on the Wilson coefficient of the tensor operator comes from \(B\to V\ell\nu\), which can be understood from Eq. (7). The tensor operator contributes constructively to \(B\to V\ell\nu\), worsening the tension between predictions and experimental measurements. This effect is enough to render the \(2\sigma\) constraint from \(B\to V\ell\nu\) a factor of \(\sim 2\) better compared to the constraint from \(B\to\pi\ell\nu\).
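For orientation, a bound of the kind quoted in Table 1 can be reproduced by a simple one-dimensional \(\Delta\chi^{2}\) scan. The following is a minimal sketch restricted, for brevity, to a single observable; the observable string and the WET coefficient key are assumptions following the flavio naming conventions.

```python
import numpy as np
import flavio
from flavio.statistics.likelihood import FastLikelihood
from wilson import Wilson

# Likelihood built from a single observable (pseudoscalar, muon channel)
fl = FastLikelihood(name='B->munu', observables=['BR(B+->munu)'])
fl.make_measurement()
par = flavio.default_parameters.get_central_all()

def chi2(c):
    # 'CSL_bumunumu' is assumed to follow the flavio WET basis naming
    w = Wilson({'CSL_bumunumu': c}, scale=4.8, eft='WET', basis='flavio')
    return -2 * fl.log_likelihood(par, w)

scan = np.linspace(-0.05, 0.05, 51)
chi2_vals = np.array([chi2(c) for c in scan])
delta_chi2 = chi2_vals - chi2_vals.min()  # 2 sigma region: delta_chi2 <= 4
```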
## 3 Global SMEFT analysis
In this section, we presume that the NP scale surpasses the EW scale, allowing the SMEFT to adequately describe the infrared (IR) physics. By limiting the SMEFT expansion to the leading, dimension-6 order, we identify correlations among various low-energy observables. Moreover, even when starting with the minimal set of operators for \(b\to u\ell\nu\) decays at the NP matching scale, significant additional effects are produced through the RG evolution down to the EW scale.
In Subsection 3.1, we detail the relevant SMEFT operators and elucidate their matching to the WET operators. Subsequently, in Subsection 3.2, we explore various observables that are correlated with \(b\to u\ell\nu\) decays in the SMEFT context. Ultimately, in Subsection 3.3, we offer an extensive global study of these processes in the SMEFT.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\(C_{V}^{\ell}\) & \(C_{A}^{\ell}\) & \(C_{S}^{\ell}\) & \(C_{P}^{e}\) & \(C_{P}^{\mu}\) & \(C_{T}^{\ell}\) \\ \hline
\(B\to\pi\ell\nu\): & \(B\to V\ell\nu\): & \(B\to\pi\ell\nu\): & \(B\to e\nu\): & \(B\to\mu\nu\): & \(B\to V\ell\nu\): \\
\([-0.057,0.108]\) & \([0.079,0.397]\) & \([-0.178,0.162]\) & \([-0.027,0.027]\) & \([-0.008,0.009]\) & \([-0.091,0.095]\) \\ \hline
global: & global: & global: & global: & global: & global: \\
\([-0.066,0.094]\) & \([0.039,0.382]\) & \([-0.178,0.162]\) & \([-0.027,0.027]\) & \([-0.008,0.009]\) & \([-0.09,0.095]\) \\
\end{tabular}
\end{table}
Table 1: \(2\sigma\) bounds on the vector, axial-vector, scalar, pseudoscalar, and tensor WCs in the WET. In each case, we point out the process that dominates the global fit (note that \(V=(\rho,\omega)\)). Each column represents a 1D fit with only a single combination of WET WCs active at a time.
### Setup
Above the EW scale, we use the following SMEFT effective Lagrangian at mass dimension 6 to parameterize model-independent effects of heavy NP,
\[\mathcal{L}_{\text{eff}}=\mathcal{L}_{\text{SM}}+\sum_{Q_{i}=Q_{i}^{\dagger}}\frac{C_{i}}{\Lambda^{2}}Q_{i}+\sum_{Q_{i}\neq Q_{i}^{\dagger}}\left(\frac{C_{i}}{\Lambda^{2}}Q_{i}+\frac{C_{i}^{*}}{\Lambda^{2}}Q_{i}^{\dagger}\right)\,, \tag{3.1}\]
where \(Q_{i}\) are local effective operators in the Warsaw basis [14], \(C_{i}\) are WCs, and \(\Lambda\) is the cutoff scale. In Table 2, we collect the subset of dimension-6 operators we focus on in this work: those that contribute at the tree level to \(b\to u\ell\nu\) processes and the closely related operators.4 The operators \(Q_{lq}^{(3)},Q_{ledq},Q_{lequ}^{(1)}\) and \(Q_{lequ}^{(3)}\) contribute to \(b\to u\ell\nu\) as contact interactions, whereas \(Q_{\phi q}^{(3)}\) and \(Q_{\phi ud}\) modify the left- and right-handed \(W\) couplings with quarks, respectively. Already at this stage, we omit the operator \(Q_{\phi l}^{(3)}\), which could contribute to \(b\to u\ell\nu\) through a modified \(W\) coupling to leptons, since it is tightly constrained by other processes (see Section 3.2).
Footnote 4: That is, operators that contribute to the same set of complementary observables and/or are generated when integrating out a perturbative UV model.
For completeness, the tree level matching of the operators in Table 2 to the WET Hamiltonian in Eq. (1), assuming the down-diagonal quark mass basis, is5
Footnote 5: Throughout this work, we consider a set of SMEFT operators that do not modify \(v\) and \(G_{F}\).
\[C_{V_{L}}^{(l)}=-\frac{V_{ud}}{V_{ub}}\frac{v^{2}}{\Lambda^{2}}\left[C_{lq}^{(3)}\right]_{ll13}+\frac{V_{ud}}{V_{ub}}\frac{v^{2}}{\Lambda^{2}}\left[C_{\phi q}^{(3)}\right]_{13}\,,\qquad C_{V_{R}}^{(l)}=\frac{1}{2V_{ub}}\frac{v^{2}}{\Lambda^{2}}\left[C_{\phi ud}\right]_{13}\,, \tag{3.2}\]
\[C_{S_{L}}^{(l)}=-\frac{1}{2V_{ub}}\frac{v^{2}}{\Lambda^{2}}\left[C_{lequ}^{(1)}\right]_{ll31}^{*}\,,\qquad C_{S_{R}}^{(l)}=-\frac{V_{ud}}{2V_{ub}}\frac{v^{2}}{\Lambda^{2}}\left[C_{ledq}\right]_{ll31}^{*}\,, \tag{3.3}\]
\[C_{T}^{(l)}=-\frac{1}{2V_{ub}}\frac{v^{2}}{\Lambda^{2}}\left[C_{lequ}^{(3)}\right]_{ll31}^{*}\,. \tag{3.4}\]
Interestingly, at dimension-6 SMEFT, the left-handed vector operator can be generated either through a contact interaction \(C_{lq}^{(3)}\) or through a modification of the \(W\) coupling with
left-handed quarks via \(C^{(3)}_{\phi q}\), whereas the right-handed vector operator can only be generated through a modification of the \(W\) coupling with right-handed quarks \(C_{\phi ud}\).6
Footnote 6: The non-universal right-handed vector operator in the WET is generated by the dimension-8 SMEFT operator \((\tilde{\phi}^{\dagger}\sigma^{a}\phi)(\bar{u}\gamma^{\mu}d)(\bar{l}\gamma_{\mu} \sigma^{a}l)\).
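For concreteness, the matching conditions in Eqs. (3.2) - (3.4) can be transcribed directly; the following minimal Python sketch (with purely illustrative numerical inputs) makes the parametric dependences explicit.

```python
# Direct transcription of the tree-level matching, Eqs. (3.2)-(3.4);
# all numerical inputs are illustrative
v = 246.22      # electroweak vev in GeV
Lam = 1000.0    # SMEFT scale in GeV
Vud = 0.974
Vub = 3.73e-3   # in general complex, see the discussion below

def C_VL(C_lq3, C_phiq3):
    return (Vud / Vub) * (v**2 / Lam**2) * (-C_lq3 + C_phiq3)

def C_VR(C_phiud):
    return (v**2 / Lam**2) * C_phiud / (2 * Vub)

def C_SL(C_lequ1):
    return -(v**2 / Lam**2) * C_lequ1.conjugate() / (2 * Vub)

def C_SR(C_ledq):
    return -(Vud / (2 * Vub)) * (v**2 / Lam**2) * C_ledq.conjugate()

def C_T(C_lequ3):
    return -(v**2 / Lam**2) * C_lequ3.conjugate() / (2 * Vub)
```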
Note that, assuming the standard CKM parameterization, \(V_{ub}\) in Eqs. (3.2) - (3.4) is complex and carries an imaginary part which is almost 3 times as large as its real part. Assuming real SMEFT WCs would thus result in WET WCs with substantial imaginary parts, firmly selecting a direction in, e.g., the upper right plot of Figure 1. In order to circumvent the computational complexity of assuming complex WCs in the SMEFT, we choose a direction in the complex plane of each WC that is aligned with the phase of \(V_{ub}\) in the following way:
\[C_{i}=\frac{V_{ub}}{|V_{ub}|}\tilde{C}_{i}\,,\qquad\tilde{C}_{i}\in\mathbb{R}\,. \tag{3.5}\]
It is straightforward to see that in Eqs. (3.2) - (3.4) the \(V_{ub}\) factor will cancel out. However, we keep its absolute value so as not to change the magnitude of the WCs. In the following, we will present results in terms of \(\tilde{C}_{i}\), unless otherwise stated.
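In code, this phase convention amounts to a simple rotation of each real coefficient; a minimal sketch, where the numerical value of the CP phase is illustrative:

```python
import cmath

delta = 68 * cmath.pi / 180             # CKM phase, illustrative value
Vub = 3.73e-3 * cmath.exp(-1j * delta)  # standard-parameterization V_ub

def aligned(C_tilde):
    """Eq. (3.5): rotate a real coefficient onto the phase of V_ub."""
    return (Vub / abs(Vub)) * C_tilde
```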
As for the numerical analysis, both in this and in the following section we use the same setup as already described in Section 2. Most of the complementary constraints discussed in the following are already a part of the public smelli[70; 71] likelihood, with the exception of high-mass Drell-Yan tails, which we add based on their implementation in flavio[73]. As already mentioned in Section 2, we take into account the dependence of the theory uncertainties on the NP parameters [72; 73]. We choose the initial scale at which the SMEFT operators are defined to be \(\Lambda=1\) TeV, and rely on wilson[74] for the running and matching7 of the WCs, both above and below the EW scale, and on the Wilson coefficient exchange format (WCxf) [75] to represent the Wilson coefficients and to fix the EFT bases.
Footnote 7: Note that the current official version of wilson neglects two insertions of vertex corrections in the matching of the SMEFT to the WET, which is necessary for some complementary constraints, see Section 3.2.3. We have added these neglected terms in a private version of wilson, and they will be available in a future official release.
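Schematically, the running-and-matching step looks as follows; the coefficient name is the WCxf Warsaw-basis label and the value (in GeV\({}^{-2}\)) is illustrative.

```python
from wilson import Wilson

# Define a SMEFT point at Lambda = 1 TeV and evolve it down to the WET
# at the b-quark mass scale, as done internally in our numerical setup
w = Wilson({'lq3_1113': 1e-8}, scale=1000, eft='SMEFT', basis='Warsaw')
wc_wet = w.match_run(scale=4.8, eft='WET', basis='flavio')
```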
### Complementary constraints
In this section, we discuss the phenomena correlated to \(b\to u\ell\nu\) transitions within the SMEFT framework. These include: neutral-current rare \(b\) decays, high-mass Drell-Yan production \(pp\to\ell^{+}\ell^{-}\) and \(pp\to\ell\nu\), \(B^{0}-\bar{B}^{0}\) oscillations, and EW gauge boson vertex corrections.
#### 3.2.1 Rare \(b\) decays
Due to \(SU(2)_{L}\) relations, the operators \(Q^{(3)}_{lq},Q_{ledq},Q^{(3)}_{\phi q}\), together with the related operators \(Q^{(1)}_{lq}\) and \(Q^{(1)}_{\phi q}\), enter the leptonic and semileptonic rare \(B\) decays with the underlying transition \(b\to d\ell\ell\) with no additional suppression, either as contact interactions or as modifications of the \(Z\) boson couplings with quarks. This leads to important constraints from measurements of the leptonic branching ratios \(B\to ee\), reported by LHCb [76], and
\(B\to\mu\mu\), reported by LHCb [77; 78], CMS [79] and ATLAS [80], and semileptonic branching ratios \(B\to\pi ee\), the upper limit of which was reported by Belle [81], \(B\to\pi\mu\mu\), measured differentially by LHCb [82], and \(B_{s}\to K^{*0}\mu\mu\) observed by LHCb [83] with a significance of \(3.4\sigma\). See Refs. [73; 84] for dedicated studies of this sector in the WET. The related FCNC process \(b\to d\nu\nu\) is sensitive to the operators \(Q_{lq}^{(1)},Q_{lq}^{(3)}\), \(Q_{\phi q}^{(1)}\) and \(Q_{\phi q}^{(3)}\). The upper limits on \(B\to\pi\nu\nu\) and \(B\to\rho\nu\nu\) have been determined by Belle using either hadronic [85] or semileptonic tagging [86].
In both \(b\to d\ell\ell\) and \(b\to d\nu\nu\) we expect unconstrained directions in the \((C_{lq}^{(1)},C_{lq}^{(3)})\) and \((C_{\phi q}^{(1)},C_{\phi q}^{(3)})\) pairs of WCs. Namely, in the case of contact interactions, there will be no contribution to \(b\to d\ell\ell\) when aligned with the direction of \(C_{lq}^{(1)}=-C_{lq}^{(3)}\), and no contribution to \(b\to d\nu\nu\) in the perpendicular direction of \(C_{lq}^{(1)}=C_{lq}^{(3)}\). These directions are also stable under RG effects. On the other hand, in the case of modified \(Z\) vertices, both \(b\to d\ell\ell\) and \(b\to d\nu\nu\) have the same flat direction of \(C_{\phi q}^{(1)}=-C_{\phi q}^{(3)}\) at the tree level, as this combination does not contribute to down-quark FCNCs. This relation is, however, mildly broken due to the different renormalization of the singlet and triplet operators. The effect is dominated by the \(y_{t}\)-enhanced contribution from the diagrams in which one of the Higgs legs is attached to a quark leg to form a loop, and a Higgs is emitted from the quark in the loop. Writing only the dominant parts of the RG equations (RGE) contributing to the effect relevant to our discussion, we have [24]
\[\begin{split}\left[\dot{C}_{\phi q}^{(1)}\right]_{pr}& \propto\frac{3}{2}\left[C_{\phi q}^{(1)}\right]_{pt}[Y_{u}^{ \dagger}Y_{u}]_{tr}-\frac{9}{2}\left[C_{\phi q}^{(3)}\right]_{pt}[Y_{u}^{ \dagger}Y_{u}]_{tr}\,,\\ \left[\dot{C}_{\phi q}^{(3)}\right]_{pr}&\propto \frac{1}{2}\left[C_{\phi q}^{(3)}\right]_{pt}[Y_{u}^{\dagger}Y_{u}]_{tr}- \frac{3}{2}\left[C_{\phi q}^{(1)}\right]_{pt}[Y_{u}^{\dagger}Y_{u}]_{tr}\,, \end{split} \tag{3.6}\]
where \(\dot{C}\equiv 16\pi^{2}\mu\frac{d}{d\mu}C\). This will be relevant for understanding Figure 4.
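To make the size of this effect concrete, Eq. (3.6) can be solved in the leading-log approximation, keeping only the displayed \(y_{t}\)-enhanced terms and treating the proportionality as an equality for illustration:

```python
import math

yt = 0.95  # top Yukawa, illustrative value

def run_leading_log(C1, C3, mu_high=1000.0, mu_low=160.0):
    """Leading-log solution of Eq. (3.6) for the (1,3) flavor entries,
    approximating [Yu^dag Yu]_33 ~ yt^2 and dropping all other terms."""
    L = yt**2 * math.log(mu_low / mu_high) / (16 * math.pi**2)
    C1_low = C1 + (1.5 * C1 - 4.5 * C3) * L
    C3_low = C3 + (0.5 * C3 - 1.5 * C1) * L
    return C1_low, C3_low
```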
Finally we comment on the \(Q_{\phi ud}\) operator, which in principle runs into \(Q_{\phi u}\) and \(Q_{\phi d}\)[24]. These operators contribute to modified \(Z\) boson couplings with right-handed quarks. However, these terms in the SMEFT RGE are \(Y_{d}\) suppressed, rendering rare \(b\) decays an inefficient probe of \(Q_{\phi ud}\), when compared to bounds obtained from \(b\to u\ell\nu\).
#### 3.2.2 High-mass Drell-Yan
The constraints from measurements of Drell-Yan (DY) processes at high \(p_{T}\) have been shown to be highly complementary to various low energy flavor processes, in particular in the sectors of rare \(b\) decays \(b\to s\ell\ell\) and \(b\to d\ell\ell\)[73; 87; 88], charged-current \(b\to c\tau\nu\) decays [89; 90; 91; 92], lepton flavor violating transitions [93; 94], and in the charm sector [95; 96]. In this work, we study the complementarity between low-energy \(b\to u\ell\nu\) processes and high-mass DY.
High-mass DY processes are especially sensitive to contact interactions due to the favorable energy enhancement of the EFT amplitudes in the tails of high-\(p_{T}\) distributions,8 which can overcome the PDF suppression due to potential sea quarks in the initial state. An
important property of the dependence of high-\(p_{T}\) spectra on contact interactions is the lack of interference terms between different SMEFT operators. This means that the majority of the operators, barring the ones that interfere with the SM, will contribute to the spectra only constructively, and the extracted bounds will be free from unconstrained directions.
It is important to note that the high-mass DY tails effectively probe energies up to the TeV scale. The EFT validity arguments suggest that the bounds derived from the high-mass tails are applicable primarily to models with energy scales beyond the TeV scale. Consequently, the constraints imposed by the DY tails are sensitive to a smaller subset of models when compared to the constraints obtained from low-energy observables. See e.g. [73] for a recent discussion.
We consider the latest data on the differential spectra of both neutral (\(pp\to\ell\ell\)) and charged current (\(pp\to\ell\nu\)) DY processes with light leptons in final states, both from CMS [97; 98] and ATLAS [99; 100]. We have recently implemented both the theoretical predictions, including SMEFT at mass-dimension 6, and the latest data into flavio, see Ref. [73] for details.
#### 3.2.3 \(B^{0}\) meson mixing
The \(C^{(1)}_{\phi q}\) and \(C^{(3)}_{\phi q}\) WCs, generating flavor-changing modifications of the \(Z\) boson couplings with left-handed quarks, contribute to \(\Delta F=2\) processes at the tree-level, through two insertions of the modified vertex. However, it turns out that there is a numerically important contribution generated in the process of RG evolving these WCs from a high scale to a low scale. Namely, keeping only the terms relevant for this discussion, we have the following \(y_{t}\) enhanced terms in the SMEFT RGE of the four-quark operators [24]9
Footnote 9: Note that the argument holds irrespective of the quark mass alignment. In the down-diagonal basis, Eq. (3.7) directly produces the off-diagonal elements of the flavor tensors, as required by \(\Delta F=2\) processes. In the up-diagonal basis, the necessary off-diagonal entries are instead generated in the process of matching the SMEFT to the LEFT.
\[\begin{split}\left[\dot{C}^{(1)}_{qq}\right]_{prst}& \propto\frac{1}{2}[Y^{\dagger}_{u}Y_{u}]_{pr}\left[C^{(1)}_{\phi q} \right]_{st}+\frac{1}{2}[Y^{\dagger}_{u}Y_{u}]_{st}\left[C^{(1)}_{\phi q} \right]_{pr}\,,\\ \left[\dot{C}^{(3)}_{qq}\right]_{prst}&\propto- \frac{1}{2}[Y^{\dagger}_{u}Y_{u}]_{pr}\left[C^{(3)}_{\phi q}\right]_{st}-\frac {1}{2}[Y^{\dagger}_{u}Y_{u}]_{st}\left[C^{(3)}_{\phi q}\right]_{pr}\,.\end{split} \tag{3.7}\]
Note that there is an important difference between the tree-level contributions due to modified \(Z\) vertices, and the RGE-induced contributions to four-quark operators. Namely, in the case when \(C^{(1)}_{\phi q}=-C^{(3)}_{\phi q}\), the tree-level SMEFT to WET matching does not induce modified \(Z\)-boson vertices. On the contrary, the RGE-induced contributions in Eq. (3.7) contribute even for this combination of WCs, as can be seen from the sign difference between the singlet and triplet operator contributions in Eq. (3.7). As we will see in Section 3.3, the interplay of both effects is responsible for closing the contours from \(\Delta F=2\) processes.
In this work we are interested in the \(Q^{(1)}_{\phi q}\) and \(Q^{(3)}_{\phi q}\) operators with the first and third generation quarks. Thus we expect important constraints from the \(B^{0}-\bar{B}^{0}\) mixing observables, namely the mass difference \(\Delta M_{d}\) and the CP asymmetry \(S_{\psi K}\) from the interference between \(B^{0}-\bar{B}^{0}\) mixing and the decay \(B^{0}\to\psi K_{S}\). We consider the experimental measurements of these quantities, as determined by the latest HFLAV average [34].
#### 3.2.4 \(W\) and \(Z\) vertex corrections
The modified couplings of the \(W\) and \(Z\) bosons to fermions are constrained by on-shell vector boson production and decay processes at both the LEP and LHC colliders. The leptonic vertex corrections, arising from the operator \(Q^{(3)}_{\phi l}\), have been found to be bounded at the (sub)percent level, as discussed in [101]. Consequently, within the current precision, the contribution of this operator to the variation of \(b\to u\ell\nu\) transitions is negligible.
The bounds on quark vertex corrections derived from on-shell processes, specifically the observables associated with \(Z\) and \(W\) boson pole measurements, exhibit relatively weak constraints compared to the complementary constraints originating from low-energy observables investigated in this study (see, for instance, Eq. (4.10) in [101]). Furthermore, the evaluation of CKM-suppressed top quark decays at the LHC presents considerable challenges, as elaborated in [102]. Moreover, top quark flavor-changing neutral current decays (\(t\to Zq\)) and the corresponding \(pp\to t+Z\) production, both sensitive to the \(Q^{(3)}_{\phi q}\) operator, impose additional constraints that are also subdominant in comparison to those derived from low-energy observables discussed above [103; 104; 105]. For example, the ATLAS limit on \(\mathcal{B}(t\to Zu)<6.2\times 10^{-5}\) at 95% CL [103] translates as \(|\tilde{C}^{(3)}_{\phi q}|\lesssim 0.2\), which is two orders of magnitude worse than the scale in Figure 4.
### Results
In this subsection we present the results of a SMEFT analysis, focusing on \(b\to u\ell\nu\) processes and the important complementary constraints discussed in the previous subsection. All of the results presented here assume a minimalistic flavor structure of the SMEFT operators, aligned with a maximal expected effect on \(b\to u\ell\nu\). From the results we have presented and discussed in the WET (see Section 2), we pinpoint groups of SMEFT operators in which we expect interesting correlations to appear.
Firstly, we consider the SMEFT operators, which match onto the vector WET operators, since we have seen interesting correlations among them already in the WET (see Figure 1). For the right-handed vector operator \(O_{V_{R}}\) there is only a single SMEFT operator generating it at dimension 6, the \(Q_{\phi ud}\). As for the left-handed vector operator \(O_{V_{L}}\), we consider either the contact interaction operator \(Q^{(3)}_{lq}\), together with its related operator \(Q^{(1)}_{lq}\), or the vertex correction operator \(Q^{(3)}_{\phi q}\), together with its related operator \(Q^{(1)}_{\phi q}\). This leaves us with two groups of 3 SMEFT operators to explore, and to clearly present the results we resort to either profiling over one of the directions, or projecting onto a well-motivated plane in the 3-dimensional space.
**Group I** -- In Figure 3 we focus on the group of WCs \((\tilde{C}^{(1)}_{lq},\tilde{C}^{(3)}_{lq},\tilde{C}_{\phi ud})\). The left plot shows the constraints in the \(([\tilde{C}^{(1)}_{lq}]_{\ell\ell 13},[\tilde{C}^{(3)}_{lq}]_{\ell\ell 13})\) plane, profiling over the \([\tilde{C}_{\phi ud}]_{13}\) direction. We emphasize that the \(b\to u\ell\nu\) processes are the only ones considered here sensitive to this direction. In fact, allowing for the combination of \(C_{V_{L}}\) and \(C_{V_{R}}\) to be generated in this scenario, the exclusive semileptonic \(B\) decays are in tension with the SM at the level of \(\sim 2.5\sigma\), as already anticipated in Figure 1. This seems to be further supported by the \(b\to d\nu\nu\) constraint, exhibiting a slight tension with the SM at the level of \(\sim 1.5\sigma\). By far the most constraining processes in this plane are the FCNC \(b\to d\ell\ell\) processes, which are, however, insensitive to the direction \(\tilde{C}^{(1)}_{lq}=-\tilde{C}^{(3)}_{lq}\), leaving it unconstrained and resulting in a narrow, elongated global fit. The global fit exhibits a slight tension with the SM at the level of \(\sim 1.5\sigma\). The reason that the tension is weaker than with exclusive \(b\to u\ell\nu\) alone is that the global fit also includes the leptonic decays \(B\to\ell\nu\). As already anticipated in Figure 1, \([\tilde{C}_{\phi ud}]_{13}\) generates a universal right-handed vector operator, leading to important constraints not only from \(B\to\ell\nu\) but also from \(B\to\tau\nu\). Notably, the bounds obtained from Drell-Yan tails are not competitive in this scenario and are thus not shown in the plot.
The right plot in Figure 3 depicts the bounds obtained for the case aligned with the \(b\to d\ell\ell\) flat direction, where \(\tilde{C}^{(1)}_{lq}=-\tilde{C}^{(3)}_{lq}\), against \(\tilde{C}_{\phi ud}\). In this scenario, the stringent bounds from FCNC \(b\to d\ell\ell\) processes are absent, allowing more freedom for the global fit. The contours from \(b\to u\ell\nu\) processes, including both semileptonic and leptonic transitions with light leptons, again show tension with the SM point. This is, however, challenged by the \(B\to\tau\nu\) constraint. We also show the complementary constraints from \(b\to d\nu\nu\) processes and charged current high-mass DY tails, both of which only mildly impact the global fit. The global fit, mildly incompatible with the SM, will nevertheless be further tested by future measurements of these complementary processes.
**Group II** -- In Figure 4 we turn our attention to the group of WCs \((\tilde{C}^{(1)}_{\phi q},\tilde{C}^{(3)}_{\phi q},\tilde{C}_{\phi ud})\), constituting only left- and right-handed quark vertex corrections. In the left panel of the figure, the constraints on the \((\tilde{C}^{(1)}_{\phi q},\tilde{C}^{(3)}_{\phi q})\) parameter space are presented, profiling over \(\tilde{C}_{\phi ud}\). The \(b\to u\ell\nu\) contours exhibit a tension with the SM, for the same reason as already discussed in the previous case. However, the situation changes drastically in relation to the complementary constraints. As discussed in Subsection 3.2.1, both \(b\to d\ell\ell\) and \(b\to d\nu\nu\)
now constrain the same direction, with the former being vastly more constraining, rendering the latter irrelevant for the global fit. Notice also the misalignment between the \(b\to d\ell\ell\) contour and the direction of \(\tilde{C}^{(1)}_{\phi q}=-\tilde{C}^{(3)}_{\phi q}\), as anticipated from the RG effects discussed in Subsection 3.2.1. Furthermore, as anticipated in Subsection 3.2.3, neutral \(B\)-meson mixing constraints are important in this plane through two effects: two insertions of the modified \(Z\) vertices at the tree level, and RGE-induced contributions to the four-quark operators. The \(\Delta M_{d}\) constraint is more relaxed; however, the mixing-induced CP asymmetry \(S_{\psi K}\) is highly constraining, dominating and closing the global fit together with \(b\to d\ell\ell\). This is due to our choice of basis in Eq. (3.5), which imposes a substantial imaginary part on the coefficients \(C\) that, contrary to the \(b\to u\ell\nu\) sector, is not canceled in the matching to other sectors.
This assumption is relaxed in the upper right plot of Figure 4, which momentarily omits the use of the \(\tilde{C}\) basis (Eq. (3.5)). We choose the direction of \(C^{(1)}_{\phi q}=-C^{(3)}_{\phi q}\), and present contours in the complex plane of this scenario. We still profile only over the real \(\tilde{C}_{\phi ud}\) direction to reduce the computational complexity; this has no meaningful impact on the results. The \(b\to u\ell\nu\) contours prefer an imaginary part of the considered WCs, further supporting our choice of basis in Eq. (3.5), especially when considering the direction of \(\tilde{C}\) as overlaid on the plot. The complementary constraints from \(b\to d\ell\ell\) and \(\Delta M_{d}\) could in principle support the tension exhibited by the \(b\to u\ell\nu\) processes; note, however, that there is no region in which the latter would be compatible with the stringent constraint from the mixing-induced CP asymmetry \(S_{\psi K}\).
**Other operators** -- Lastly, we consider the scalar and tensor operators \(Q^{(1)}_{lequ}\), \(Q^{(3)}_{lequ}\) and \(Q_{ledq}\). Motivated by the discussion in the following Section 4, we group them so that
we first consider the pair (\(Q^{(1)}_{lequ}\), \(Q^{(3)}_{lequ}\)), and then consider \(Q_{ledq}\) separately. As we expect the bounds on these to be dominated by fully leptonic \(B\) decays, we also separate each scenario by the lepton flavor (see Section 2), relaxing our assumption of LFU.
In Figure 5 we study the operators \(Q^{(1)}_{lequ}\), \(Q^{(3)}_{lequ}\) contributing to \(b\to u\ell\nu\), assuming NP only in electrons (left plot) or only in muons (right plot). The \(Q^{(1)}_{lequ}\) and \(Q^{(3)}_{lequ}\) operators match onto the left-handed scalar and tensor operators in the WET, respectively. As these substantially mix under RGE, the fully leptonic \(B\to\ell\nu\) processes are sensitive not only to the scalar coefficient but also to the tensor one. Note that the constraints from leptonic decays are notably stronger in the muon channel compared to the electron channel, as already anticipated from Section 2. In order to close the flat direction appearing in the constraints from these processes, complementary constraints are needed. We overlay in both cases the constraints from exclusive semileptonic \(b\to u\ell\nu\) transitions, as well as measurements of high-mass DY tails in the charged current channels. The former currently presents a better constraint in the considered scenarios, however, the latter is almost as important, especially for the tensor operator in the electron channel. The muon channel of high-mass DY exhibits a small tension with respect to the SM in this scenario, resulting in degenerate minima appearing in the fit and comparatively worsening the constraint. Lastly, we comment that complementary constraints from leptonic charm decays \(D\to\ell\ell\) are currently not competitive when compared to fully leptonic \(B\to\ell\nu\) transitions in these planes, however, the situation might change in the future with more data [106].
Finally, in Figure 6 we consider the \(Q_{ledq}\) operator, assuming NP in either electrons (left) or muons (right). In these 1D scenarios, we present the \(\Delta\chi^{2}\) distributions, demonstrating the hierarchy between the relevant constraints. In both cases, the chirality-enhanced fully leptonic FCNC decays \(B\to\ell\ell\) are the most efficient probes. They are followed by the FCNC transitions \(b\to d\ell\ell\), which are comparable to the chirality-enhanced charged current decays \(B\to\ell\nu\). Note the difference between the solid and dashed orange lines in the right plot, corresponding to the positive and negative values of \(\tilde{C}_{ledq}\), respectively. The second minimum is possible for the positive values of the WC due to a cancellation with the SM, as already discussed in Section 2. The rest of the bounds are symmetric with respect to the sign of the WCs. The exclusive \(b\to u\ell\nu\) processes are last in sensitivity among the low-energy probes, followed by the complementary constraints from high-mass DY tails.
## 4 Models
There is a finite number of heavy new field representations under the SM gauge group in a perturbative UV model that integrate out to the dimension-6 SMEFT operators at the tree level [19]. Those contributions, which are leading both in the EFT and the loop expansion, are expected to dominate the phenomenology at low energies. In Subsection 4.1, we map out all such mediators and their minimal set of couplings needed to generate a sizeable effect in \(b\to u\ell\nu\) decays. Then, in Subsection 4.2, we elaborate on a specific example.
### Tree-level mediators
We understand the SMEFT as a low-energy limit of a generic UV theory. Focusing on perturbative extensions of the SM, the natural next step after considering the SMEFT is to study the tree-level models that can be integrated out and generate the SMEFT operators at mass dimension 6. It turns out there are only a finite number of scalar, fermion, and vector UV mediators one can consider under these conditions, as determined by the requirement
of linear couplings to the SM fields [19]. Moreover, the coupling structure of each mediator can impose non-trivial correlations in the SMEFT parameter space.
In this section, we explore such UV mediators, which can be integrated out in order to generate the operators relevant for \(b\to u\ell\nu\) transitions, as collected in Table 2 and discussed in detail in Section 3. We find that there are 4 scalar, 5 fermion, and 5 vector degrees of freedom to which the exclusive semileptonic \(b\to u\ell\nu\) decays can in principle be sensitive. We collect them in Table 3, where we follow the naming convention from Ref. [19], and provide their quantum numbers under the SM gauge group, as well as their spin. We group them by the operator they generate, so that a given mediator can appear more than once in the table.
The flavor structure of each mediator depends on its couplings with the SM fermions and is, in principle, completely free. One approach to lowering the number of free parameters is to endow them with a particular flavor assumption, see e.g. [107] for such a study under the MFV assumption. In this work, we take the minimalistic approach, in which we only consider the minimal set of couplings required to generate the correct flavor structure so as to contribute to \(b\to u\ell\nu\) at the tree level. As a further simplification, we assume in all cases LFU in light leptons, as most of the data for \(b\to u\ell\nu\) is reported as averages; see Section 2 for details. In the case of leptoquarks, this requires assuming them to be doublets under a leptonic \(SU(2)\) flavor symmetry, which furthermore forbids lepton flavor violating operators, see [73] for a recent example. In the case of mediators which couple to leptons diagonally, only the leptons themselves have to be assumed to transform under a (diagonal) lepton flavor symmetry. As we will see in the following, even under this minimalistic approach, there are interesting correlations generated between different SMEFT operators and, moreover, between observables from different sectors.
In Appendix B, we provide the minimal UV interaction Lagrangian of each mediator, and the corresponding tree-level matching onto the SMEFT WCs of mass dimension 6.
\begin{table}
\begin{tabular}{c|c}
**Operator** & **Mediator** \\ \hline
\([Q_{lq}^{(3)}]_{\ell\ell 13}\) & \(\omega_{1}\sim(\mathbf{3},\mathbf{1},-\frac{1}{3})_{S}\), \(\zeta\sim(\mathbf{3},\mathbf{3},-\frac{1}{3})_{S}\), \(\mathcal{W}\sim(\mathbf{1},\mathbf{3},0)_{V}\), \(\mathcal{U}_{2}\sim(\mathbf{3},\mathbf{1},\frac{2}{3})_{V}\), \(\mathcal{X}\sim(\mathbf{3},\mathbf{3},\frac{2}{3})_{V}\) \\ \hline
\([Q_{\phi q}^{(3)}]_{13}\) & \(U\sim(\mathbf{3},\mathbf{1},\frac{2}{3})_{F}\), \(D\sim(\mathbf{3},\mathbf{1},-\frac{1}{3})_{F}\), \(T_{1}\sim(\mathbf{3},\mathbf{3},-\frac{1}{3})_{F}\), \(T_{2}\sim(\mathbf{3},\mathbf{3},\frac{2}{3})_{F}\), \(\mathcal{W}\sim(\mathbf{1},\mathbf{3},0)_{V}\) \\ \hline
\([Q_{ledq}]_{\ell\ell 31}\) & \(\varphi\sim(\mathbf{1},\mathbf{2},\frac{1}{2})_{S}\), \(\mathcal{U}_{2}\sim(\mathbf{3},\mathbf{1},\frac{2}{3})_{V}\), \(\mathcal{Q}_{5}\sim(\mathbf{3},\mathbf{2},-\frac{5}{6})_{V}\) \\ \hline
\([Q_{lequ}^{(1)}]_{\ell\ell 31}\) & \(\varphi\sim(\mathbf{1},\mathbf{2},\frac{1}{2})_{S}\), \(\omega_{1}\sim(\mathbf{3},\mathbf{1},-\frac{1}{3})_{S}\), \(\Pi_{7}\sim(\mathbf{3},\mathbf{2},\frac{7}{6})_{S}\) \\ \hline
\([Q_{lequ}^{(3)}]_{\ell\ell 31}\) & \(\omega_{1}\sim(\mathbf{3},\mathbf{1},-\frac{1}{3})_{S}\), \(\Pi_{7}\sim(\mathbf{3},\mathbf{2},\frac{7}{6})_{S}\) \\ \hline
\([Q_{\phi ud}]_{13}\) & \(Q_{1}\sim(\mathbf{3},\mathbf{2},\frac{1}{6})_{F}\), \(\mathcal{B}_{1}\sim(\mathbf{1},\mathbf{1},1)_{V}\) \\
\end{tabular}
\end{table}
Table 3: List of all tree-level mediators generating operators contributing to exclusive \(b\to u\ell\nu\) processes in the SMEFT at mass dimension 6. The quantum numbers of the mediators are indicated as \((SU(3)_{c},SU(2)_{L},U(1)_{Y})\) with the subscript denoting the spin of the mediator.
We point out that each mediator requires two couplings to generate the operators relevant for \(b\to u\ell\nu\) transitions. This, in turn, means that many of them generate further operators beyond those of initial interest, such as four-quark and four-lepton operators. In Subsection 4.2, we will explore the implications of this proliferation of operators in a concrete model example with two mediators, \(\omega_{1}\) and \(Q_{1}\).
For the remainder of this subsection, we delve deeper into the correlations imposed by the UV mediators on the operators relevant to \(b\to u\ell\nu\) processes. By studying the matching conditions in Appendix B, we find that interesting directions are generated by the UV
mediators in the sets of WCs (\(C^{(1)}_{lq},C^{(3)}_{lq}\)), (\(C^{(1)}_{\phi q},C^{(3)}_{\phi q}\)), and (\(C^{(1)}_{lequ},C^{(3)}_{lequ}\)). In Figure 7, we illustrate these directions in each of these planes, with each line corresponding to a correlation as implied by the interaction Lagrangian of each respective UV mediator. Moreover, we overlay the \(1\sigma\) and \(2\sigma\) contours as obtained from global SMEFT fits in Section 3. These are the same as in Figures 3 and 4, whereas in the case of (\(C^{(1)}_{lequ},C^{(3)}_{lequ}\)) we redo the global fit from Figure 5 assuming lepton flavor universality, in line with our assumptions about the UV mediators.
In the upper left plot of Figure 7 we study the plane of (\(\tilde{C}^{(1)}_{lq},\tilde{C}^{(3)}_{lq}\)). As discussed in Section 3, the direction of \(\tilde{C}^{(1)}_{lq}=\tilde{C}^{(3)}_{lq}\) is tightly constrained from FCNC \(b\to d\ell\ell\) processes. Only in the perpendicular direction of \(\tilde{C}^{(1)}_{lq}=-\tilde{C}^{(3)}_{lq}\) do exclusive \(b\to u\ell\nu\) processes play a significant role on the global fit. When interpreting the figure in terms of the directions in the SMEFT parameter space imposed by various UV mediators, the scalar leptoquark \(\omega_{1}\) is singled out as the only viable mediator which can be probed by exclusive \(b\to u\ell\nu\) processes. All of the other UV mediators, which are shown on the plot, and which, in principle, generate operators that could contribute to these processes, are constrained significantly more through complementary measurements. Combining several mediators at the same time would allow for cancellations of new physics contributions in \(b\to d\ell\ell\), which would consequently align with the SM prediction. However, we find this scenario less desirable from our standpoint.
A similar conclusion can be made when considering the \((\tilde{C}^{(1)}_{\phi q},\tilde{C}^{(3)}_{\phi q})\) plane in the upper right plot of Figure 7. In this case, the global fit is not dominated by the exclusive \(b\to u\ell\nu\), as can be seen in the left panel of Figure 4 by comparing the global fit with the blue region. The least constrained single-mediator extension, not prone to significant constraints from complementary measurements, is the vector-like fermion \(U\), as the direction it implies aligns with the flat direction of the stringent \(b\to d\ell\ell\) processes, up to small RG corrections, as discussed in Section 3. Given the complementary bounds from neutral meson mixing, even this mediator cannot substantially contribute to exclusive \(b\to u\ell\nu\) decays.
The bottom panel of Figure 7 shows that all the mediators contributing to (\(\tilde{C}^{(1)}_{lequ},\tilde{C}^{(3)}_{lequ}\)) generate directions which are best constrained from fully leptonic \(B\to\ell\nu\) transitions, and hence the exclusive semileptonic \(b\to u\ell\nu\) transitions are inefficient probes of their effects. Similarly, the mediators from Table 3, generating \(Q_{ledq}\), will be constrained from the fully leptonic decays; see Eq. (3.4), Table 1, and Figure 6.
Finally, we acknowledge again that the above conclusions come under the assumption of a single UV mediator. Although this assumption might be unrealistic, it is clear that cancellations tuned to keep observables SM-like would be needed in a model of multiple mediators in order to make exclusive \(b\to u\ell\nu\) decays relevant. Barring such cancellations, we demonstrated that these decays could act as potential probes of only a handful of UV degrees of freedom that integrate out onto the dimension-6 SMEFT at the tree level. Besides \(\omega_{1}\), already singled out above, the only other such mediators are \(Q_{1}\) and \(\mathcal{B}_{1}\), since they match onto the operator \(Q_{\phi ud}\).
### Explicit example: \(\omega_{1}\) and \(Q_{1}\)
As a final example, consider extending the SM with two heavy fields: a scalar leptoquark \(\omega_{1}\sim(\mathbf{3},\mathbf{1},-\frac{1}{3})_{S}\) of mass \(M_{\omega_{1}}\) and a vector-like partner of the left-handed quark doublet \(Q_{1}\sim(\mathbf{3},\mathbf{2},\frac{1}{6})_{F}\) of mass \(M_{Q_{1}}\). For simplicity, we focus only on a minimal set of couplings required to produce effects in semileptonic \(b\to u\ell\nu\) transitions and investigate the consequences from complementary constraints (see e.g. [108] for a recent study of VLQs coupling to first and second generation quarks).
We focus on the interesting scenario, in which \(\omega_{1}\) is responsible for generating the operator combination \([Q_{lq}^{(3)}]_{\ell\ell 13}=-[Q_{lq}^{(1)}]_{\ell\ell 13}\) and \(Q_{1}\) generates the operator \([Q_{\phi ud}]_{13}\). The UV interaction Lagrangians for this scenario are given in Appendix B. There are no terms in the UV Lagrangian involving both \(\omega_{1}\) and \(Q_{1}\) simultaneously that would match onto SMEFT operators of dimension 6 at the tree level. Both \(\omega_{1}\) and \(Q_{1}\) have two couplings of interest: \((y_{\omega_{1}}^{ql})_{1}\) and \((y_{\omega_{1}}^{ql})_{3}\), denoting the couplings of the leptoquark to the left-handed first and third generation quark doublets and light leptons (universally), and \((\lambda_{Q_{1}}^{u})_{1}\) and \((\lambda_{Q_{1}}^{d})_{3}\), denoting the Yukawa couplings between the vector-like quark, the Higgs, and the first generation right-handed up quarks and third generation right-handed down quarks. This set of couplings, however, generates not only the operators required for \(b\to u\ell\nu\) transitions, but also a set of other SMEFT operators. For completeness, we list here the full matching conditions of the model onto the set of generated SMEFT operators:10
Footnote 10: We omit here the operators \([Q_{u\phi}]_{i1}\) and \([Q_{d\phi}]_{i3}\) that modify the quark interactions with the Higgs boson, as they are not expected to pose any competitive bounds.
\[\left[C_{lq}^{(1)}\right]_{\ell\ell 13}=-\left[C_{lq}^{(3)}\right]_{\ell\ell 13}=\frac{(y_{\omega_{1}}^{ql})_{1}^{*}(y_{\omega_{1}}^{ql})_{3}}{4M_{\omega_{1}}^{2}}\,, \tag{4.1}\]
\[\left[C_{lq}^{(1)}\right]_{\ell\ell 11}=-\left[C_{lq}^{(3)}\right]_{\ell\ell 11}=\frac{|(y_{\omega_{1}}^{ql})_{1}|^{2}}{4M_{\omega_{1}}^{2}}\,, \tag{4.2}\]
\[\left[C_{lq}^{(1)}\right]_{\ell\ell 33}=-\left[C_{lq}^{(3)}\right]_{\ell\ell 33}=\frac{|(y_{\omega_{1}}^{ql})_{3}|^{2}}{4M_{\omega_{1}}^{2}}\,, \tag{4.3}\]
\[\left[C_{\phi ud}\right]_{13}=\frac{(\lambda_{Q_{1}}^{d})_{3}(\lambda_{Q_{1}}^{u})_{1}^{*}}{M_{Q_{1}}^{2}}\,, \tag{4.4}\]
\[\left[C_{\phi u}\right]_{11}=-\frac{|(\lambda_{Q_{1}}^{u})_{1}|^{2}}{2M_{Q_{1}}^{2}}\,, \tag{4.5}\]
\[\left[C_{\phi d}\right]_{33}=\frac{|(\lambda_{Q_{1}}^{d})_{3}|^{2}}{2M_{Q_{1}}^{2}}\,. \tag{4.6}\]
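These matching conditions are straightforward to transcribe; the sketch below (couplings dimensionless, masses in GeV, dictionary keys purely illustrative labels) returns the dimensionful WCs of Eqs. (4.1) - (4.6).

```python
def model_matching(y1, y3, lam_u1, lam_d3, M_lq, M_vlq):
    """Tree-level matching of the (omega_1, Q_1) model, Eqs. (4.1)-(4.6)."""
    C_lq1_1113 = (y1.conjugate() * y3) / (4 * M_lq**2)
    return {
        'C_lq1_ll13': C_lq1_1113,
        'C_lq3_ll13': -C_lq1_1113,
        # for ll11 and ll33 the C_lq3 entries are minus these, cf. (4.2), (4.3)
        'C_lq1_ll11': abs(y1)**2 / (4 * M_lq**2),
        'C_lq1_ll33': abs(y3)**2 / (4 * M_lq**2),
        'C_phiud_13': lam_d3 * lam_u1.conjugate() / M_vlq**2,
        'C_phiu_11': -abs(lam_u1)**2 / (2 * M_vlq**2),
        'C_phid_33': abs(lam_d3)**2 / (2 * M_vlq**2),
    }
```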
In order for the WCs to carry a sufficient complex phase, in line with the assumption already presented in Eq. (3.5), we assume that one of the couplings for each mediator is complex,
and define
\[(y_{\omega_{1}}^{ql})_{3} =\frac{V_{ub}}{|V_{ub}|}(\tilde{y}_{\omega_{1}}^{ql})_{3}\,, \tag{4.7}\] \[(\lambda_{Q_{1}}^{d})_{3} =\frac{V_{ub}}{|V_{ub}|}(\tilde{\lambda}_{Q_{1}}^{d})_{3}\,, \tag{4.8}\]
where \(\tilde{y}\) and \(\tilde{\lambda}\) are real parameters. Note that this model does not contribute significantly to the loop processes that can be used to extract the CKM parameters [109; 110]. We therefore fix \(|V_{ub}|\) as elsewhere in the main part of this paper, which is consistent with Eq. (47) of [109].
We perform a study of the 4-dimensional model parameter space using the same numerical setup as described in Sections 2 and 3. For presentation purposes, we decide to show the results in the plane of the LQ couplings (\((y_{\omega_{1}}^{ql})_{1}/M_{\omega_{1}},(\tilde{y}_{\omega_{1}}^{ql})_{3}/M_{ \omega_{1}}\)) by profiling the global likelihood over the two parameters of the vector-like quark \(Q_{1}\). The individual likelihoods of the relevant processes are then evaluated using the same parameter values as for the global likelihood. The results are shown in Figure 8.
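Schematically, the profiling proceeds as follows; here `model_likelihood` is a hypothetical stand-in for the global log-likelihood of our setup, evaluated at the matched SMEFT point.

```python
import numpy as np
from scipy.optimize import minimize

def profiled_chi2(y1, y3, model_likelihood):
    """Profile -2*logL over the two Q_1 parameters at fixed LQ couplings."""
    res = minimize(
        lambda x: -2 * model_likelihood(y1, y3, lam_u1=x[0], lam_d3=x[1]),
        x0=np.zeros(2), method='Nelder-Mead')
    return res.fun  # minimum of -2*logL at this (y1, y3) point
```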
A few comments are in order. Firstly, the global fit of the model marginally improves the global \(\chi^{2}\) by \(\Delta\chi^{2}=-3.72\) with respect to the SM. Moreover, there are many complementary constraints participating in constraining the model parameter space. The \(b\to u\ell\nu\) processes show a slight tension with the SM point, as discussed already in Sections 2 and 3. However, notice that the high-mass DY tails, especially the charged current processes, provide severe constraints on the direction in which the LQ couples to valence quarks. Both directions in Figure 8 are also constrained by measurements of super-allowed \(\beta\)-decays, either at the tree level (the \([Q_{lq}^{(3)}]_{\ell\ell 11}\) operator), or through RG effects (\([Q_{lq}^{(3)}]_{\ell\ell 33}\) mixing into \([Q_{lq}^{(3)}]_{\ell\ell 11}\)). Lastly, the EWPT and leptonic \(\tau\) decay constraints, both showing slight tensions in participating observables (see Appendix in Ref. [70]), contribute to the pull of the global best fit region away from the SM point. Both \(\omega_{1}\) and \(Q_{1}\) can contribute to these processes, either through modified \(Z\) couplings with right-handed quarks (\(Q_{1}\)), or via further RGE-induced contributions (\(\omega_{1}\)) [113].

Figure 8: Fit to the data in the explicit model example (\(\omega_{1}\) plus \(Q_{1}\)). For details see Section 4.2.
It is instructive to contrast our findings with the direct search limits emerging from the LHC. These limits are predominantly derived from the QCD-induced pair production of scalar leptoquarks, which decay chiefly to third-generation quarks and light leptons, with \(M_{\omega_{1}}\gtrsim 1.4\,\mathrm{TeV}\)[114]. In summary, while the complementary constraints diminish the significance of the \(B\to V\ell\nu\) tension, it remains noteworthy that there exists a parameter space where a leptoquark, consistent with direct search results and exhibiting \(\mathcal{O}(1)\) couplings to third-generation quarks and substantially smaller couplings with light quarks, renders the exclusive \(b\to u\ell\nu\) decays relevant. Lastly, we note that both of the vector-like quark couplings are perturbative in the whole plot range of Figure 8. At the best fit point, they take the values of \(((\lambda_{Q_{1}}^{u})_{1}/M_{Q_{1}},(\tilde{\lambda}_{Q_{1}}^{d})_{3}/M_{Q_{1}})=(0.13,0.08)\) TeV\({}^{-1}\), comfortably allowing for a VLQ of mass \(M_{Q_{1}}\gtrsim 1.2\,\mathrm{TeV}\) with perturbative couplings, in compliance with direct searches [115].
## 5 Conclusions
The decays of \(B\) hadrons provide an exceptional environment for probing the intricacies of particle physics. This research program is notably promising, given the breadth of experimental activity currently underway and anticipated for this decade. It is further bolstered by an enhanced understanding of strong dynamics, attributable to contemporary advancements in lattice QCD. The ultimate goal of this precision frontier is to rigorously examine the SM and possibly uncover NP effects. The pressing question arises: what kind of new physics can we anticipate to investigate? The bottom-up EFT approach provides us with a systematic framework for addressing this question for arbitrary short-distance new physics.
This work investigates the NP potential of exclusive \(b\to u\ell\nu\) decays, commonly used to extract the SM input parameter \(|V_{ub}|\). We start out in Section 2, where we perform a comprehensive analysis in the context of the WET within the flavio framework, building the likelihood from the experimental data and theoretical predictions, and deriving the optimal parameter space for the WCs. Our primary findings are encapsulated in Figures 1 and 2, and in Table 1. Most operators are constrained by semileptonic decays, with the exception of the pseudoscalar operator, which is dominantly restricted by the fully leptonic decay. While many preferred regions encompass the SM prediction, the axial vector operator slightly prefers a non-zero value due to the tension in the \(B\to V\ell\nu\) channel, where \(V=(\rho,\omega)\). To understand the role of \(|V_{ub}|\) in the presence of such NP, we fit \(|V_{ub}|\) concurrently with the axial vector operator, as illustrated in Figure 9.
What are the implications of the WET analysis on short-distance physics? To shed light on this query, we advance our examination by employing the SMEFT in Section 3.
This framework inevitably predicts significant correlations with other processes. Our objective within this context was to conduct comprehensive SMEFT fits, presuming a complete set of dimension-6 operators at a high-energy scale (set to \(\Lambda=1\,\text{TeV}\)), which offer substantial contributions to exclusive \(b\to u\ell\nu\) decays. This set is augmented by closely related operators that aid in attenuating the complementary bounds or that necessarily come along in a tree-level UV completion. The predictions of the SMEFT, intrinsically caused by the \(SU(2)_{L}\) gauge symmetry and renormalization group evolution down to low energies, impact rare neutral-current \(b\) decays, \(B^{0}-\bar{B}^{0}\) mixing, high-mass Drell-Yan production, and \(W/Z\) vertex corrections. These influences collectively drive the global fit results, succinctly presented in Figures 3, 4, 5 and 6. The conclusive finding of this examination indicates that exclusive \(b\to u\ell\nu\) decays are instrumental in probing the set of operators defined as \(([Q_{lq}^{(1)}]_{\ell\ell 13}=-[Q_{lq}^{(3)}]_{\ell\ell 13},[Q_{\phi ud}]_{13})\), a result that is depicted in Figure 3 (right).
Perturbative UV completions that match onto the dimension-6 SMEFT operators at the tree level can be systematically classified based on the gauge and Lorentz representation of the mediating heavy fields. A comprehensive list of all such cases that contribute at the leading order to exclusive \(b\to u\ell\nu\) decays is presented in Table 3 and expanded upon in Appendix B. A particularly noteworthy result is visually encapsulated in Figure 7, elegantly portraying the correlations imposed by diverse heavy field mediators. Upon close scrutiny, it can be concluded that, absent any conspiring cancellations maintaining SM-like observables, promising cases are represented by \(Q_{1}\) and \(\mathcal{B}_{1}\), which correspond to the operator \(Q_{\phi ud}\), along with \(\omega_{1}\) generating the operator \(Q_{lq}^{(1)}=-Q_{lq}^{(3)}\). In Subsection 4.2, we undertake an in-depth analysis of two such mediators simultaneously present, restricting our focus to a minimal set of couplings essential for the \(b\to u\ell\nu\) decay process; see Figure 8. The apparent difficulty in addressing the \(B\to V\ell\nu\) tension, given the complementary constraints, underscores the necessity for further advancements in lattice QCD computations. In this regard, pioneering efforts, such as those represented by the work detailed in [116], are of substantial importance and provide a promising path forward to resolving this puzzle.
Future improvements of this study could involve discarding the assumption of lepton flavor universality in exclusive \(b\to u\ell\nu\) decays, a constraint currently necessitated by the scope of available data. We fervently advocate for experimental collaborations to conduct measurements of theoretically pristine \(\mu/e\) ratios, in analogy with the proposals [117, 118, 119, 120]. Additionally, it could be intriguing to go further than the current scope, exploring beyond tree-level matching and beyond the dimension-6 operators. However, we anticipate this would lead to tighter constraints. This is because any new heavy mediators brought into the picture would need to either be lighter or interact more strongly in order to make up for the effects of suppression. Finally, investigating correlations with the \(b\to c\ell\nu\) decays, predicated by several motivated NP flavor structures within the quark sector [121, 122], could significantly enhance our understanding of the relevance of these decays in the pursuit of physics beyond the SM.
## Acknowledgements
We thank Meril Reboud for the useful comparison with [33]. This work received funding from the Swiss National Science Foundation (SNF) through the Eccellenza Professorial Fellowship "Flavor Physics at the High Energy Frontier" project number 186866. AG is also partially supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreement 833280 (FLAY).
## Appendix A \(|V_{ub}|\) determination
In this Appendix, we discuss the issue of extracting the CKM matrix parameter \(|V_{ub}|\) together with potential short-distance NP effects, in light of the tension between \(B\to\pi\ell\nu\) and \(B\to V\ell\nu\) with \(V=(\rho,\omega)\), as discussed in Section 2. In Figure 1 we have shown that the tension appears in the axial-vector direction of the WET WC parameter space, by assuming no NP contributions to \(|V_{ub}|\). Here we argue that this is a valid assumption.
In Figure 9 we perform a combined fit of \(|V_{ub}|\) and the axial-vector WET operator, under three scenarios, as motivated by the SMEFT (see Section 3). We consider the constraints in these planes from \(B\to\pi\ell\nu\), \(B\to V\ell\nu\), as well as fully leptonic modes \(B\to\mu\nu\) and \(B\to\tau\nu\). The three scenarios presented in Figure 9 correspond to the assumptions of no NP in the \(\tau\) WCs (left plot), universal contributions to all leptons only in the right-handed vector WC (right plot), or universal contributions in both the left- and right-handed vector WCs (bottom plot). Note that in all three cases \(B\to\pi\ell\nu\) is by far the most constraining process in the \(|V_{ub}|\) direction. The global fits, although changing slightly between the three scenarios, remain dominated by \(B\to\pi\ell\nu\), which is completely insensitive to the axial-vector WC direction. We conclude that the phenomenological analysis presented in this paper, assuming the value of \(|V_{ub}|\) consistent with the global fits using \(B\to\pi\ell\nu\) only, is a valid and reasonable approach.
Lastly, in the case of the vector, scalar and tensor WCs, the bounds on which are discussed in the WET in Section 2, \(B\to\pi\ell\nu\) can play an important role in constraining the NP effects. In the scope of our paper, we can assume that in these cases the CKM matrix parameters are fixed by \(\Delta F=2\) processes when those are unpolluted by NP effects. Of course, for a global SMEFT analysis, one should be cautious and perform a global fit of the SM input parameters together with the WCs, as argued in Ref. [123].
## Appendix B Lagrangians for the tree-level mediators
This Appendix presents the single mediator Lagrangians for the tree-level models listed in Table 3. We focus on cases where only the minimal set of non-zero couplings is present, resulting in the generation of a single effective operator contributing to \(b\to u\ell\nu\). The Lagrangians considered here are restricted to renormalizable terms, with the kinetic and mass terms omitted for brevity.
Additionally, we provide the SMEFT Lagrangians obtained after integrating out the heavy mediators. It is important to note that each SMEFT Lagrangian implicitly includes
the appropriate Hermitian conjugate terms to ensure that the final Lagrangian takes the form of Eq. (3.1). The Lagrangians and the matching to the SMEFT framework are based on the work presented in [19].
Furthermore, we note that the effective operators associated with lepton flavor indices exhibit lepton flavor universality for electrons and muons, as discussed in Section 4. Finally, the notation \(C_{ijkl}Q_{ijkl}+(ijkk+ijll)\) indicates that the Lagrangian also includes terms with the flavor indices specified within the parentheses, with appropriate substitutions made for both the operator and the WC.
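As a practical illustration of how the matching relations listed below are used, the following minimal sketch evaluates one of them numerically: the \([Q_{ledq}]_{\ell\ell 31}\) coefficient generated by the scalar doublet \(\varphi\), Eq. (B.2). The function name and the numerical couplings and mass are illustrative assumptions, not extracted values.

```python
# Evaluate [C_ledq]_{ll31} = (y_phi^d)_31 (y_phi^e)^* / M_phi^2  (cf. Eq. B.2).
# Inputs in this example are placeholders; the result is in TeV^-2 when the
# mediator mass is given in TeV.
def c_ledq(y_d_31: complex, y_e: complex, m_phi_tev: float) -> complex:
    return y_d_31 * y_e.conjugate() / m_phi_tev**2

print(c_ledq(0.5, 0.3 + 0.1j, 2.0))  # a 2 TeV scalar with O(0.1-0.5) couplings
```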
### Scalars
\(\varphi\sim(\mathbf{1},\mathbf{2},\frac{1}{2})_{S}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(y_{\varphi}^{d})_{31}\ \varphi^{\dagger}\bar{d}_{3}q_{1}+(y_{\varphi}^{e})\ \varphi^{\dagger}\bar{e}_{\ell}l_{\ell}+\mathrm{h.c.}\] (B.1)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\mathrm{SMEFT}}\supset&\frac{(y_{\varphi}^{d})_{31}(y_{\varphi}^{e})^{*}}{M_{\varphi}^{2}}[Q_{ledq}]_{\ell\ell 31}\\ &-\frac{|(y_{\varphi}^{d})_{31}|^{2}}{6M_{\varphi}^{2}}\left([Q_{qd}^{(1)}]_{1133}+6[Q_{qd}^{(8)}]_{1133}\right)-\frac{|(y_{\varphi}^{e})|^{2}}{2M_{\varphi}^{2}}[Q_{le}]_{\ell\ell^{\prime}\ell}\end{split}\] (B.2)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(y_{\varphi}^{u})_{31}\ \varphi^{\dagger}i\sigma_{2}q_{3}^{T}u_{1}+(y_{\varphi}^{e})\ \varphi^{\dagger}\bar{e}_{\ell}l_{\ell}+\mathrm{h.c.}\] (B.3)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\mathrm{SMEFT}}\supset&\frac{(y_{\varphi}^{u})_{31}(y_{\varphi}^{e})_{\ell\ell}^{*}}{M_{\varphi}^{2}}[Q_{lequ}^{(1)}]_{\ell\ell 31}\\ &-\frac{|(y_{\varphi}^{u})_{31}|^{2}}{6M_{\varphi}^{2}}\left([Q_{qu}^{(1)}]_{3311}+6[Q_{qu}^{(8)}]_{3311}\right)-\frac{|(y_{\varphi}^{e})|^{2}}{2M_{\varphi}^{2}}[Q_{le}]_{\ell\ell^{\prime}\ell}\end{split}\] (B.4)
\(\omega_{1}\sim(\mathbf{3},\mathbf{1},-\frac{1}{3})_{S}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(y_{\omega_{1}}^{ql})_{1}\ \omega_{1\ell}^{\dagger}\bar{q}_{1}^{c}i\sigma_{2}l_{\ell}+(y_{\omega_{1}}^{ql})_{3}\ \omega_{1\ell}^{\dagger}\bar{q}_{3}^{c}i\sigma_{2}l_{\ell}+\mathrm{h.c.}\] (B.5)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\mathrm{SMEFT}}\supset&\frac{(y_{\omega_{1}}^{ql})_{1}^{*}(y_{\omega_{1}}^{ql})_{3}}{4M_{\omega_{1}}^{2}}\left([Q_{lq}^{(1)}]_{\ell\ell 13}-[Q_{lq}^{(3)}]_{\ell\ell 13}\right)+(\ell\ell 11+\ell\ell 33)\end{split}\] (B.6)
* \(Q_{lequ}^{(1)}\) and \(Q_{lequ}^{(3)}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(y^{ql}_{\omega_{1}})_{3}\ \omega_{1\ell}^{\dagger}\bar{q}^{c}_{3}i\sigma_{2}l_{\ell}+(y^{eu}_{\omega_{1}})_{1}\ \omega_{1\ell}^{\dagger}\bar{e}^{c}_{\ell}u_{1}+\text{h.c.}\] (B.7) * Matching to SMEFT: \[\begin{split}\mathcal{L}_{\text{SMEFT}}\supset& \frac{(y^{eu}_{\omega_{1}})_{1}(y^{ql}_{\omega_{1}})_{3}^{*}}{8M_{\omega_{1}}^{2}}\left(4[Q^{(1)}_{lequ}]_{\ell\ell 31}-[Q^{(3)}_{lequ}]_{\ell\ell 31}\right)\\ &+\frac{|(y^{ql}_{\omega_{1}})_{3}|^{2}}{4M_{\omega_{1}}^{2}}\left([Q^{(1)}_{lq}]_{\ell\ell 33}-[Q^{(3)}_{lq}]_{\ell\ell 33}\right)+\frac{|(y^{eu}_{\omega_{1}})_{1}|^{2}}{2M_{\omega_{1}}^{2}}[Q_{eu}]_{\ell\ell 11}\end{split}\] (B.8)
\(\Pi_{7}\sim(\mathbf{3},\mathbf{2},\frac{7}{6})_{S}\)
* \(Q^{(1)}_{lequ}\) and \(Q^{(3)}_{lequ}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(y^{lu}_{\Pi_{7}})_{1}\ \Pi^{\dagger}_{7\ell}i\sigma_{2}\bar{l}^{T}_{\ell}u_{1}+(y^{eq}_{\Pi_{7}})_{3}\ \Pi^{\dagger}_{7\ell}\bar{e}_{\ell}q_{3}+\text{h.c.}\] (B.9)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\text{SMEFT}}\supset& \frac{(y^{eq}_{\Pi_{7}})_{3}^{*}(y^{lu}_{\Pi_{7}})_{1}}{8M_{\Pi_{7}}^{2}}\left(4[Q^{(1)}_{lequ}]_{\ell\ell 31}+[Q^{(3)}_{lequ}]_{\ell\ell 31}\right)\\ &-\frac{|(y^{lu}_{\Pi_{7}})_{1}|^{2}}{2M_{\Pi_{7}}^{2}}[Q_{lu}]_{\ell\ell 11}-\frac{|(y^{eq}_{\Pi_{7}})_{3}|^{2}}{2M_{\Pi_{7}}^{2}}[Q_{qe}]_{33\ell\ell}\end{split}\] (B.10)
\(\zeta\sim(\mathbf{3},\mathbf{3},-\frac{1}{3})_{S}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(y^{ql}_{\zeta})_{1}\ \zeta^{a\dagger}_{\ell}\bar{q}^{c}_{1}i\sigma_{2}\sigma^{a}l_{\ell}+(y^{ql}_{\zeta})_{3}\ \zeta^{a\dagger}_{\ell}\bar{q}^{c}_{3}i\sigma_{2}\sigma^{a}l_{\ell}+\text{h.c.}\] (B.11)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\text{SMEFT}}\supset& \frac{(y^{ql}_{\zeta})_{1}^{*}(y^{ql}_{\zeta})_{3}}{4M_{\zeta}^{2}}\left(3[Q^{(1)}_{lq}]_{\ell\ell 13}+[Q^{(3)}_{lq}]_{\ell\ell 13}\right)+(\ell\ell 11+\ell\ell 33)\end{split}\] (B.12)
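Since every coefficient above scales as couplings over the mediator mass squared, an experimental upper bound \(|C|<C_{\max}\) on a WC translates directly into a lower bound on the mediator mass, \(M>\sqrt{|g_{1}g_{2}|/C_{\max}}\); this is the same scaling that makes lighter or more strongly coupled mediators necessary to overcome additional suppressions. The sketch below performs this translation with a placeholder bound; the numbers are assumptions for illustration only.

```python
# Translate a hypothetical WC bound (in TeV^-2) into a mediator mass reach.
import math

def mass_reach_tev(g1: float, g2: float, c_max: float) -> float:
    """Lower bound on M (TeV) for C = g1*g2/M^2 with |C| < c_max."""
    return math.sqrt(abs(g1 * g2) / c_max)

# order-one couplings and an assumed bound of 0.05 TeV^-2:
print(f"M > {mass_reach_tev(1.0, 1.0, 0.05):.1f} TeV")
```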
### Vector-like fermions
\(U\sim(\mathbf{3},\mathbf{1},\frac{2}{3})_{F}\)
* \(Q^{(3)}_{\phi q}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(\lambda_{U})_{1}\ \bar{U}_{R}\tilde{\phi}^{\dagger}q_{1}+(\lambda_{U})_{3}\ \bar{U}_{R}\tilde{\phi}^{\dagger}q_{3}+\text{h.c.}\] (B.13) * Matching to SMEFT: \[\mathcal{L}_{\text{SMEFT}}\supset \frac{(\lambda_{U})_{3}(\lambda_{U})_{1}^{*}}{4M_{U}^{2}}\left([Q_{\phi q}^{(1)}]_{13}-[Q_{\phi q}^{(3)}]_{13}\right)+(11+33)\] \[+\left(\frac{\hat{y}_{i1}^{u*}|(\lambda_{U})_{1}|^{2}}{2M_{U}^{2}}+\frac{\hat{y}_{i3}^{u*}(\lambda_{U})_{3}(\lambda_{U})_{1}^{*}}{2M_{U}^{2}}\right)[Q_{u\phi}]_{1i}\] (B.14) \[+\left(\frac{\hat{y}_{i3}^{u*}|(\lambda_{U})_{3}|^{2}}{2M_{U}^{2}}+\frac{\hat{y}_{i1}^{u*}(\lambda_{U})_{1}(\lambda_{U})_{3}^{*}}{2M_{U}^{2}}\right)[Q_{u\phi}]_{3i}\] with the index \(i\) running over all three generations.

\(D\sim(\mathbf{3},\mathbf{1},-\frac{1}{3})_{F}\)
* \(Q_{\phi q}^{(3)}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(\lambda_{D})_{1}\ \bar{D}_{R}\phi^{\dagger}q_{L1}+(\lambda_{D})_{3}\ \bar{D}_{R}\phi^{\dagger}q_{L3}+\text{h.c.}\] (B.15)
* Matching to SMEFT: \[\mathcal{L}_{\text{SMEFT}}\supset -\frac{(\lambda_{D})_{3}(\lambda_{D})_{1}^{*}}{4M_{D}^{2}}\left([Q_{\phi q}^{(1)}]_{13}+[Q_{\phi q}^{(3)}]_{13}\right)+(11+33)\] \[+\left(\frac{\hat{y}_{i1}^{d*}|(\lambda_{D})_{1}|^{2}}{2M_{D}^{2}}+\frac{\hat{y}_{i3}^{d*}(\lambda_{D})_{3}(\lambda_{D})_{1}^{*}}{2M_{D}^{2}}\right)[Q_{d\phi}]_{1i}\] (B.16) \[+\left(\frac{\hat{y}_{i3}^{d*}|(\lambda_{D})_{3}|^{2}}{2M_{D}^{2}}+\frac{\hat{y}_{i1}^{d*}(\lambda_{D})_{1}(\lambda_{D})_{3}^{*}}{2M_{D}^{2}}\right)[Q_{d\phi}]_{3i}\] with the index \(i\) running over all three generations.
\(Q_{1}\sim(\mathbf{3},\mathbf{2},\frac{1}{6})_{F}\)
* \(Q_{\phi ud}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(\lambda_{Q_{1}}^{u})_{1}\ \bar{Q}_{1L}\tilde{\phi}u_{1}+(\lambda_{Q_{1}}^{d})_{3}\ \bar{Q}_{1L}\phi d_{3}+\text{h.c.}\] (B.17)
* Matching to SMEFT:
\[\begin{split}\mathcal{L}_{\text{SMEFT}}\supset&\frac{(\lambda_{Q_{1}}^{d})_{3}(\lambda_{Q_{1}}^{u})_{1}^{*}}{M_{Q_{1}}^{2}}[Q_{\phi ud}]_{13}-\frac{|(\lambda_{Q_{1}}^{u})_{1}|^{2}}{2M_{Q_{1}}^{2}}[Q_{\phi u}]_{11}+\frac{|(\lambda_{Q_{1}}^{d})_{3}|^{2}}{2M_{Q_{1}}^{2}}[Q_{\phi d}]_{33}\\ &+\frac{\hat{y}_{i1}^{u*}|(\lambda_{Q_{1}}^{u})_{1}|^{2}}{2M_{Q_{1}}^{2}}[Q_{u\phi}]_{i1}+\frac{\hat{y}_{3i}^{d*}|(\lambda_{Q_{1}}^{d})_{3}|^{2}}{2M_{Q_{1}}^{2}}[Q_{d\phi}]_{i3}\end{split} \tag{B.18}\]
with the index \(i\) running over all three generations.
\(T_{1}\sim(\mathbf{3},\mathbf{3},-\frac{1}{3})_{F}\)
* \(Q_{\phi q}^{(3)}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset\frac{1}{2}(\lambda_{T_{1}})_{1}\ \bar{T}_{1R}^{a}\phi^{\dagger}\sigma^{a}q_{1}+\frac{1}{2}(\lambda_{T_{1}})_{3}\ \bar{T}_{1R}^{a}\phi^{\dagger}\sigma^{a}q_{3}+\text{h.c.}\] (B.19)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\text{SMEFT}}\supset& -\frac{(\lambda_{T_{1}})_{3}(\lambda_{T_{1}})_{1}^{*}}{16M_{T_{1}}^{2}}\left(3[Q_{\phi q}^{(1)}]_{13}-[Q_{\phi q}^{(3)}]_{13}\right)+(11+33)\\ &+\left(\frac{\hat{y}_{i1}^{d*}|(\lambda_{T_{1}})_{1}|^{2}}{8M_{T_{1}}^{2}}+\frac{\hat{y}_{i3}^{d*}(\lambda_{T_{1}})_{3}(\lambda_{T_{1}})_{1}^{*}}{8M_{T_{1}}^{2}}\right)[Q_{d\phi}]_{1i}\\ &+\left(\frac{\hat{y}_{i3}^{d*}|(\lambda_{T_{1}})_{3}|^{2}}{8M_{T_{1}}^{2}}+\frac{\hat{y}_{i1}^{d*}(\lambda_{T_{1}})_{1}(\lambda_{T_{1}})_{3}^{*}}{8M_{T_{1}}^{2}}\right)[Q_{d\phi}]_{3i}\\ &+\left(\frac{\hat{y}_{i1}^{u*}|(\lambda_{T_{1}})_{1}|^{2}}{4M_{T_{1}}^{2}}+\frac{\hat{y}_{i3}^{u*}(\lambda_{T_{1}})_{3}(\lambda_{T_{1}})_{1}^{*}}{4M_{T_{1}}^{2}}\right)[Q_{u\phi}]_{1i}\\ &+\left(\frac{\hat{y}_{i3}^{u*}|(\lambda_{T_{1}})_{3}|^{2}}{4M_{T_{1}}^{2}}+\frac{\hat{y}_{i1}^{u*}(\lambda_{T_{1}})_{1}(\lambda_{T_{1}})_{3}^{*}}{4M_{T_{1}}^{2}}\right)[Q_{u\phi}]_{3i}\end{split}\] (B.20) with the index \(i\) running over all three generations.
\(T_{2}\sim(\mathbf{3},\mathbf{3},\frac{2}{3})_{F}\)
* \(Q_{\phi q}^{(3)}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset\frac{1}{2}(\lambda_{T_{2}})_{1}\ \bar{T}_{2R}^{a}\tilde{\phi}^{\dagger}\sigma^{a}q_{1}+\frac{1}{2}(\lambda_{T_{2}})_{3}\ \bar{T}_{2R}^{a}\tilde{\phi}^{\dagger}\sigma^{a}q_{3}+\text{h.c.}\] (B.21)
* Matching to SMEFT:
\[\mathcal{L}_{\text{SMEFT}}\supset \frac{(\lambda_{T_{2}})_{3}(\lambda_{T_{2}})_{1}^{*}}{16M_{T_{2}}^{2}}\left(3[Q_{\phi q}^{(1)}]_{13}+[Q_{\phi q}^{(3)}]_{13}\right)+(11+33)\] \[+\left(\frac{\hat{y}_{i1}^{d\ast}|(\lambda_{T_{2}})_{1}|^{2}}{4M_{T_{2}}^{2}}+\frac{\hat{y}_{i3}^{d\ast}(\lambda_{T_{2}})_{3}(\lambda_{T_{2}})_{1}^{\ast}}{4M_{T_{2}}^{2}}\right)[Q_{d\phi}]_{1i}\] \[+\left(\frac{\hat{y}_{i3}^{d\ast}|(\lambda_{T_{2}})_{3}|^{2}}{4M_{T_{2}}^{2}}+\frac{\hat{y}_{i1}^{d\ast}(\lambda_{T_{2}})_{1}(\lambda_{T_{2}})_{3}^{\ast}}{4M_{T_{2}}^{2}}\right)[Q_{d\phi}]_{3i}\] (B.22) \[+\left(\frac{\hat{y}_{i1}^{u\ast}|(\lambda_{T_{2}})_{1}|^{2}}{8M_{T_{2}}^{2}}+\frac{\hat{y}_{i3}^{u\ast}(\lambda_{T_{2}})_{3}(\lambda_{T_{2}})_{1}^{\ast}}{8M_{T_{2}}^{2}}\right)[Q_{u\phi}]_{1i}\] \[+\left(\frac{\hat{y}_{i3}^{u\ast}|(\lambda_{T_{2}})_{3}|^{2}}{8M_{T_{2}}^{2}}+\frac{\hat{y}_{i1}^{u\ast}(\lambda_{T_{2}})_{1}(\lambda_{T_{2}})_{3}^{\ast}}{8M_{T_{2}}^{2}}\right)[Q_{u\phi}]_{3i}\] with the index \(i\) running over all three generations.
### Vectors
\(\mathcal{B}_{1}\sim(\mathbf{1},\mathbf{1},1)_{V}\)
* \(Q_{\phi ud}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(g_{\mathcal{B}_{1}}^{du})_{31}\ \mathcal{B}_{1}^{\mu \dagger}\bar{d}_{3}\gamma_{\mu}u_{1}+(g_{\mathcal{B}_{1}}^{\phi})\ \mathcal{B}_{1}^{\mu \dagger}iD_{\mu}\phi^{T}i\sigma_{2}\phi+\text{h.c.}\] (B.23)
* Matching to SMEFT: \[\mathcal{L}_{\text{SMEFT}}\supset -\frac{(g_{\mathcal{B}_{1}}^{\phi})(g_{\mathcal{B}_{1}}^{du})_{31}^{\ast}}{M_{\mathcal{B}_{1}}^{2}}[Q_{\phi ud}]_{13}-\frac{|(g_{\mathcal{B}_{1}}^{du})_{31}|^{2}}{3M_{\mathcal{B}_{1}}^{2}}\left([Q_{ud}^{(1)}]_{1133}+6[Q_{ud}^{(8)}]_{1133}\right)\] \[-\frac{\hat{y}_{ji}^{u\ast}|g_{\mathcal{B}_{1}}^{\phi}|^{2}}{2M_{\mathcal{B}_{1}}^{2}}[Q_{u\phi}]_{ij}-\frac{\hat{y}_{ji}^{d\ast}|g_{\mathcal{B}_{1}}^{\phi}|^{2}}{2M_{\mathcal{B}_{1}}^{2}}[Q_{d\phi}]_{ij}-\frac{\hat{y}_{ji}^{e\ast}|g_{\mathcal{B}_{1}}^{\phi}|^{2}}{2M_{\mathcal{B}_{1}}^{2}}[Q_{e\phi}]_{ij}\] \[+\frac{|g_{\mathcal{B}_{1}}^{\phi}|^{2}}{M_{\mathcal{B}_{1}}^{2}}Q_{\phi D}-\frac{|g_{\mathcal{B}_{1}}^{\phi}|^{2}}{2M_{\mathcal{B}_{1}}^{2}}Q_{\phi\Box}-\frac{2\hat{\lambda}_{\phi}|g_{\mathcal{B}_{1}}^{\phi}|^{2}}{M_{\mathcal{B}_{1}}^{2}}Q_{\phi}+\frac{\mu_{\phi}^{2}|g_{\mathcal{B}_{1}}^{\phi}|^{2}}{M_{\mathcal{B}_{1}}^{2}}Q_{\phi 4}\] (B.24) with indices \(i,j\) running over all three generations, \(\hat{\lambda}_{\phi}=\lambda_{\phi}-C_{\phi 4}\), \(\lambda_{\phi}\) and \(\mu_{\phi}\) being the parameters of the Higgs potential and \(C_{\phi 4}\) being the coefficient of the operator \(Q_{\phi 4}\) given in the last term.

\(\mathcal{W}\sim(\mathbf{1},\mathbf{3},0)_{V}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset\frac{1}{2}(g_{\mathcal{W}}^{q})_{13}\ \mathcal{W}^{\mu a} \bar{q}_{1}\sigma^{a}\gamma_{\mu}q_{3}+\frac{1}{2}(g_{\mathcal{W}}^{l})\ \mathcal{W}^{\mu a}\bar{l}_{\ell}\sigma^{a}\gamma_{\mu}l_{\ell}\] (B.25)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\text{SMEFT}}\supset&-\frac{(g_{\mathcal{W}}^{q})_{13}(g_{\mathcal{W}}^{l})}{4M_{\mathcal{W}}^{2}}[Q_{lq}^{(3)}]_{\ell\ell 13}-\frac{(g_{\mathcal{W}}^{q})_{13}^{2}}{8M_{\mathcal{W}}^{2}}[Q_{qq}^{(3)}]_{1313}\\ &-\frac{(g_{\mathcal{W}}^{l})^{2}}{4M_{\mathcal{W}}^{2}}[Q_{ll}]_{\ell\ell^{\prime}\ell^{\prime}\ell}+\frac{(g_{\mathcal{W}}^{l})^{2}}{8M_{\mathcal{W}}^{2}}[Q_{ll}]_{\ell\ell\ell^{\prime}\ell^{\prime}}\end{split}\] (B.26)
* \(Q_{\phi q}^{(3)}\)
* UV Lagrangian: \[\mathcal{L}\supset\frac{1}{2}(g_{\mathcal{W}}^{q})_{13}\ \mathcal{W}^{\mu a}\bar{q}_{1}\sigma^{a}\gamma_{\mu}q_{3}+\left\{\frac{1}{2}(g_{\mathcal{W}}^{\phi})\ \mathcal{W}^{\mu a}\phi^{\dagger}\sigma^{a}iD_{\mu}\phi+\text{h.c.}\right\}\] (B.27)
* Matching to SMEFT: \[\begin{split}\mathcal{L}_{\text{SMEFT}}\supset& -\frac{\text{Re}(g_{\mathcal{W}}^{\phi})(g_{\mathcal{W}}^{q})_{13}}{4M_{\mathcal{W}}^{2}}[Q_{\phi q}^{(3)}]_{13}-\frac{(g_{\mathcal{W}}^{q})_{13}^{2}}{8M_{\mathcal{W}}^{2}}[Q_{qq}^{(3)}]_{1313}\\ &-\left(\frac{\hat{y}_{ji}^{u*}|g_{\mathcal{W}}^{\phi}|^{2}}{4M_{\mathcal{W}}^{2}}-\frac{i\hat{y}_{ji}^{u*}\text{Im}\left((g_{\mathcal{W}}^{\phi})^{2}\right)}{8M_{\mathcal{W}}^{2}}\right)[Q_{u\phi}]_{ij}\\ &-\left(\frac{\hat{y}_{ji}^{d*}|g_{\mathcal{W}}^{\phi}|^{2}}{4M_{\mathcal{W}}^{2}}+\frac{i\hat{y}_{ji}^{d*}\text{Im}\left((g_{\mathcal{W}}^{\phi})^{2}\right)}{8M_{\mathcal{W}}^{2}}\right)[Q_{d\phi}]_{ij}\\ &-\left(\frac{\hat{y}_{ji}^{e*}|g_{\mathcal{W}}^{\phi}|^{2}}{4M_{\mathcal{W}}^{2}}+\frac{i\hat{y}_{ji}^{e*}\text{Im}\left((g_{\mathcal{W}}^{\phi})^{2}\right)}{8M_{\mathcal{W}}^{2}}\right)[Q_{e\phi}]_{ij}\\ &-\frac{i\hat{y}_{i3}^{u*}\text{Im}(g_{\mathcal{W}}^{\phi})(g_{\mathcal{W}}^{q})_{13}}{4M_{\mathcal{W}}^{2}}[Q_{u\phi}]_{1i}+\frac{i\hat{y}_{i3}^{d*}\text{Im}(g_{\mathcal{W}}^{\phi})(g_{\mathcal{W}}^{q})_{13}}{4M_{\mathcal{W}}^{2}}[Q_{d\phi}]_{1i}\\ &-\left(\frac{\text{Re}((g_{\mathcal{W}}^{\phi})^{2})}{4M_{\mathcal{W}}^{2}}-\frac{|g_{\mathcal{W}}^{\phi}|^{2}}{4M_{\mathcal{W}}^{2}}\right)Q_{\phi D}-\left(\frac{\text{Re}((g_{\mathcal{W}}^{\phi})^{2})}{8M_{\mathcal{W}}^{2}}+\frac{|g_{\mathcal{W}}^{\phi}|^{2}}{4M_{\mathcal{W}}^{2}}\right)Q_{\phi\square}\\ &-\frac{\hat{\lambda}_{\phi}|g_{\mathcal{W}}^{\phi}|^{2}}{M_{\mathcal{W}}^{2}}Q_{\phi}+\frac{\hat{\mu}_{\phi}^{2}|g_{\mathcal{W}}^{\phi}|^{2}}{2M_{\mathcal{W}}^{2}}Q_{\phi 4}\end{split}\] (B.28) with indices \(i,j\) running over all three generations, \(\hat{\lambda}_{\phi}=\lambda_{\phi}-C_{\phi 4}\), \(\lambda_{\phi}\) and \(\mu_{\phi}\) being the parameters of the Higgs potential and \(C_{\phi 4}\) being the coefficient of the operator \(Q_{\phi 4}\) given in the last term.
\(\mathcal{U}_{2}\sim(\mathbf{3},\mathbf{1},\frac{2}{3})_{V}\)
* \(Q_{lq}^{(3)}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(g^{lq}_{\mathcal{U}_{2}})_{1}\ \mathcal{U}^{\mu\dagger}_{2\ell}\bar{l}_{\ell}\gamma_{\mu}q_{1}+(g^{lq}_{\mathcal{U}_{2}})_{3}\ \mathcal{U}^{\mu\dagger}_{2\ell}\bar{l}_{\ell}\gamma_{\mu}q_{3}+\text{h.c.}\] (B.29) * Matching to SMEFT: \[\mathcal{L}_{\text{SMEFT}}\supset-\frac{(g^{lq}_{\mathcal{U}_{2}})^{*}_{1}(g^{lq}_{\mathcal{U}_{2}})_{3}}{2M^{2}_{\mathcal{U}_{2}}}\left([Q^{(1)}_{lq}]_{\ell\ell 13}+[Q^{(3)}_{lq}]_{\ell\ell 13}\right)+(\ell\ell 11+\ell\ell 33)\] (B.30)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(g^{lq}_{\mathcal{U}_{2}})_{1}\ \mathcal{U}^{\mu\dagger}_{2\ell}\bar{l}_{\ell}\gamma_{\mu}q_{1}+(g^{ed}_{\mathcal{U}_{2}})_{3}\ \mathcal{U}^{\mu\dagger}_{2\ell}\bar{e}_{\ell}\gamma_{\mu}d_{3}+\text{h.c.}\] (B.31)
* Matching to SMEFT: \[\mathcal{L}_{\text{SMEFT}}\supset \frac{2(g^{lq}_{\mathcal{U}_{2}})_{1}(g^{ed}_{\mathcal{U}_{2}})_{3}^{*}}{M^{2}_{\mathcal{U}_{2}}}[Q_{ledq}]_{\ell\ell 31}-\frac{|(g^{lq}_{\mathcal{U}_{2}})_{1}|^{2}}{2M^{2}_{\mathcal{U}_{2}}}\left([Q^{(1)}_{lq}]_{\ell\ell 11}+[Q^{(3)}_{lq}]_{\ell\ell 11}\right)\] (B.32) \[-\frac{|(g^{ed}_{\mathcal{U}_{2}})_{3}|^{2}}{M^{2}_{\mathcal{U}_{2}}}[Q_{ed}]_{\ell\ell 33}\]

\(\mathcal{Q}_{5}\sim(\mathbf{3},\mathbf{2},-\frac{5}{6})_{V}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset(g^{eq}_{\mathcal{Q}_{5}})_{1}\ \mathcal{Q}^{\mu\dagger}_{5\ell}\bar{e}^{c}_{\ell}\gamma_{\mu}q_{1}+(g^{dl}_{\mathcal{Q}_{5}})_{3}\ \mathcal{Q}^{\mu\dagger}_{5\ell}\bar{d}_{3}^{c}\gamma_{\mu}l_{\ell}+\text{h.c.}\] (B.33)
* Matching to SMEFT: \[\mathcal{L}_{\text{SMEFT}}\supset-\frac{2(g^{eq}_{\mathcal{Q}_{5}})_{1}(g^{dl}_{\mathcal{Q}_{5}})_{3}^{*}}{M^{2}_{\mathcal{Q}_{5}}}[Q_{ledq}]_{\ell\ell 31}+\frac{|(g^{eq}_{\mathcal{Q}_{5}})_{1}|^{2}}{M^{2}_{\mathcal{Q}_{5}}}[Q_{qe}]_{11\ell\ell}\] (B.34)

\(\mathcal{X}\sim(\mathbf{3},\mathbf{3},\frac{2}{3})_{V}\)
* UV Lagrangian: \[-\mathcal{L}^{\leq 4}\supset\frac{1}{2}(g_{\mathcal{X}})_{1}\ \mathcal{X}^{\mu a\dagger}_{\ell}\bar{l}_{\ell}\gamma_{\mu}\sigma^{a}q_{1}+\frac{1}{2}(g_{\mathcal{X}})_{3}\ \mathcal{X}^{\mu a\dagger}_{\ell}\bar{l}_{\ell}\gamma_{\mu}\sigma^{a}q_{3}+\text{h.c.}\] (B.35)
* Matching to SMEFT: \[\mathcal{L}_{\text{SMEFT}}\supset-\frac{(g_{\mathcal{X}})^{*}_{1}(g_{\mathcal{X}})_{3}}{8M^{2}_{\mathcal{X}}}\left(3[Q^{(1)}_{lq}]_{\ell\ell 13}-[Q^{(3)}_{lq}]_{\ell\ell 13}\right)+(\ell\ell 11+\ell\ell 33)\] (B.36) |
2306.06732 | Effect of the growth orientation on the physical properties of
Sr$_2$CoNbO$_6$ thin films | We study the effect of the growth orientation on the structural, electronic,
and hence transport properties of Sr$_2$CoNbO$_6$ thin films grown on the
orthorhombic NGO(100) and cubic MgO(100) substrates. The x-ray diffraction
patterns show the growth of the thin film along $a$-axis resulting in the
asymmetric ($b\neq c$) in-plane compressive strain in case of NGO(100), whereas
along $c$-axis with tensile strain in case of MgO(100) substrate. The
temperature dependent resistivity measurements indicate the lower electronic
conductivity for the film grown on the NGO(100) substrate, which is found to be
correlated with the higher degree of the oxygen deficiencies and hence larger
concentration of the insulating Co$^{2+}$ in this sample. Further, the x-ray
photoemission spectroscopy measurements show that Sr and Nb are present in the
2+ and 4+ valence state, whereas Co exist in the 2+, 3+ as well as 4+ states,
fraction of which was found to vary with the growth orientation. Moreover, the
analysis of leakage current using the sum exponent model indicate the presence
of two different relaxation mechanisms in these samples. | Ajay Kumar, Ramcharan Meena, M. Miryala, K. Ueno, Rajendra S. Dhaka | 2023-06-11T17:46:16Z | http://arxiv.org/abs/2306.06732v1 | Effect of the growth orientation on the physical properties of Sr\({}_{2}\)CoNbO\({}_{6}\) thin films
###### Abstract
We study the effect of the growth orientation on the structural, electronic, and hence transport properties of Sr\({}_{2}\)CoNbO\({}_{6}\) thin films grown on the orthorhombic NGO(100) and cubic MgO(100) substrates. The x-ray diffraction patterns show the growth of the thin film along the \(a\)-axis, resulting in asymmetric (\(b\neq c\)) in-plane compressive strain in case of NGO(100), whereas along the \(c\)-axis with tensile strain in case of the MgO(100) substrate. The temperature dependent resistivity measurements indicate the lower electronic conductivity for the film grown on the NGO(100) substrate, which is found to be correlated with the higher degree of oxygen deficiency and hence the larger concentration of the insulating Co\({}^{2+}\) in this sample. Further, the x-ray photoemission spectroscopy measurements show that Sr and Nb are present in the 2+ and 4+ valence states, whereas Co exists in the 2+, 3+ as well as 4+ states, the fractions of which were found to vary with the growth orientation. Moreover, the analysis of the leakage current using the sum exponent model indicates the presence of two different relaxation mechanisms in these samples.
## I Introduction
A stable crystal structure, and hence the possibility to accommodate a wide range of elements in the perovskite structure (ABO\({}_{3}\); A: rare earth/alkali earth metals, B: transition metals), gives rise to exotic physical properties such as giant magnetoresistance, spin frustration, multiferroicity, etc. [1; 2; 3; 4; 5], resulting in important technological applications in resistive switching devices, the magnetocaloric effect, solid oxide fuel cells, photovoltaics, etc. [6; 7; 8; 9]. Several external perturbations like temperature, mechanical pressure, and chemical pressure (doping) have been extensively used to systematically tune these properties [10; 11; 12]; however, a fine control of the oxygen stoichiometry, which governs most of their physical properties, is still a major challenge in the family of complex oxides [13; 14]. In this regard, substrate induced strain in epitaxial thin films has become a novel tool to engineer the oxygen concentration, and the resulting electronic structure and magnetic properties of these samples [15; 16; 17; 18; 19]. Also, single crystalline oxide substrates with different lattice parameters give the flexibility to tailor the oxygen content for the desired physical properties [20; 21]. Moreover, the double perovskite oxides with the general formula A\({}_{2}\)BB\({}^{\prime}\)O\({}_{6}\) have further attracted the research community, where the degree of rock-salt-like ordering in the BO\({}_{6}\) and B\({}^{\prime}\)O\({}_{6}\) octahedra has been widely used to engineer their physical properties [22; 23; 24]. The recent neutron powder diffraction and x-ray absorption spectroscopy (XAS) measurements on the La substituted Sr\({}_{2-x}\)La\({}_{x}\)CoNbO\({}_{6}\) samples demonstrate that the degree of octahedral distortion plays a key role in controlling the magnetic and electronic properties of these samples [25; 26]. Interestingly, the substrate induced strain in thin film samples can significantly rotate/tilt the (B/B\({}^{\prime}\))O\({}_{6}\) octahedra by manipulating the B/B\({}^{\prime}\)-O bond lengths and/or the B/B\({}^{\prime}\)-O-B/B\({}^{\prime}\) bond angles [16; 27; 28]. For example, Kleibeuker \(et~al.\) proposed an interesting approach of growing ordered thin films from disordered bulk double perovskite target materials, where the formation of B-site cages of two different volumes has been observed, resulting from the tilting of the two adjacent (B/B\({}^{\prime}\))O\({}_{6}\) octahedra towards the in-plane and out-of-plane directions of the (111) oriented substrate, respectively [29].
More importantly, the epitaxial thin films of Co-based perovskite oxides are of particular interest due to the various possible valence and spin states of Co, which can be easily altered by changing the crystal field splitting with the help of the misfit induced strain in the epitaxial thin films [16; 30; 31; 32]. Chen \(et~al.\) reported that a mechanical pressure of 40 GPa on the SrCo\({}_{0.5}\)Ru\({}_{0.5}\)O\({}_{3-\delta}\) sample can completely transform the Co\({}^{3+}\) ions from the high spin (HS) to the low spin (LS) state due to the reduction in the Co-O bond distance [10]. This suggests that the substrate induced strain can be used as an alternative tool to manipulate the electronic structure of such compounds in epitaxial thin films. For example, even a small degree of compressive strain in La\({}_{2}\)CoMnO\({}_{6}\) thin films (grown on LSAT and LaAlO\({}_{3}\) substrates) favors the in-plane magnetic anisotropy, whereas a tensile strain (on SrTiO\({}_{3}\) substrate) results in the out-of-plane magnetic anisotropy due to the difference in the cell parameters in the two cases [30]. In this line, Sr\({}_{2}\)CoNbO\({}_{6}\), owing to the moderate charge and ionic radius difference between its two B-site atoms (Co\({}^{3+}\) and Nb\({}^{5+}\)), which is crucial for achieving the B-site ordering [33; 34], exhibits intriguing physical properties like a colossal dielectric response and complex ac impedance spectroscopy [35; 36; 37]. Recently, we have extensively studied the magnetic, transport, and electronic properties of La substituted Sr\({}_{2-x}\)La\({}_{x}\)CoNbO\({}_{6}\) bulk samples, which show the low
temperature cluster-glass-like behavior for Sr\({}_{2}\)CoNbO\({}_{6}\) and an insulating/semiconducting nature with an electronic activation energy of 0.27 eV [38; 39]. Also, Wang \(et~al.\) reported colossal dielectric properties in Sr\({}_{2}\)CoNbO\({}_{6}\) and their strong correlation with the conductivity of the sample [36]. Theoretical calculations by He \(et~al.\) identify Sr\({}_{2}\)CoNbO\({}_{6}\) as an indirect band gap semiconductor with a band gap of 2.926 eV [40]. In order to tune the electronic band structure in a controlled manner, epitaxial thin films of Sr\({}_{2}\)CoNbO\({}_{6}\) were grown on different substrates with varying degree as well as direction of the substrate induced strain, where compressive and tensile strains were found to decrease and increase the electronic band gap, respectively [20]. The analysis suggests that the cumulative effect of the substrate induced strain, the oxygen non-stoichiometry, and the degree of covalency in the bonding governs the underlying transport mechanism in the compound [20]. Thus, a precise understanding of its electronic structure is necessary for device fabrication. However, the effect of the growth orientation of the films on the electronic structure has not been studied, which can be useful to disentangle the contributions of various factors to the electronic structure of Sr\({}_{2}\)CoNbO\({}_{6}\). Moreover, an estimation of the steady-state leakage current is also crucial for its possible use in energy storage devices, dynamic random access memory (DRAM), transistors, etc. [41; 42; 43].
Therefore, we investigate epitaxial thin films of Sr\({}_{2}\)CoNbO\({}_{6}\) on the orthorhombic NGO(100) substrate with asymmetric in-plane compressive strain and on the cubic MgO(100) substrate with symmetric tensile strain. The out-of-plane x-ray diffraction (XRD) measurements reveal the growth of the thin films along the \(a\)- and \(c\)-axes on the NGO(100) and MgO(100) substrates, respectively. A periodic pattern in the surface topography is observed, with an average roughness of around 4 nm for both films, using atomic force microscopy. The temperature dependent resistivity shows the lower electronic conductivity in case of the film grown on the NGO(100) substrate, which is supported by the higher activation energy and lower effective density of states near the Fermi level as compared to that on the MgO(100) substrate. The x-ray photoemission measurements clearly show the higher oxygen deficiency and the resulting larger fraction of Co in the 2+ valence state in case of the film on the NGO(100) substrate, causing its lower electronic conductivity. Further, the possible mechanisms governing the leakage current in the film on the NGO(100) substrate, probed using a dc step voltage, are discussed to understand its possible use in charge storage devices.
## II Experimental
The details about the sample preparation and the physical properties of the bulk Sr\({}_{2}\)CoNbO\({}_{6}\) target sample can be found in Refs. [38; 39; 26]. The thin films of 70\(\pm\)5 nm were grown from the bulk target material using the pulsed laser deposition (PLD) technique [32] on NdGaO\({}_{3}\)(NGO)(100) and MgO(100) substrates. In order to optimize the thickness of the films, we first deposited a film on Si(100) by partially shadowing the substrate to make a sharp step and estimated the film thickness using a stylus profiler across the step. We then used the same deposition parameters for both films, which were grown at 800\({}^{\circ}\)C substrate temperature and 10\({}^{-3}\) mbar oxygen partial pressure using 1.5-2 J cm\({}^{-2}\) laser fluence and a 5 Hz repetition rate. A post-growth annealing for 15 minutes was performed at 50 mbar oxygen pressure for both samples at the deposition temperature.
We carried out the XRD measurements in the Bragg-Brentano geometry using a PANalytical X'Pert\({}^{3}\) diffractometer. The atomic force microscope (AFM) was used in the non-contact mode to study the surface topography of the films. The low temperature resistivity measurements were performed using a physical property measurement system (PPMS) from Quantum Design, USA, with the four probe contact method using 0.1 \(\mu\)A excitation current. The I-V and leakage current measurements were performed in a somewhat unconventional two-probe configuration on top of the films using the 2612B source meter and 6517B electrometer from Keithley. The x-ray photoemission spectroscopy (XPS) measurements were done with the monochromatic Al-K\(\alpha\) (h\(\nu\) = 1486.6 eV) x-ray source. Due to the insulating nature of both samples, a charge neutralizer was used during the measurements. Each spectrum is calibrated _w.r.t._ the offset of the C 1\(s\) peak from 284.6 eV and then fitted with a mixed Lorentzian and Gaussian peak shape after subtracting the inelastic Tougaard background. We use a Renishaw inVia confocal microscope with a 514.5 nm laser source and 1 mW laser power to perform unpolarized Raman spectroscopy measurements.
## III Results and Discussion
The orthorhombic crystal structure of the NGO substrate, with all three lattice axes different (\(a_{\text{NGO}}\)=5.433 Å, \(b_{\text{NGO}}\)=5.503 Å, \(c_{\text{NGO}}\)=7.716 Å), results in several possible growth orientations with varying degree of asymmetric in-plane lattice mismatch on the different surface planes [44; 45]. In the present case, Sr\({}_{2}\)CoNbO\({}_{6}\) with the tetragonal structure (\(a_{\text{S2CN}}\)=\(b_{\text{S2CN}}\)=5.602 Å and \(c_{\text{S2CN}}\)=7.921 Å [38]) can be epitaxially grown on the NGO(100) substrate such that the \(b\)- and \(c\)-axes lie in-plane of the substrate and the \(a\)-axis along the out-of-plane direction. This produces an asymmetric in-plane lattice misfit [(\(b_{\text{NGO}}\)-\(b_{\text{S2CN}}\))/\(b_{\text{S2CN}}\)] of -1.77% and -2.59% along the \(b\)- and \(c\)-axes, respectively. The presence of asymmetric in-plane strain in the Sr\({}_{2}\)CoNbO\({}_{6}\) film grown on the NGO(100) substrate is schematically illustrated in Fig. 1(a). The longer out-of-plane lattice parameter of Sr\({}_{2}\)CoNbO\({}_{6}\) as compared to the NGO(100) substrate results in its corresponding peak at a lower 2\(\theta\) value in the XRD measurements, as shown in Fig. 1(b). The calculated out-of
plane (\(a\)-axis) lattice parameter of Sr\({}_{2}\)CoNbO\({}_{6}\) from the XRD data is found to be 5.656 Å, which is 0.96% longer than the bulk lattice parameter (5.602 Å), indicating a significant effect of the in-plane compressive strain on the lattice structure of Sr\({}_{2}\)CoNbO\({}_{6}\); this results in a change in the orbital hybridization, and consequently in the electronic and transport properties of the thin films, discussed later. Also, oxygen deficiencies in perovskite thin films are widely known to expand the unit cell [46; 47], and hence a cumulative effect of both the substrate induced strain and the oxygen vacancies (discussed below) is expected to govern the lattice parameter in the present case. The peak marked by the asterisk symbol in Fig. 1(b) results from a crystal imperfection in the substrate, as evident from the XRD pattern of the bare NGO(100) substrate.
Further, the epitaxial growth of Sr\({}_{2}\)CoNbO\({}_{6}\) on the cubic MgO(100) substrate (\(a_{\text{MgO}}\)=4.216 Å) is illustrated in Fig. 1(c), where the \(a\)- and \(b\)-axes lie in-plane of the substrate and the \(c\)-axis along the out-of-plane direction. The in-plane lattice parameter of Sr\({}_{2}\)CoNbO\({}_{6}\) in the pseudocubic representation is \(a/\sqrt{2}\)=3.961 Å, which results in a large lattice misfit of 6.44% with the MgO(100) substrate, producing tensile strain. This relatively large in-plane lattice misfit may also lead to a gradual relaxation in the film as we move away from the interface, and reciprocal space mapping (RSM) measurements can be further helpful to directly probe the strain state in these films [48]. The presence of the film peak at a higher 2\(\theta\) value in the XRD pattern as compared to the substrate, as shown in Fig. 1(d), indicates the smaller out-of-plane pseudocubic lattice parameter (in the present case, \(c_{\text{pc}}=c/2\), where the subscript pc represents pseudocubic). It is important to note here that the out-of-plane lattice parameter of the Sr\({}_{2}\)CoNbO\({}_{6}\) film grown on the MgO(100) substrate is found to be 4.006 Å, which is 3.8% larger than that of the bulk sample, in spite of the tensile in-plane strain in the film. This indicates the dominant role of oxygen vacancies in the anisotropic expansion of the unit cell of the film grown on the MgO(100) substrate [47]. Further, the AFM images of both samples are recorded to study their surface topography, as shown in Figs. 1(e) and (f) for the films grown on NGO(100) and MgO(100) substrates, respectively. Periodic patterns are observed on the surface of both samples in (1x1) \(\mu\)m scan areas. However, these features are more clearly visible in case of the
Figure 1: (a) Schematic illustration of the growth orientation of the Sr\({}_{2}\)CoNbO\({}_{6}\) (S2CN) thin films deposited on NGO(100) and (c) MgO(100) substrates. (b) Room temperature \(\theta\)-2\(\theta\) x-ray diffraction pattern of Sr\({}_{2}\)CoNbO\({}_{6}\) thin films grown on NGO [around (200) reflection] and (d) MgO [around (002) reflection] substrates. The asterisk symbols indicate the peaks originating from the substrates. (e) (1x1) \(\mu\)m AFM images of the same films grown on NGO(100) and (f) MgO(100) substrates.
film grown on NGO(100), possibly due to the relatively smaller lattice misfit of the film with the substrate as compared to the MgO(100) substrate [20]. The calculated average roughness of the films is 4.3 nm in case of NGO(100) and 3.9 nm in case of the MgO(100) substrate, which is slightly high considering the epitaxial growth. The post-growth annealing of the films at the higher oxygen partial pressure may be the possible reason for this large surface roughness.
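The quoted misfit and strain values follow directly from the bulk lattice parameters given above; the short sketch below, with all inputs taken from the text, reproduces them.

```python
# Lattice misfits of Sr2CoNbO6 (S2CN) on NGO(100) and MgO(100); all lattice
# parameters (in angstroms) are the bulk values quoted in the text.
a_s2cn, c_s2cn = 5.602, 7.921      # tetragonal Sr2CoNbO6 (a = b)
b_ngo, c_ngo = 5.503, 7.716        # orthorhombic NdGaO3
a_mgo = 4.216                      # cubic MgO

def misfit(substrate, film):
    """Percent misfit, (substrate - film) / film * 100."""
    return 100 * (substrate - film) / film

print(f"NGO(100), along b: {misfit(b_ngo, a_s2cn):+.2f} %")   # ~ -1.77 %
print(f"NGO(100), along c: {misfit(c_ngo, c_s2cn):+.2f} %")   # ~ -2.59 %
a_pc = a_s2cn / 2**0.5             # pseudocubic in-plane parameter, ~3.961 A
print(f"MgO(100), in-plane: {misfit(a_mgo, a_pc):+.2f} %")    # ~ +6.4 %
# out-of-plane expansion of the film on NGO(100): 5.656 A vs bulk 5.602 A
print(f"out-of-plane strain on NGO(100): {100*(5.656 - a_s2cn)/a_s2cn:+.2f} %")
```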
It has been well established that the degree and direction of the lattice strain are strongly related to the presence of oxygen non-stoichiometry in epitaxial thin films of oxide materials due to the change in the molar volume [17; 18; 19]. Further, compressive and tensile substrate induced in-plane lattice strains are known to respectively strengthen and weaken the _p-d_ orbital hybridization between the oxygen and transition metal atoms, resulting in the enhancement and reduction of the electronic conductivity in the two cases, respectively [20; 49]. Thus, in order to probe the effect of the growth orientation and asymmetric in-plane strain on the electronic band structure of Sr\({}_{2}\)CoNbO\({}_{6}\), we record the temperature dependent resistivity of both films from 185-380 K, as shown in Fig. 2(a). We observe semiconducting/insulating behavior in the measured temperature range for both films. It is interesting to note that the film grown on the MgO(100) substrate shows the higher electronic conductivity as compared to that on the NGO(100) substrate, despite the compressive in-plane strain in the latter. This suggests that the effect of the growth orientation of the films on their electronic structure is more prominent than the strain induced change in the metal-ligand orbital hybridization.
In order to understand the electronic transport mechanism in the films, the resistivity curves were modeled with the Arrhenius equation, described as
\[\rho(T)=\rho(0)\exp(E_{a}/k_{B}T), \tag{1}\]
where E\({}_{a}\) is the activation energy required for the conduction of the charge carriers and \(\rho(0)\) is the pre-exponential factor. The linear fitting of the ln(\(\rho\)) versus 1/T plots from 284-380 K gives activation energies of 196 meV and 190 meV for the films deposited on the NGO(100) and MgO(100) substrates, as shown in Figs. 2(b) and (c), respectively. This indicates the higher conductivity of the film grown on the MgO(100) substrate, as also evident from Fig. 2(a). However, the conduction behavior of the films deviates from the Arrhenius model at low temperatures, as shown by the green dashed lines in Figs. 2(b) and (c). We find that the conduction mechanism of the films in the low temperature region follows the variable range hopping (VRH) model of the localized charge carriers, given as
\[\rho(T)=\rho(0)\exp\left[(T_{0}/T)^{1/4}\right], \tag{2}\]
where T\({}_{0}\) is the characteristic temperature defined as T\({}_{0}=18\)/[k\({}_{B}\)N(E\({}_{F}\))\(L^{3}\)], where N(E\({}_{F}\)) is the localized density of states (DOS) near the Fermi level and \(L\) is the localization length of the charge carriers. The slope of the ln(\(\rho\)) versus T\({}^{-1/4}\) curves in the low temperature region gives the value of T\({}_{0}\), as shown by the black solid lines in Figs. 2(d) and (e) for the films grown on the NGO(100) and MgO(100) substrates, respectively. The estimated values of the effective DOS near the Fermi level are 17.9\(\times\)10\({}^{19}\) and 18.9\(\times\)10\({}^{19}\) eV\({}^{-1}\)cm\({}^{-3}\) for the films on the NGO(100) and MgO(100) substrates, respectively, where we assume the Co-O bond length \(\approx\)2 Å as the localization length of the charge carriers. The lower DOS value near the Fermi level and the higher activation energy required by the charge carriers to take part in the conduction in case of the film deposited on the NGO(100) substrate result in its lower electronic conductivity as compared to the film deposited on the MgO(100) substrate. A similar reduction in the electronic conductivity of anisotropically strained La\({}_{0.67}\)Ca\({}_{0.33}\)MnO\({}_{3}\) films grown on
Figure 2: (a) The temperature dependent resistivity of Sr\({}_{2}\)CoNbO\({}_{6}\) films grown on NGO(100) and MgO(100) substrates. (b, c) The Arrhenius fitting of the high temperature resistivity data, and (d, e) the VRH fitting in the low temperature regime, for the films grown on NGO(100) and MgO(100) substrates, respectively. Here, the green dashed lines represent the deviation from the Arrhenius and VRH models in the lower and higher temperature regimes, respectively.
NGO(100) and NGO(001) substrates has been attributed to the rotation/tilting and distortion of the MnO\({}_{6}\) octahedra, as compared to the unstrained (bulk-like) film on the NGO(110) substrate [45]. A higher degree of distortion of the (Co/Nb)O\({}_{6}\) octahedra in case of the NGO(100) substrate, due to the anisotropic in-plane strain, can also be a possible reason for the observed reduction in the electronic conduction in the film grown on the NGO(100) substrate.
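The Arrhenius and VRH analyses described above can be summarized by the short sketch below, which extracts \(E_{a}\) from the slope of ln(\(\rho\)) versus 1/T and N(E\({}_{F}\)) from the VRH slope. The resistivity data here are synthetic, generated with parameters of the same order as the fitted values quoted in the text; only the fitting procedure is illustrated.

```python
# Arrhenius and VRH analyses on synthetic resistivity data; the generating
# parameters (E_a = 196 meV, T0 ~ 1.5e8 K, L = 2 A) are of the same order as
# the values quoted in the text, not the measured data themselves.
import numpy as np

kB = 8.617e-5                               # Boltzmann constant (eV/K)
T = np.linspace(185, 380, 60)               # measurement window (K)
rho = 1e-2 * np.exp(0.196 / (kB * T))       # Arrhenius-like toy data

slope, _ = np.polyfit(1 / T, np.log(rho), 1)     # slope = E_a / kB
print(f"E_a = {slope * kB * 1e3:.0f} meV")

T_low = T[T < 280]                          # low-temperature VRH regime
rho_low = 1e-2 * np.exp((1.5e8 / T_low) ** 0.25)  # toy VRH data
s, _ = np.polyfit(T_low ** -0.25, np.log(rho_low), 1)
T0 = s ** 4                                 # characteristic temperature (K)
L = 2e-8                                    # localization length, 2 A in cm
print(f"N(E_F) = {18 / (kB * T0 * L**3):.2e} eV^-1 cm^-3")  # ~1.7e20
```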
In order to further quantify this effect, the temperature dependent current (I)-voltage (V) measurements are performed and the data are shown in Figs. 3(a, b) for the films grown on NGO(100) and MgO(100) substrates, respectively. The I-V curves show linear behavior, particularly in the high voltage region, and the slope of the linear fit [shown by the solid lines in Figs. 3(a, b)] is used to calculate the conductance and hence the resistivity of the samples as a function of temperature, as presented in Fig. 3(c). The observed higher resistivity of the film grown on the NGO(100) substrate as compared to that on MgO(100) in the I-V data is consistent with the above \(\rho\)-T data, measured at a fixed excitation current.
To understand this observed change in the conduction mechanism, the x-ray photoelectron spectroscopy (XPS) measurements are performed on these samples [50; 51]. Figs. 4(a, b) show the XPS survey spectra of the Sr\({}_{2}\)CoNbO\({}_{6}\) films grown on NGO(100) and MgO(100) substrates, respectively, where all the prominent peaks in the spectra are assigned to the binding energies (BEs) of the constituent elements of the target material, ruling out the possibility of any elemental impurity in both samples. Figs. 4(c, d) represent the Sr 3\(d\) core level spectra of these samples, which show a separation of 1.7 eV between the 3\(d_{3/2}\) and 3\(d_{5/2}\) components resulting from the spin-orbit coupling. The peak positions of these components (Sr 3\(d_{5/2}\) at 132.5 eV and Sr 3\(d_{3/2}\) at 134.3 eV) indicate the presence of Sr in the 2+ valence state in both samples [50; 52]. Further, the recorded Nb 3\(d\) core-level spectra are shown in Figs. 4(e) and (f) for the films grown on NGO(100) and MgO(100) substrates, respectively. The comparison of the observed peak positions of the spin-orbit split components (Nb 3\(d_{5/2}\) at 206.1 eV and Nb 3\(d_{3/2}\) at 208.9 eV) with the values reported in the literature indicates the presence of Nb predominantly in the 4+ valence state in both samples [53; 54]. The presence of Nb in the tetravalent state indicates a strong possibility of oxygen deficiency in the thin film samples as compared to the bulk Sr\({}_{2}\)CoNbO\({}_{6}\), where our recent XAS measurements indicate the presence of Nb purely in the pentavalent state [26]. These oxygen deficiencies are expected to play a key role in governing the underlying transport properties of these samples, as discussed below.
Furthermore, the Co 2\(p\) core level XPS spectra are measured to understand the effect of the substrate induced strain and possible oxygen non-stoichiometry on the chemical environment of Co, as shown in Figs. 4(g) and (h) for the films grown on NGO(100) and MgO(100) substrates, respectively. We observe a clear splitting in the main peaks for both the 2\(p_{1/2}\) and 2\(p_{3/2}\) components, indicating the presence of two different valence states of Co in both samples. Further, the deconvolution of the spectra reveals the presence of an additional component, as evident from the asymmetry of the main peaks towards the higher binding energy in Figs. 4(g) and (h). The positions of these three components are presented in Table 1, which agree well with the reported values for Co\({}^{2+}\), Co\({}^{3+}\), and Co\({}^{4+}\) [55; 57; 58]. This deviation in the valence states of Co in the thin films (from the 3+ state in the bulk Sr\({}_{2}\)CoNbO\({}_{6}\) [26]) further indicates the possibility of strain induced oxygen non-stoichiometry in both samples. Interestingly, the ratio of Co\({}^{2+}\) to Co\({}^{4+}\) is higher (0.61) in case of the film grown on the NGO(100) substrate as compared to that (0.47) on MgO(100). An asymmetric in-plane compressive strain in case of the NGO(100) substrate results in the elongation of the out-of-plane lattice parameter of Sr\({}_{2}\)CoNbO\({}_{6}\), which possibly stabilizes Co\({}^{2+}\) in the system due to its larger ionic radius as compared to Co\({}^{3+}\) [59]. This small change in the fraction of Co\({}^{2+}\) with respect
Figure 3: The I-V data of the films grown on (a) NGO(100) and (b) MgO(100) substrates at the selected temperatures. (c) The temperature dependent resistivity curves extracted from the slope of I-V curves in the higher voltage regime, as shown by the solid black lines in (a) and (b).
to Co\({}^{4+}\) with the growth orientation may also result from the fitting procedure, due to the over-parameterization arising from the several components present in these samples. However, the oxygen \(1s\) core-level XPS spectra discussed below clearly validate this point, showing the higher oxygen deficiency in case of the film grown on the NGO(100) substrate as compared to the MgO(100) substrate. Here, it is important to note that Co\({}^{2+}\) is insulating in nature due to the lowest possible oxidation state of Co, which significantly suppresses the conduction channels in the sample [38]. This results in the lower electronic conductivity of the film grown on the NGO(100) substrate as compared to that on the MgO(100) substrate, as evident from the \(\rho\)-T measurements presented in Fig. 2(a). Further, we observe two broad satellite features around 784.8 eV and 788.9 eV, which can be assigned to the Co\({}^{2+}\) and Co\({}^{3+}\) states, respectively [55]. Here, it is well known that Co\({}^{2+}\) shows a much stronger satellite feature as compared to Co\({}^{3+}\); however, the significantly larger fraction of Co\({}^{3+}\) as compared to Co\({}^{2+}\) in the present case results in the comparable strength of the satellite features for both states.
Moreover, we measured the O \(1s\) core-level XPS spectra to estimate the possible oxygen defects in these samples, as presented in Figs. 4(i, j). We find three well-resolved components for both samples; however, the two features at the higher BE are more clearly distinguishable in the case of the film grown on the MgO(100) substrate. Here, the first component around 529.1(1) eV (O\({}_{L}\)) is attributed to the lattice oxygen in the Co/Nb-O octahedra, whereas the third component around 532.6(2) eV (O\({}_{A}\)) results from the chemisorbed oxygen atoms, i.e., surface contamination by organic molecules [56]. Importantly, the central component around 531.0(1) eV (O\({}_{D}\)) results either from the oxygen atoms with a formal charge less than -\(2e\) due to the covalent character of the Co/Nb-O bonds [60; 61] or from the presence of oxygen deficiencies in the samples [62; 63; 64]. The intensity of this central component with respect to the lattice oxygen is much higher in case of the NGO(100) film as compared to the MgO(100) one [see Figs. 4(i, j)], which clearly indicates the higher oxygen deficiency in the former. The integrated area ratio of O\({}_{D}\) and O\({}_{L}\), which can be used as a measure of the oxygen deficiency in the thin film samples [62; 63], is found to be 0.78 in case of the NGO(100) substrate and 0.59 for the film grown on the MgO(100) substrate. This higher oxygen deficiency in the film grown on the NGO(100) substrate results in the larger concentration of Co\({}^{2+}\) ions and hence lower
Figure 4: (a, b) The XPS survey spectra, (c, d) Sr \(3d\) core-level, (e, f) Nb \(3d\) core-level, (g, h) Co \(2p\) core-level, and (i, j) O \(1s\) core-level spectra of the Sr\({}_{2}\)CoNbO\({}_{6}\) thin films grown on NGO(100) and MgO(100) substrates, respectively.
electronic conductivity as compared to the film grown on the MgO(100) substrate, which is also evident from the Co \(2p\) core-level spectra, as discussed above. However, the exact quantification of the oxygen vacancies using high-end techniques such as aberration corrected transmission electron microscopy (TEM) and/or positron annihilation can give much deeper insight into their role in governing the conduction mechanism of these samples [65; 66].
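The intensity ratios quoted above follow directly from the integrated areas of the fitted components listed in Table 1; the sketch below reproduces them.

```python
# Co2+/Co4+ and O_D/O_L area ratios from the XPS fit results of Table 1.
areas = {
    "NGO(100)": {"Co2+": 0.62, "Co4+": 1.01, "O_L": 1.61, "O_D": 1.26},
    "MgO(100)": {"Co2+": 0.56, "Co4+": 1.18, "O_L": 1.58, "O_D": 0.93},
}
for substrate, a in areas.items():
    print(f"{substrate}: Co2+/Co4+ = {a['Co2+'] / a['Co4+']:.2f}, "
          f"O_D/O_L = {a['O_D'] / a['O_L']:.2f}")
# prints 0.61 and 0.78 for NGO(100), 0.47 and 0.59 for MgO(100), as quoted.
```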
Interestingly, the colossal dielectric properties observed in Sr\({}_{2}\)CoNbO\({}_{6}\) [36] indicate its possible use in energy storage devices, where the study of the leakage current is important to assess its practical usefulness [41]. Thus, in order to estimate that, we apply a step voltage of 500 V/cm on the Sr\({}_{2}\)CoNbO\({}_{6}\)/NGO(100) sample for 3 minutes and then record the current response as a function of time (until the current drops to 1 nA), as shown in Figs. 5(a-d) at some representative temperatures. Interestingly, the current persists up to several minutes at the lower temperatures and decays more rapidly with increase in the temperature. This non-linear response of the current is fitted with the sum exponent model defined as [43]
\[J(t)=\sum_{i=1}^{n}J_{mi}e^{-t/\tau_{i}}+J_{0}, \tag{3}\]
where \(J_{mi}\), \(\tau_{i}\), and \(J_{0}\) represent the initial current density, relaxation time, and steady-state current density, respectively, and the summation runs over the different relaxation processes. First, we try to fit the \(J-t\) curves using the single exponent model. However, a significant deviation from the experimental data is observed, as shown by the green dashed line in Fig. 5(e) for 100 K. Thus, a two exponent model with different relaxation times is used for the fitting, and a nice agreement between the experimental and fitted curves can be seen, as represented by the solid black line in Fig. 5(e). This indicates the presence of two different relaxation mechanisms in the sample. Therefore, all the \(J\)-t curves at different temperatures
Figure 5: (a–d) The time dependence of the current density after applying a step voltage of 500 V/cm for 3 minutes on the Sr\({}_{2}\)CoNbO\({}_{6}\)/NGO(100) sample at the selected temperatures. (e) Fitting of the \(J-t\) curve at 100 K using single (green dashed curve) and double (solid black line) exponent models (see text for more details). (f) Temperature evolution of the relaxation times for both the relaxation processes.
are fitted using the two exponent model, and the temperature evolution of the two relaxation times is presented in Fig. 5(f). It is interesting to note that the relaxation time \(\tau_{1}\) is 5-6 times higher than \(\tau_{2}\), which indicates a significantly different origin of the two processes. A similar behavior of the \(J-t\) curves with two relaxation processes has also been observed in the polycrystalline films of Pb(Zr\({}_{0.48}\)Ti\({}_{0.52}\))O\({}_{3}\), which are speculated to originate from the bulk and the grain boundaries/sample-electrode interface [43; 67]. The effect of grain boundaries is expected to be negligible in the single crystalline film, and the two relaxation processes most likely originate from the sample and the interfacial polarization in the present case. However, further investigations of the defects, twinning in the film, oxygen vacancies, carrier hopping, trap filling, etc., can shed light on the nature of these relaxation processes [68; 69; 70; 71]. Moreover, both relaxation times decrease with increase in the temperature up to around 200 K and then attain a very small value (\(<1\) min) at the higher temperatures. This is due to the enhancement in the electronic conductivity and hence the availability of more charge carriers at the higher temperatures. Thus, the large dielectric constant observed in Sr\({}_{2}\)CoNbO\({}_{6}\) near room temperature [36] is possibly accompanied by the higher electronic conductivity in the sample, which limits its practical use in charge storage applications, and hence further engineering of the electronic band gap is required for its practical use.
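For completeness, the two-exponent fit of Eq. (3) can be carried out with a standard non-linear least-squares routine, as sketched below on a synthetic \(J-t\) transient; the amplitudes, relaxation times, and noise level are placeholders, not the measured values.

```python
# Two-exponent fit of Eq. (3) on a synthetic leakage-current transient.
import numpy as np
from scipy.optimize import curve_fit

def j_two_exp(t, jm1, tau1, jm2, tau2, j0):
    return jm1 * np.exp(-t / tau1) + jm2 * np.exp(-t / tau2) + j0

t = np.linspace(0, 30, 300)                    # time (min)
j = j_two_exp(t, 5.0, 12.0, 3.0, 2.0, 0.1)     # toy data with tau1/tau2 = 6
j += np.random.default_rng(0).normal(0, 0.02, t.size)

popt, _ = curve_fit(j_two_exp, t, j, p0=[1, 10, 1, 1, 0])
print("tau1 = %.1f min, tau2 = %.1f min, J0 = %.2f"
      % (popt[1], popt[3], popt[4]))
```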
Finally, we show the Raman spectra of the films grown on NGO(100) and MgO(100) substrates in Figs. 6(a, b), respectively. We observe several Raman active modes between 380-770 cm\({}^{-1}\) for both samples, unlike bulk Sr\({}_{2}\)CoNbO\({}_{6}\), which shows only two very weak Raman modes [38]. This suggests a reduction in the crystal symmetry in the thin film samples as compared to the bulk Sr\({}_{2}\)CoNbO\({}_{6}\), possibly due to the distortion of the (Co/Nb)O\({}_{6}\) octahedra resulting from the substrate induced strain. For example, the presence of the three Raman modes between 400-550 cm\({}^{-1}\) represents the oxygen bending modes in the \(P2_{1}/n\) (monoclinic) symmetry, as no Raman active modes are expected at these wavenumbers for the \(I4/m\) (tetragonal) space group of the bulk Sr\({}_{2}\)CoNbO\({}_{6}\) material [20; 38; 72]. Note that the lowering of the crystal symmetry in the double perovskite oxides is usually accompanied by an enhancement in the B-site ordering. Thus, the presence of these Raman active modes suggests the growth of B-site ordered thin films from the almost disordered bulk Sr\({}_{2}\)CoNbO\({}_{6}\) sample [26; 38]. This is consistent, as the substrate induced strain is considered an effective way to grow ordered thin films from disordered target materials [73; 74; 75; 29]. However, no significant change is observed in the Raman spectra of the two samples, which indicates that any change in the degree of the octahedral distortion and/or B-site ordering due to the change in the growth orientation cannot be probed using unpolarized Raman spectroscopy in the present case, and polarization dependent Raman spectra can be more useful [28].
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Substrate & Peak & Position & FWHM & Area \\ (100) & & (eV) & (eV) & \\ \hline \multicolumn{5}{c}{Co \(2p\)} \\ NGO & 2+ & 778.8 & 1.1 & 0.62 \\ & 3+ & 779.8 & 2.4 & 2.36 \\ & 4+ & 781.7 & 2.9 & 1.01 \\ MgO & 2+ & 778.8 & 1.1 & 0.56 \\ & 3+ & 779.8 & 2.4 & 2.40 \\ & 4+ & 781.7 & 2.9 & 1.18 \\ \hline \multicolumn{5}{c}{O \(1s\)} \\ NGO & O\({}_{L}\) & 529.2 & 1.3 & 1.61 \\ & O\({}_{D}\) & 531.0 & 1.9 & 1.26 \\ & O\({}_{A}\) & 532.5 & 2.1 & 0.71 \\ MgO & O\({}_{L}\) & 529.1 & 1.3 & 1.58 \\ & O\({}_{D}\) & 530.9 & 1.8 & 0.93 \\ & O\({}_{A}\) & 532.6 & 2.2 & 0.34 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The fitting parameters of the Co \(2p\) (\(2p_{3/2}\)) and O \(1s\) core-level spectra of Sr\({}_{2}\)CoNbO\({}_{6}\) films deposited on the NGO(100) and MgO(100) substrates.
Figure 6: The room temperature Raman spectra of Sr\({}_{2}\)CoNbO\({}_{6}\) thin films grown on (a) NGO(100) and (b) MgO(100) substrates using the 514.5 nm excitation wavelength.
## IV Conclusions
The oxygen stoichiometry, and hence the resulting electronic and transport properties, of Sr\({}_{2}\)CoNbO\({}_{6}\) have been engineered by changing the growth orientation of the epitaxial thin films using pulsed laser deposition. The films of Sr\({}_{2}\)CoNbO\({}_{6}\) have been grown along the \(a\)- and \(c\)-axes on orthorhombic NGO(100) and cubic MgO(100) substrates, resulting in asymmetric compressive and symmetric tensile in-plane strain, respectively. The film grown on the NGO(100) substrate has the higher degree of oxygen deficiency, which results in the larger fraction of Co in the 2+ valence state and hence drives the system toward the insulating regime as compared to that grown on the MgO(100) substrate. The XPS measurements show the presence of Co in the 2+, 3+ as well as 4+ valence states, resulting from the in-plane compressive (tensile) and hence out-of-plane tensile (compressive) strain in case of the film grown on the NGO(100) [MgO(100)] substrate. Moreover, the divalent and tetravalent states of Sr and Nb, respectively, are found to remain invariant with the growth orientation in the two cases. Interestingly, the investigation of the leakage current indicates that the colossal dielectric properties observed in bulk Sr\({}_{2}\)CoNbO\({}_{6}\) near room temperature originate from the higher electronic conductivity in the sample. Moreover, the Raman spectra evidence a significant reduction in the crystal symmetry in the thin films as compared to the bulk Sr\({}_{2}\)CoNbO\({}_{6}\).
## V Acknowledgment
AK acknowledges the UGC India for fellowship and the physics department of IIT Delhi for providing the XRD, AFM, and PPMS EVERCOOL facilities, and the central research facilities (CRF) of IIT Delhi for providing the Raman spectrometer. AK thanks Rishabh Shukla for his help during the thin film deposition. AK also thanks Ploybussara Gomasang for help in the XPS measurements at the Shibaura Institute of Technology, Japan, which were supported by the Sakura Science Program (aPBL). RM thanks IUAC for providing the experimental facilities. The PLD instrument used for thin film growth is financially supported by IIT Delhi through a seed grant with Reference No. BPHY2368 and by SERB-DST through an early career research (ECR) award with Project Reference No. ECR/2015/000159. RSD also acknowledges SERB-DST for financial support through a core research grant (Project Reference No. CRG/2020/003436).
|
2302.05914 | Variational Voxel Pseudo Image Tracking | Uncertainty estimation is an important task for critical problems, such as
robotics and autonomous driving, because it allows creating statistically
better perception models and signaling the model's certainty in its predictions
to the decision method or a human supervisor. In this paper, we propose a
Variational Neural Network-based version of a Voxel Pseudo Image Tracking
(VPIT) method for 3D Single Object Tracking. The Variational Feature Generation
Network of the proposed Variational VPIT computes features for target and
search regions and the corresponding uncertainties, which are later combined
using an uncertainty-aware cross-correlation module in one of two ways: by
computing similarity between the corresponding uncertainties and adding it to
the regular cross-correlation values, or by penalizing the uncertain feature
channels to increase influence of the certain features. In experiments, we show
that both methods improve tracking performance, while penalization of uncertain
features provides the best uncertainty quality. | Illia Oleksiienko, Paraskevi Nousi, Nikolaos Passalis, Anastasios Tefas, Alexandros Iosifidis | 2023-02-12T13:34:50Z | http://arxiv.org/abs/2302.05914v1 | # Variational Voxel Pseudo Image Tracking
###### Abstract
Uncertainty estimation is an important task for critical problems, such as robotics and autonomous driving, because it allows creating statistically better perception models and signaling the model's certainty in its predictions to the decision method or a human supervisor. In this paper, we propose a Variational Neural Network-based version of a Voxel Pseudo Image Tracking (VPIT) method for 3D Single Object Tracking. The Variational Feature Generation Network of the proposed Variational VPIT computes features for target and search regions and the corresponding uncertainties, which are later combined using an uncertainty-aware cross-correlation module in one of two ways: by computing similarity between the corresponding uncertainties and adding it to the regular cross-correlation values, or by penalizing the uncertain feature channels to increase influence of the certain features. In experiments, we show that both methods improve tracking performance, while penalization of uncertain features provides the best uncertainty quality.
3D Single Object Tracking, Point Cloud, Uncertainty Estimation, Bayesian Neural Networks, Variational Neural Networks
## I Introduction
3D Single Object Tracking (3D SOT) is the task of tracking an object in a 3D scene based on a given initial object position. This task combines challenges from both 3D Object Detection, as objects have to be accurately located in space, and 3D Multiple Object Tracking, as the object of interest has to be distinguished from similar objects. A variety of sensors can be used for 3D SOT, including single or double camera setups, Lidar and Radar. While camera setups are the cheapest option, they capture images which lack the depth information valuable for 3D SOT; this information can be provided by Lidar sensors. Lidars generate point clouds, which are sets of 3D points detected as the positions of light beam reflections in the 3D scene. The explicit depth information makes Lidar the most common choice for many 3D perception methods, including 3D SOT. SOT is performed by predicting the offset of the object's position with respect to its previous known position. This has been approached by using correlation filters [1, 2], deep learning methods that directly predict the object's offset [3], or Siamese methods which search for the position with the highest similarity score [4, 5, 6, 7, 8]. Since 3D perception methods are often used in critical fields, such as robotics or autonomous driving, it is important to provide accurate predictions and confidence estimations to avoid costly damage.
Uncertainty estimation in neural networks allows for using the network's outputs to better indicate the confidence in its predictions and to improve their statistical qualities, leading to better performance. The practical applications of uncertainty estimation are studied for several perception tasks, including 3D Object Detection [9, 10, 11], 3D Object Tracking [12, 13], 3D Human Pose Tracking [14], and Steering Angle Prediction [15]. These methods provide an improvement in perception and control by using an uncertainty estimation process. However, most of these methods adopt single deterministic approaches to estimate different types of uncertainty, or use Monte Carlo Dropout (MCD) [16] as an approach to estimate epistemic uncertainty. According to experiments in [17] on the uncertainty quality of different types of Bayesian Neural Networks (BNNs), MCD achieves the worst uncertainty quality.
In this paper, we introduce a Variational Neural Network (VNN) [18] based version of the fastest 3D SOT method called Voxel Pseudo Image Tracking (VPIT) [8] and propose two ways, i.e., the uncertainty similarity approach and the penalization approach, to utilize the estimated uncertainty and improve the tracking performance of the model. The similarity-based approach computes a similarity between the estimated uncertainties to serve as an additional similarity score, while the penalization approach focuses on certain features by penalizing the feature values corresponding to high uncertainties. We train a VNN version of PointPillars for 3D Object Detection to serve as backbone for the proposed Variational VPIT (VVPIT) method. We, then, train the whole network following the VPIT's training procedure, but use the uncertainty-aware cross-correlation function and multiple samples of the Variational Feature Generation Network to compute uncertainty in the produced features. In experiments, we show that the use of uncertainty leads to an improvement in the model's tracking performance, and the choice of the penalty-based uncertainty utilization strategy leads to the highest improvement in Success and Precision metrics.
The remainder of the paper is structured as follows. Section II describes related and prior work. In Section III we describe the proposed approach, including the Variational Feature Generation Network and the uncertainty-aware cross-correlation module. Section IV outlines the experimental protocol and provides experimental results. Section V concludes this paper.
## II Related Work
Gawlikowski et al. [19] define four main categories of uncertainty estimation methods, based on the strategies they use to estimate the uncertainty of the model. Deterministic Methods [12, 20] use a single deterministic network and either predict its uncertainty by using an additional regression branch, or estimate it by analyzing the output of the model. Bayesian Neural Networks (BNNs) [21, 22] consider a distribution over weights of the network and compute the outputs of multiple model samples for the same input. The variance in the network's outputs expresses the estimated uncertainty, while the mean of outputs is used as the prediction value. Ensemble Methods [23, 24] consider a categorical distribution over the weights of the network and train multiple models at once. Test-Time Data Augmentation methods [25, 26, 27] apply data augmentations commonly used in the training phase during the inference to pass distorted inputs to a single deterministic network and compute the variance in the model's outputs.
Variational Neural Networks [18, 28] are similar to BNNs, but instead of considering a distribution over weights, they place a Gaussian distribution over the outputs of each layer and estimate its mean and variance values by the corresponding sub-layers. All types of uncertainty estimation methods, except those in the Deterministic Methods category, use multiple model passes to compute the variance in the network's outputs. This means the Deterministic Methods generally have the lowest computational impact on the model, but they usually perform worse than other methods. The single deterministic network approach can be improved by considering the Bayesian alternative, as it can be seen as a case of BNNs with the simple Dirac delta distribution over weights, which places the whole distributional mass on a single weight point.
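As an illustration of the VNN idea described above, the following minimal PyTorch sketch places a Gaussian over a convolutional layer's outputs, with the mean and (log-)variance predicted by separate sub-layers; the class name and structure are our own simplification for illustration, not the implementation of [18, 28].

```python
import torch

class VariationalConv2d(torch.nn.Module):
    """Sketch of a VNN-style layer: a Gaussian is placed over the layer's
    outputs, with mean and variance predicted by two convolutional
    sub-layers. One forward call returns one sample of the output."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.mean = torch.nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.log_var = torch.nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        mu = self.mean(x)
        sigma = torch.exp(0.5 * self.log_var(x))
        return mu + sigma * torch.randn_like(mu)  # reparameterised sample
```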
The 3D SOT task is usually approached by using point-based Siamese networks, which consider a pair of target and search regions, predict a position of the target region inside the search region and compute the object offset relative to the previous object position. P2B [29], BAT [30], Point-Track-Transformer (PTT) [31, 32] and 3D-SiamRPN [4] use point-wise Siamese networks and predict object positions based on the comparison of target and search point clouds. 3D Siam-2D [33] uses one Siamese network in a 2D Bird's-Eye-View (BEV) space to create fast object proposals and another Siamese network in 3D space to select the true object proposal and regress the bounding box. Voxel Pseudo Image Tracking (VPIT) [8] uses voxel pseudo images in BEV space and deploys a SiamFC-like module [5] to extract and compare features from target and search regions. Instead of using different scales, VPIT uses a multi-rotation search to find the correct vertical rotation angle.
Bayesian YOLO [34] is a 2D object detection method that estimates uncertainty by combining Monte Carlo Dropout (MCD) [16] with a deterministic approach and predicts aleatoric uncertainty with a special regression branch, while computing the epistemic uncertainty from the variance in MCD model predictions. Feng et al. [9] use a Lidar-based 3D object detection method and estimate the uncertainty in the predictions of the model in a similar way to Bayesian YOLO, by using a partially MCD model for the epistemic uncertainty estimation and using a separate regression branch for the aleatoric uncertainty estimation. LazerNet [10] predicts the uncertainty of a 3D bounding box using a single deterministic network and utilizes the predicted uncertainty during the non-maximum suppression process. This approach is further improved by estimating the ground truth labels' uncertainty based on the IoU between the 3D bounding box and the convex hull of the enclosed point cloud, and using the provided uncertainties during the training process [11].
Zhong et al. [12] perform 3D Multiple Object Tracking (MOT) by using a single deterministic network for 3D Object Detection to predict the uncertainty in outputs and providing the estimated uncertainties to the tracker by replacing the unit-Gaussian measurement noise in Kalman filter [35] with the predicted uncertainties. Uncertainty-Aware Siamese Tracking (UAST) [36] performs 2D single object tracking by using a single deterministic network and computing the distribution over the outputs by quantizing over the specific range of values and predicting the softmax score for each quantized value. The final regression value is computed as an expectation of the corresponding quantized distribution, and the distributions are used to estimate better confidence scores and select the best box predictions.
To the best of our knowledge, there are no methods that utilize uncertainty for 3D Single Object Tracking. Moreover, the estimation of uncertainty for related tasks, such as 2D Single Object Tracking, 3D Multiple Object Tracking or 3D Object Detection, is based on single deterministic networks or MCD, despite the fact that the statistical quality of single deterministic networks can be improved by using a Bayesian alternative, and that MCD tends to produce the worst quality of uncertainty between BNNs [17].
## III Methodology
Voxel Pseudo Image Tracking (VPIT) uses PointPillars [37] as a backbone to create voxel pseudo images and to process them with a Feature Generation Network (FGN), which consists of the convolutional part of the PointPillars Region Proposal Network. The search and target features are compared with a convolutional cross-correlation function that calculates a pixel-wise similarity map. The highest value in this similarity map is used to determine the object position offset between frames. The structure of VPIT is presented in Fig. 1.
We train a Variational VPIT (VVPIT) by replacing the FGN subnetwork with a Variational Neural Network (VNN) [18, 28] based version of it, i.e., we create a Variational FGN (VFGN). We use multiple samples of the network for each input to compute mean and variance for the output features. The number of samples can be dynamic and is not required to be the same during training and inference. For each of target and search regions, VFGN produces a set of outputs in the form \(Y=\{y_{i},i\in[1,\ldots,P]\}\) which
correspond to the outputs of \(P\) sampled VFGN models, with \(Y^{s}=\{y_{i}^{s},i\in[1,\ldots,P]\}\) corresponding to the search region output set and \(Y^{t}=\{y_{i}^{t},i\in[1,\ldots,P]\}\) to the target region output set. The number of samples \(P\) can be different for each set, but for simplicity, we use the same number of samples for both target and search regions. The mean and variance of the outputs are computed as follows:
\[\begin{split}& y_{m}^{s}=\frac{1}{P}\sum_{i}^{P}y_{i}^{s},\\ & y_{m}^{t}=\frac{1}{P}\sum_{i}^{P}y_{i}^{t},\\ & y_{v}^{s}=\operatorname{diag}\left(\frac{1}{P}\sum_{i}^{P}(y_{i} ^{s}-y_{m}^{s})(y_{i}^{s}-y_{m}^{s})^{T}\right),\\ & y_{v}^{t}=\operatorname{diag}\left(\frac{1}{P}\sum_{i}^{P}(y_{i} ^{t}-y_{m}^{t})(y_{i}^{t}-y_{m}^{t})^{T}\right),\end{split} \tag{1}\]
where \(y_{m}^{s},y_{v}^{s}\) and \(y_{m}^{t},y_{v}^{t}\) are the mean and variance values of search and target output sets, respectively, and \(\operatorname{diag}(\cdot)\) is a function that returns the main diagonal of a matrix. Fig. 2 shows an example of the mean and variance values of features generated by the VFGN for a search region with a car in the center. The background pixels have mostly high certainty, as all sampled models agree on them being irrelevant. The high magnitude features at the top part of the car have the highest uncertainty, as different model samples can disagree on the details in the appearance of the object.
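A minimal PyTorch sketch of Eq. (1) is given below: it stacks \(P\) stochastic passes of the VFGN over the same region and returns the element-wise mean and variance (the diagonal of the sample covariance). The function name is illustrative.

```python
import torch

def sample_statistics(vfgn, region, P=20):
    """Eq. (1): empirical mean and per-element variance of P stochastic
    VFGN passes over the same input region."""
    samples = torch.stack([vfgn(region) for _ in range(P)])  # (P, C, H, W)
    y_m = samples.mean(dim=0)
    y_v = samples.var(dim=0, unbiased=False)  # diagonal covariance only
    return y_m, y_v
```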
The proposed VVPIT method can utilize the predicted uncertainties in different ways. The simplest way is to entirely ignore the uncertainty values and process only the mean outputs with the regular cross-correlation function \(g(a,b)\), defined as a 2D convolution \(\operatorname{conv2D}_{\omega=b}(a)\) with \(\omega\) being the kernel weights. This still leads to a statistically better model which can provide better predictions, but it can be further improved by utilizing the predicted uncertainties in the cross-correlation module. Since most 3D SOT methods compare region features in a similarity manner, we focus on similarity-based approaches to use the uncertainty values, instead of applying distance-based approaches. We propose a double similarity-based process to utilize uncertainty, which treats the mean and variance values as separate feature sets and uses the convolutional similarity function \(g(a,b)\) on both of them independently. The final similarity value \(\hat{g}_{\mathrm{double}}\) is obtained by linearly combining the similarities of the means and variances of the outputs as follows:
\[\hat{g}_{\mathrm{double}}(y_{m}^{s},y_{m}^{t},y_{v}^{s},y_{v}^{t})=g(y_{m}^{s },y_{m}^{t})+\lambda g(y_{v}^{s},y_{v}^{t}), \tag{2}\]
where \(\lambda\) is a variance weight hyperparameter. This approach is based on the idea that positions with similar uncertainties should be prioritized, as there is a high chance of them representing the same object. Humans can also treat uncertainties as separate features. Consider the task of classifying triangle and circle images, where some objects are rounded triangles. Depending on the degree of deformation, people will have different values of aleatoric uncertainty in their predictions, as they will have a harder time classifying rounded triangles as only one of the two classes. If a person is asked to track these objects, the aleatoric uncertainty in predictions may be the only feature needed to distinguish between objects, given that size, thickness and other features are identical. This is achieved by describing the tracked objects as "definitely a circle", "a triangle with some curves", or "in between a circle and a triangle", which leads to a low chance of mixing up these objects during tracking. The same principle can be applied to the Lidar-based 3D SOT task. However, there are many different sources of uncertainty, considering the varying point cloud density, possible occlusions and object rotation. Some parts of the object of interest may have uncertain features, and this uncertainty is likely to be preserved during the tracking process.
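The double-similarity score of Eq. (2) can be sketched as follows, assuming \((C,H,W)\) feature maps where the target features act as the convolution kernel; the default value of the variance weight is an assumption for illustration, and this is a simplified reading of the module rather than the authors' code.

```python
import torch.nn.functional as F

def double_similarity(y_m_s, y_m_t, y_v_s, y_v_t, lam=0.1):
    """Eq. (2): cross-correlation of the means plus lam times the
    cross-correlation of the variances. Inputs are (C, H, W) maps;
    the target acts as the convolution kernel; lam is a hyperparameter."""
    g_mean = F.conv2d(y_m_s.unsqueeze(0), y_m_t.unsqueeze(0))
    g_var = F.conv2d(y_v_s.unsqueeze(0), y_v_t.unsqueeze(0))
    return g_mean + lam * g_var
```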
Fig. 1: Voxel Pseudo Image Tracking structure.

In addition to the above approach, we also define an uncertainty penalization process which places focus on features with higher certainty and penalizes the uncertain feature values. This is achieved by dividing each mean feature value during the convolutional process by the corresponding normalized variance score, as follows:
\[\begin{split}&\forall c,\;v_{n}^{c}(v)=(\rho-1)\frac{v^{c}-\min(v^{c})}{\max(v^{c})-\min(v^{c})}+1,\\ &\forall p_{x},\forall p_{y},\quad\hat{g}_{\mathrm{pen}}(y_{m}^{s},y_{m}^{t},y_{v}^{s},y_{v}^{t})^{p_{x},p_{y}}=\frac{2\,\hat{y}_{m}^{s\;p_{x},p_{y}}\,y_{m}^{t}}{v_{n}(\hat{y}_{v}^{s})^{p_{x},p_{y}}+v_{n}(y_{v}^{t})},\end{split} \tag{3}\]
where the \(v_{n}(v)\) function is used to normalize the variance predictions by the channel-wise minimum and maximum values to be in \([1,\rho]\) range, with a hyperparameter \(\rho\) that defines how much the uncertain predictions are penalized, \(v_{n}^{c}(v)\) implements the normalization procedure for a single channel \(c\). For an input \(j\), \(\hat{j}\) represents the tensor with convolutional patches of \(j\), and \(j^{p_{x},p_{y}}\) corresponds to the values of \(j\) at position \((p_{x},p_{y})\).
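A possible implementation sketch of Eq. (3) using unfolded convolutional patches is given below; since the target acts as the kernel, its normalised variance enters without patch extraction. This is one plausible reading of the equation, with illustrative function names and shapes.

```python
import torch.nn.functional as F

def normalise_variance(v, rho=2.0):
    """Channel-wise min-max normalisation of a variance map into [1, rho]."""
    v_min = v.amin(dim=(-2, -1), keepdim=True)
    v_max = v.amax(dim=(-2, -1), keepdim=True)
    return (rho - 1.0) * (v - v_min) / (v_max - v_min + 1e-12) + 1.0

def penalised_correlation(y_m_s, y_m_t, y_v_s, y_v_t, rho=2.0):
    """One reading of Eq. (3): every search patch is divided element-wise
    by the sum of the normalised search and target variances before being
    correlated with the target features. Inputs are (C, H, W) maps."""
    _, kH, kW = y_m_t.shape
    patches = F.unfold(y_m_s.unsqueeze(0), (kH, kW))          # (1, C*kH*kW, L)
    v_s = F.unfold(normalise_variance(y_v_s, rho).unsqueeze(0), (kH, kW))
    v_t = normalise_variance(y_v_t, rho).reshape(-1, 1)       # target = kernel
    weighted = 2.0 * patches / (v_s + v_t)
    scores = (weighted * y_m_t.reshape(-1, 1)).sum(dim=1)     # (1, L)
    H_out = y_m_s.shape[-2] - kH + 1
    W_out = y_m_s.shape[-1] - kW + 1
    return scores.reshape(1, H_out, W_out)
```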
We follow the VPIT's training protocol and initialize a VVPIT model based on the VNN version of PointPillars for 3D Object Detection. After the initialization, the model is trained with the Binary Cross-Entropy (BCE) loss between the ground truth and the predicted score maps. Multiple VFGN samples are used during both training and inference to compute the mean and the variance in the target and search region features, which are later combined by using an uncertainty-aware cross-correlation module using one of the processes described above.
## IV Experiments
We use the KITTI [38] tracking dataset to train and test the models. Following the standard protocol, we use the KITTI tracking training subset for both training and testing, as the test subset does not provide the initial ground truth positions. The tracks \([0,\dots,18]\) are used for training and validation, and tracks \(19\) and \(20\) are used to test the trained models. Model performance is computed using the Precision and Success [39] metrics, which are based on the distance between the predicted and ground truth object centers and on the 3D Intersection over Union, respectively. VPIT uses a pre-trained PointPillars network to initialize its pseudo image generation and FGN modules. To follow the same procedure, we train a VNN version of PointPillars on the KITTI [38] detection dataset, use it to initialize the VPIT model and train the corresponding model for \(64,000\) steps with different numbers of training VFGN samples per step in the \([1,\dots,20]\) range.
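For reference, a hedged sketch of the one-pass Success/Precision evaluation [39] is given below; the threshold ranges and sampling densities are assumptions chosen for illustration rather than the exact benchmark configuration.

```python
import numpy as np

def one_pass_metrics(ious, center_dists):
    """Sketch of the Success/Precision evaluation [39]: Success is the AUC
    of the fraction of frames with 3D IoU above a threshold swept over
    [0, 1]; Precision is the AUC of the fraction of frames with centre
    distance below a threshold swept over [0, 2] metres (assumed ranges)."""
    ious = np.asarray(ious)
    center_dists = np.asarray(center_dists)
    iou_thresholds = np.linspace(0.0, 1.0, 21)
    dist_thresholds = np.linspace(0.0, 2.0, 21)
    success = np.mean([(ious > t).mean() for t in iou_thresholds]) * 100
    precision = np.mean([(center_dists < t).mean() for t in dist_thresholds]) * 100
    return success, precision
```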
Table I contains the evaluation results of regular VPIT and the Variational VPIT (VVPIT) models with different ways to utilize the predicted uncertainty. We report the best-performing models for each uncertainty utilization process, which are obtained by using \(20\) samples of the VFGN module. By computing the average of predictions and discarding the variances, VVPIT achieves higher tracking performance compared to the VPIT model. By utilizing uncertainties, the Success and Precision values are further improved. Both double similarity and uncertainty penalization processes lead to better models, but the penalization process leads to a better tracking performance.
## V Conclusions
In this paper, we proposed a method to utilize uncertainty in 3D Single Object Tracking which uses a Variational Neural Network (VNN) based version of the VPIT 3D Single Object Tracking method to estimate uncertainty in target and search features and combines these features with an uncertainty-aware cross-correlation module. We proposed two ways to utilize uncertainty in cross-correlation, i.e., by double similarity which adds a similarity in uncertainties to the regular cross-correlation, and by uncertainty penalization which penalizes uncertain features to shift focus to the more reliable feature channels. Additionally, we tested the model's performance without exploiting the estimated uncertainties, as it still leads to a statistically better model compared to regular VPIT. The use of VNNs improves the tracking performance of VPIT in all cases, with the uncertainty penalization leading to the best Success and Precision values.
## Acknowledgement
This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 871449 (OpenDR). This publication reflects the authors' views only. The European Commission is not responsible for any use that may be made of the information it contains.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
**Method** & **Uncertainty** & **Success** & **Precision** \\ \hline VPIT & - & 50.49 & 64.53 \\ VVPIT & averaging & 51.97 & 66.69 \\ VVPIT & double similarity & 52.62 & 66.56 \\ VVPIT & uncertainty penalization & **53.30** & **67.79** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Precision and Success values on the KITTI single object tracking experiments for VPIT and Variational VPIT (VVPIT) models.
Fig. 2: An example of (a) mean, (b) variance and (c) combined mean and certainty features of a search region with a car in the center. Lighter color in the mean and variance images corresponds to higher values. The red color channel represents the certainty in the corresponding pixel values, and the blue color channel represents the mean feature values. The purple color indicates that the feature values and the certainty in those values are equally high, while blue pixels signal features with high values and low certainty. |
2307.11167 | Magnetic properties and structural phase transition in ultrathin fcc Fe
(111) and bcc Fe (111) films: first-principles study | The aim of this work is to investigate the structural and magnetic
characteristics of Fe thin films with a triangular (hexagonal) lattice surfaces
(fcc (111) and bcc (111)). The properties of these structures have been
calculated using density functional theory (DFT) implemented in the
full-potential local-orbital(FPLO) code. The results indicate a structural
phase transition from fcc to bcc structure when the film thickness exceeds 23
Fe atomic monolayers. The considered fcc films prefer the low-spin
ferromagnetic state with an average magnetic moment of about 1.0 $\mu_{B}$ per
atom. This moment decreases with increasing film thickness until the critical
thickness, where, after the structural transition to the bcc phase, it reaches
a value close to that of bulk bcc Fe. Moreover, the values of the magnetic
anisotropy energy are positive (perpendicular magnetic anisotropy) for the
entire thickness range of films with fcc structure (in ferromagnetic low-spin
state) and systematically decrease with increasing film thickness. The
presented computational results explain the experimentally observed structural
transition and may help to select appropriate substrates with suitable lattice
parameters for the deposition of ultrathin Fe(111) films. | Jakub Meixner, Justyna Rychły-Gruszecka, Mirosław Werwiński | 2023-07-20T18:02:12Z | http://arxiv.org/abs/2307.11167v3 | Magnetic properties and structural phase transition in ultrathin fcc Fe (111) and bcc Fe (111) films: first-principles study
###### Abstract
The aim of this work is the investigation of the structural and magnetic characteristics of Fe thin films with a triangular (hexagonal) lattice surfaces (fcc (111) and bcc (111)). Properties of these structures have been calculated using density functional theory (DFT) implemented in full-potential local-orbital code (FPLO). The results indicate structural phase transition from fcc to bcc structure at film thickness of about 21 Fe atomic monolayers. Furthermore, findings show the positive magnetic anisotropy energy (MAE) values occur for below about 9 Fe monolayers indicating perpendicular magnetic anisotropy, and for larger number of Fe monolayers MAE switches to very small negative values. Calculated spin magnetic moments initially decrease with film thickness. However, at a thickness of structural phase transition (21 atomic monolayers) we observe their sudden increase. The presented computational results explain the experimentally observed structural transition and can help select proper substrates with trigonal (hexagonal) lattice surfaces and suitable lattice parameters for the deposition of ultrathin Fe (111) films.
## 1 Introduction
Iron is a transition metal important in many industrial and technological applications. Under normal conditions, bulk Fe crystallizes in body-centered cubic (bcc) lattice. The face-centered cubic (fcc) crystal structure of iron, see Fig. 1, is observed in ultrathin films and in bulk samples under certain pressure and thermal conditions (e.g. above 912\({}^{\circ}\)C at atmospheric pressure) [1].
The stability of the fcc Fe film is sensitive to the type of surface on which it is deposited [2]. Fe films are mainly deposited on substrates with square- or triangular-lattice surfaces, leading to different types of epitaxial growth. Fe (111) layers with a triangular lattice surface describe bcc or fcc structures depending on the lattice parameter \(a\), see Fig. 2. Ultrathin Fe films can be grown, among others, on substrates with square-lattice (e.g., fcc (001) and bcc (001)) or hexagonal/triangular-lattice (e.g., fcc (111), bcc (111), and hcp (0001)) surfaces [3; 4; 5; 6; 7]. Ultrathin Fe films growing on a substrate with a hexagonal-lattice surface, such as Au (111), exhibit an fcc structure, and an increase in Fe film thickness leads to a transformation to a bcc structure [8]. Depending on the type of surface on which the Fe film grows, the critical film thickness at which the transition to the bcc phase is observed ranges from about 10 to 20 atomic monolayers [3; 4; 5; 9]. Since ultrathin Fe films are always deposited on a substrate, the interpretation of the properties of such a film usually also refers to the properties of the substrate. In particular, the fcc phase stabilization of ultrathin Fe (111) films is customarily interpreted as a substrate effect.
Ultrathin Fe (111) films have also been considered by theoreticians using ab initio calculations [10; 11; 12; 13]. Although Fe (111) films with hexagonal-lattice surfaces can occur in fcc [10; 11] and bcc [12; 13] structures, the authors of the computational works consider an arbitrarily chosen case. Wu and Freeman's 1993 paper [12] on bcc (111) Fe layers is a pioneering work considering films up to 13 atomic monolayers thick. Subsequent work has focused on small clusters (up to 10 atoms) of Fe [14], a hexagonal Fe monolayer on a substrate [15], carbon-vapor reactions on the fcc-Fe(111) surface [10], Fe interfaces with h-WC [11], Al\({}_{2}\)O\({}_{3}\) [16], TiB\({}_{2}\) [17] and FeWB [13], as well as the Fe-Fe interface fcc Fe (111)/hcp Fe (0001) [18], among others.

Figure 1: Unit cells of considered Fe structures with marked (111) planes. (a) Face centered cubic (fcc) unit cell, \(a_{bulk.fcc}=3.65\) Å. (b) Body centered cubic (bcc) unit cell, \(a_{bulk.bcc}=2.82\) Å.
This article focuses on the magnetic properties of ultrathin iron films with the trigonal symmetry space group _P_-\(3m1\). We consider films with a triangular lattice on the surface, which are known to form an fcc structure and to be stable for thicknesses of a few monolayers. In the limit of large thicknesses the stable structure is bcc, so we consider both within a consistent model. We determine at what film thickness a structural transition from fcc to bcc can be expected. Since in the present work we do not assume the presence of a substrate, the results obtained will shed light on the intrinsic properties of the free-standing Fe film.
## 2 Computational details
In this study, we examined the structural and magnetic properties of iron ultrathin films with a trigonal-lattice surface, ranging from 1 to 25 atomic monolayers (odd numbers of atomic monolayers only), see Fig. 2, using density functional theory (DFT). The calculations were performed with the full-potential local-orbital (FPLO-21) code [19; 20]. The exchange-correlation functional in the Perdew-Burke-Ernzerhof (PBE) parametrization was used [21]. The Brillouin zone was initially sampled using a \(20\times 20\times 1\) \(k\)-point mesh to optimize the geometry [22]. In each of the considered systems, Wyckoff positions were optimized using a scalar-relativistic approach with spin polarization. To find the optimized lattice parameter \(a\), the geometry optimization data were interpolated; at the resulting equilibrium minima the structures were then recalculated without forces using a denser \(k\)-point mesh of \(80\times 80\times 4\) with the tetrahedron method [23]. A single step of fully-relativistic calculations was performed for each structure with the magnetization along three directions to calculate the magnetic anisotropy energy (MAE).
The MAE values were obtained by comparing the total energy of the system for three specific magnetization directions: one normal to the surface and two in-plane. The self-consistent scalar-relativistic calculations were converged to a tolerance of \(10^{-8}\) eV for the total energy. In each of the systems the structures were fully optimized until the forces on the atoms were less than 0.01 eV Å\({}^{-1}\). To remove the bulk periodicity in the indicated direction and to model the thin film, a vacuum space of at least 15 Å was added. All crystal structures were visualized using the VESTA program [24].
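To illustrate the interpolation step used above to locate the equilibrium lattice parameter, the following sketch fits a parabola to a coarse grid of total energies \(E(a)\) and reads off the equilibrium value from its vertex; the numerical values are dummies chosen for illustration, not our DFT data.

```python
import numpy as np

# Dummy E(a) data on a coarse grid around the fcc minimum.
a = np.array([2.45, 2.50, 2.55, 2.60, 2.65, 2.70])        # lattice parameter (Å)
E = np.array([-0.02, -0.11, -0.16, -0.17, -0.14, -0.08])  # relative energy (eV)

# Quadratic fit; the vertex of the parabola gives the equilibrium a.
c2, c1, c0 = np.polyfit(a, E, 2)
a_eq = -c1 / (2.0 * c2)
print(f"equilibrium a ≈ {a_eq:.3f} Å")
```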
## 3 Results and discussion
In our study of the iron films with triangular (111) surfaces, we conducted a thorough analysis of the energy landscape using DFT methods. Our investigation revealed the presence of two distinct minima for the parameter \(a\) of the trigonal lattice of the film surface, indicating the existence of two stable configurations for the Fe (111) thin films, see Fig. 3. These two energy minima occur at 2.58 Å and 4.00 Å. In order to identify which lattices and surface types are characterized by these lattice parameters, we look at the lattice parameters previously obtained for bulk Fe crystals. Calculations by Jain _et al._ [25] for bulk fcc Fe gave a lattice parameter value of \(a_{bulk.fcc}=3.65\) Å. The lattice parameter \(a\) for fcc Fe (111) films is the length of a cathetus of a right triangle with a hypotenuse of length \(a_{bulk.fcc}\), see Fig. 1. Therefore, the value of \(a_{layer.fcc}\) for an fcc Fe (111) layer taken from bulk fcc Fe would be \(a_{bulk.fcc}/\sqrt{2}\), which is 2.58 Å, based on the above value. This value agrees very well with the lower minimum of the lattice parameter in our results for Fe thin films.
Furthermore, if we cut a (111) film from bcc Fe, we obtain a structure similar to our fcc (111) film, with a lattice defined in the plane by the parameter \(a\) and an angle of 120 degrees (60 degrees). Experimental data indicate that the lattice parameter of bulk bcc Fe is \(a\) = 2.87 Å and theoretical calculations give \(a\) = 2.82 Å [26]. Therefore,
Figure 2: Crystal structure models of 13 atomic monolayer thick Fe films with triangular lattice surfaces, space group _P_-\(3m1\), as obtained after geometry optimization. (a) Unit cell of film with \(a\approx 2.6\) Å (fcc (111)). (b) Multiplication of the fcc Fe (111) unit cell. (c) Unit cell of film with \(a\approx 4.0\) Å (bcc (111)). (d) Multiplication of the bcc Fe (111) unit cell.
if we take the calculated value of the lattice parameter \(a\) of bulk bcc Fe to determine the value of the lattice parameter for a bcc Fe (111) film, we have \(a_{bulk.bcc}\times\sqrt{2}=2.82\) Å \(\times\sqrt{2}\approx 4.00\) Å, which is exactly the value we see for the second minimum in the energy dependence of the lattice parameter. It is therefore justified to conclude that the lower minimum corresponds to the fcc lattice and the higher one to the bcc lattice. We observe that as the number of atomic monolayers increases above 21, lower energies begin to be found for systems with a bcc (111) structure, which we identify as a structural phase transition from an fcc to a bcc structure. The direction of the transition towards a bcc lattice is intuitively consistent with the bcc structure observed for bulk Fe (large-thickness limit). The thickness of the 21 atomic monolayers, at which the structural phase transition is observed, is 43 Å (4.3 nm) for the fcc Fe (111) layer and 16.3 Å (1.63 nm) for the post-transition stable bcc Fe (111) layer. This implies a significant change in the preferred geometry of the film in the vicinity of the structural transition, which affects the structural properties of ultrathin Fe (111) films with thicknesses in the range from about 1 to 5 nm, and which further depends strongly on the lattice parameters of the substrate surface. A similar span of critical thicknesses for the transition from the fcc to the bcc phase in Fe films is observed experimentally, ranging from about 10 to 20 atomic monolayers [3; 4; 5].
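The geometric relations used above can be checked with a few lines of arithmetic; this is a minimal sketch using the calculated bulk lattice parameters quoted above.

```python
import math

# (111) surface lattice relations for the two bulk Fe phases.
a_bulk_fcc = 3.65  # Å, calculated bulk fcc Fe [25]
a_bulk_bcc = 2.82  # Å, calculated bulk bcc Fe [26]

a_layer_fcc = a_bulk_fcc / math.sqrt(2)   # ≈ 2.58 Å, first energy minimum
a_layer_bcc = a_bulk_bcc * math.sqrt(2)   # ≈ 4.00 Å, second energy minimum
print(f"fcc(111): {a_layer_fcc:.2f} Å, bcc(111): {a_layer_bcc:.2f} Å")
```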
The plot of the lattice parameter \(a\), see Fig. 3, shows no significant change with increasing film thickness until the transition point at 21 monolayers, where it jumps from about 2.6 to 4.0 Å. At this point, we define the parameter \(c\) as the shortest length in the Cartesian direction \(z\) between periodically occurring atoms in the cell. The lattice parameter \(c\) defined in this way spans three atomic monolayers, see Fig. 2. With increasing film thickness, the parameter \(c\) remains approximately constant around 6.4 Å, while at around 21 Fe monolayers it drops significantly to about 2.6 Å. In experiments with fcc Fe deposited on an Au (111) substrate, ultrathin fcc Fe films grow with the pseudomorphic lattice parameter \(a\) of Au (111) equal to 2.88 Å and a lattice parameter \(c\) equal to 6.27 Å (3\(\times\) 2.09 Å) [8]. Although the values of the lattice parameters obtained experimentally are very close to those determined theoretically by us, we see that it would be possible to match the fcc Fe layer to the substrate even better if the lattice parameter of the substrate were chosen even closer to the 2.6 Å value. Since calculations show that free-standing ultrathin fcc Fe layers of the lowest thicknesses are more stable than bcc layers, we think that it might be possible to experimentally fabricate fcc Fe films with thicknesses of a few atomic monolayers on less favourable substrates, such as graphene.
Changes in the lattice parameters near the structural transition are also reflected in the dependence of the volume on the number of atomic monolayers in the film. In Fig. 3 we can see that in the vicinity of the phase transition, the volume per atom decreases in a step from a value close to that of bulk fcc Fe to that of bulk bcc Fe.
Furthermore, we calculated the average spin magnetic moments of the considered films, see Fig. 4. For all films the average spin magnetic moment is observed to be higher than for bcc Fe (2.17 \(\mu_{B}\)) [27]. The highest spin magnetic moment is found for the thinnest layers and it decreases with increasing thickness. In addition, we observe a jump to higher values at the fcc-bcc structural transition. The elevated average spin magnetic moment values result from relatively high surface contributions, whose proportion increases with decreasing film thickness. The surface of the layer contributes significantly to the mean spin magnetic moment of an atom, and the moments are higher at the surface; for example, the spin magnetic moment of the optimized 13-monolayer fcc Fe film is around \(2.9\,\mu_{B}\) at the surface, while in the centre it is around \(2.1\,\mu_{B}\). The average values of the spin magnetic moments as a function of the lattice parameter \(a\) and the number of monolayers are plotted on the map \(m_{s}(a,n)\), see Fig. 4. The map reveals the presence of two distinct magnetic regions separated at about \(a=3.3\) Å. Furthermore, the results show that positive magnetic anisotropy energy (MAE) values, see Fig. 4, occur below about 9 Fe monolayers, while for larger numbers of Fe monolayers the MAE switches to very small negative values, indicating a weak in-plane magnetic anisotropy.

Figure 3: Structural properties of Fe (111) ultrathin films as a function of the number of Fe monolayers and the parameter \(a\) of the trigonal-lattice surface. (a) Energy of the system as a function of the number of atomic monolayers and the lattice parameter \(a\). Total energies of Fe slabs with \(n\) monolayers (\(E^{n}\)) shifted to the lowest energy of each slab (\(E^{n}_{0}\)); presented on a logarithmic scale. (b) Energy of the Fe thin film for several thicknesses around the structural phase transition as a function of the lattice parameter \(a\), shifted to the lowest energy of the thickest slab (\(E^{25}_{0}\)). (c) Lattice parameters \(a\) and \(c\) calculated in the equilibrium state. (d) Unit cell volumes compared with the values for bulk fcc and bcc Fe. The DFT calculations were performed with the full-potential local-orbital (FPLO-21) code [19; 20]. The exchange-correlation functional in the Perdew-Burke-Ernzerhof (PBE) parametrization was used [21].
## 4 Summary and conclusions
This work investigates the structural phase transition and magnetic properties of ultrathin Fe films with a trigonal-lattice surface, focusing in particular on fcc and bcc Fe (111) thin films. The energy landscape of these structures was found to exhibit energy minima for two distinct values of the in-plane lattice parameter. The minimum at around 2.6 Å is identified as fcc Fe (111) and that at around 4.0 Å as bcc Fe (111). We have shown that in ultrathin Fe films with a trigonal-lattice surface, a structural phase transition from fcc to bcc Fe occurs at a thickness of about 21 atomic monolayers. At the structural transition, we observe step changes in the values of the lattice parameters and the film volume. The results are in agreement with the experimentally observed fcc-bcc structural transition in ultrathin Fe films. However, our calculations show that the more stable fcc structure observed for the smallest Fe (111) thicknesses is not the result of stabilization by the substrate, but is the ground state of the free-standing Fe film. The average spin magnetic moments are significantly elevated for the ultrathin films relative to the bulk materials. This increase comes mainly from contributions of the near-surface monolayers, whose proportion increases with decreasing film thickness. Moreover, at the fcc-bcc structural transition we observe a clear jump in the magnetic moment with the change in film thickness. Furthermore, in ultrathin Fe (111) films we noticed a change in the magnetization direction from perpendicular to in-plane at a thickness of about 1.2 nm.
## Acknowledgments
We acknowledge the financial support of the National Science Center Poland under the decision DEC-2018/30/E/ST3/00267 (SONATA-BIS 8). Part of the computations were performed on resources provided by the Poznan Supercomputing and Networking Center (PSNC). We thank Justyn Snarski-Adamski and Igor Di Marco for valuable comments and Pawel Lesniak and Daniel Depcik for compiling the scientific software and administration of the computing cluster at the Institute of Molecular Physics, Polish Academy of Sciences.
|
2304.08166 | Flow-preserving ZX-calculus Rewrite Rules for Optimisation and
Obfuscation | In the one-way model of measurement-based quantum computation (MBQC),
computation proceeds via measurements on a resource state. So-called flow
conditions ensure that the overall computation is deterministic in a suitable
sense, with Pauli flow being the most general of these. Computations,
represented as measurement patterns, may be rewritten to optimise resource use
and for other purposes. Such rewrites need to preserve the existence of flow to
ensure the new pattern can still be implemented deterministically. The majority
of existing work in this area has focused on rewrites that reduce the number of
qubits, yet it can be beneficial to increase the number of qubits for certain
kinds of optimisation, as well as for obfuscation.
In this work, we introduce several ZX-calculus rewrite rules that increase
the number of qubits and preserve the existence of Pauli flow. These rules can
be used to transform any measurement pattern into a pattern containing only
(general or Pauli) measurements within the XY-plane. We also give the first
flow-preserving rewrite rule that allows measurement angles to be changed
arbitrarily, and use this to prove that the `neighbour unfusion' rule of
Staudacher et al. preserves the existence of Pauli flow. This implies it may be
possible to reduce the runtime of their two-qubit-gate optimisation procedure
by removing the need to regularly run the costly gflow-finding algorithm. | Tommy McElvanney, Miriam Backens | 2023-04-17T11:28:47Z | http://arxiv.org/abs/2304.08166v2 | # Flow-preserving ZX-calculus rewrite rules for optimisation and obfuscation
###### Abstract
In the one-way model of measurement-based quantum computation (MBQC), computation proceeds via measurements on a resource state. So-called flow conditions ensure that the overall computation is deterministic in a suitable sense, with Pauli flow being the most general of these. Computations, represented as measurement patterns, may be rewritten to optimise resource use and for other purposes. Such rewrites need to preserve the existence of flow to ensure the new pattern can still be implemented deterministically. The majority of existing work in this area has focused on rewrites that reduce the number of qubits, yet it can be beneficial to increase the number of qubits for certain kinds of optimisation, as well as for obfuscation.
In this work, we introduce several ZX-calculus rewrite rules that increase the number of qubits and preserve the existence of Pauli flow. These rules can be used to transform any measurement pattern into a pattern containing only (general or Pauli) measurements within the XY-plane. We also give the first flow-preserving rewrite rule that allows measurement angles to be changed arbitrarily, and use this to prove that the 'neighbour unfusion' rule of Staudacher et al. preserves the existence of Pauli flow. This implies it may be possible to reduce the runtime of their two-qubit-gate optimisation procedure by removing the need to regularly run the costly gflow-finding algorithm.
## 1 Introduction
The ZX-calculus is a graphical language for representing and reasoning about quantum computations. It allows us to conveniently represent computations in both the quantum circuit model and the one-way model of measurement-based quantum computation (MBQC), as well as to translate between the two. The ZX-calculus has various complete sets of rewrite rules, meaning that any two diagrams representing the same linear map can be transformed into each other entirely graphically [1, 14, 20], and it provides tools for optimisation [12, 24], obfuscation [7] and other areas of research in quantum computing.
The one-way model of MBQC involves the implementation of quantum computations by performing successive adaptive single-qubit measurements on a resource state [21], largely without using any unitary operations. This contrasts with the more commonly-used circuit model and has applications in server-client scenarios as well as for certain quantum error-correcting codes.
An MBQC computation is given as a _pattern_, which specifies the resource state - usually a graph state - and a sequence of measurements of certain types [11]. As measurements are non-deterministic, future measurements need to be adapted depending on the outcomes of past measurements to obtain an overall deterministic computation. Yet not every pattern can be implemented deterministically. Sufficient (and in some cases necessary) criteria for determinism are given by the different kinds of _flow_, which define a partial order on the measured qubits and give instructions for how to adapt the future computation if a measurement yields the undesired outcome [6, 10] (cf. Section 2.3).
In addition to the applications mentioned above, the flexible structure of MBQC patterns is also useful as a theoretical tool. For example, translations between circuits and MBQC patterns have been used to trade off circuit depth versus qubit number [5] or to reduce the number of \(T\)-gates in a Clifford+T circuit [16]. When translating an MBQC pattern (back) into a circuit, it is important that the pattern still have flow, as circuit extraction algorithms rely on flow [3, 10, 12, 19].
ZX-calculus diagrams directly corresponding to MBQC patterns are said to be in MBQC-form. Many of the standard ZX-calculus rewrite rules preserve neither the MBQC-form structure nor the existence of a flow, which we often want to preserve; thus circuit optimisation using MBQC and the ZX-calculus relies on rewrite rules proven to preserve both the MBQC form and flow [3, 12]. Much of the previous work on this has focused on rewrite rules that maintain or reduce the number of qubits, which find direct application in T-count optimisation [12]. Nevertheless, it is sometimes desirable to increase the number of qubits in an MBQC pattern while preserving the existence of flow, for example for more involved optimisation strategies [23] or for obfuscation.
In this work we introduce several ZX-calculus rewrite rules that preserve the MBQC-form structure as well as Pauli flow [6], alongside proofs of this preservation. These rules have various applications, such as being used in obfuscation techniques for blind quantum computation [7]. Notably, we introduce the first Pauli flow preserving rewrite rule that allows us to change measurement angles arbitrarily, with all previous rules only allowing for changes that are integer multiples of \(\frac{\pi}{2}\). Using this, we prove that the 'neighbour unfusion' rule of [24] always preserves the existence of Pauli flow. Additionally, we show that neighbour unfusion only preserves gflow [6], a less general flow condition, if and only if one of the two neighbours is in the correction set of the other.
## 2 Preliminaries
In this section, we give an overview of the ZX-calculus and then use it to introduce measurement-based quantum computing. We discuss the notion of flow that will be used in this paper and some existing rewrite rules which preserve the existence of this flow.
### The ZX-calculus
The ZX-calculus is a diagrammatic language for reasoning about quantum computations. We will provide a short introduction here; for a more thorough overview, see [8, 25].
A ZX-diagram consists of _spiders_ and _wires_. Diagrams are read from left to right: wires entering a diagram from the left are inputs while wires exiting the diagram on the right are outputs, like in the quantum circuit model. ZX-diagrams compose in two distinct ways: _horizontal composition_, which involves connecting the output wires of one diagram to the input wires of another, and _vertical composition_ (or the tensor product), which just involves drawing one diagram vertically above the other. The linear map corresponding to a ZX-diagram \(D\) is denoted by \(\llbracket D\rrbracket\).
ZX-diagrams are generated by two families of spiders which may have any number of inputs or outputs, corresponding to the Z and X bases respectively. \(Z\)-spiders are drawn as green dots and \(X\)-spiders as red dots; with \(m\) inputs, \(n\) outputs, and using \((\cdot)^{\otimes k}\) to denote a \(k\)-fold tensor power, we have:
\[\llbracket Z\text{-spider}\rrbracket=\left|0\right\rangle^{\otimes n}\left\langle 0\right|^{\otimes m}+e^{i\alpha}\left|1\right\rangle^{\otimes n}\left\langle 1\right|^{\otimes m}\qquad\qquad\llbracket X\text{-spider}\rrbracket=\left|+\right\rangle^{\otimes n}\left\langle+\right|^{\otimes m}+e^{i\alpha}\left|-\right\rangle^{\otimes n}\left\langle-\right|^{\otimes m}\]
Spiders with exactly one input and one output are unitary; in particular, the one-input, one-output \(Z\)- and \(X\)-spiders with phase \(\alpha\) implement \(Z\)- and \(X\)-phase rotations, respectively.
**Definition 2.2**.: [3, Definition 2.18] A ZX-diagram is in _MBQC-form_ if it consists of a graph state diagram in which each vertex of the graph may furthermore be connected to an input (in addition to its output), and a measurement effect instead of its output.
MBQC restricts the allowed single-qubit measurements to three planes of the Bloch sphere: those spanned by the eigenstates of two Pauli matrices, called the XY, YZ and XZ planes. Each time a qubit \(u\) is measured in a plane \(\lambda(u)\) at an angle \(\alpha\), one may obtain either the desired outcome, denoted \(\langle+_{\lambda(u),\alpha}|\), or the undesired outcome \(\langle-_{\lambda(u),\alpha}|=\langle+_{\lambda(u),\alpha+\pi}|\). Measurements where the angle is an integer multiple of \(\frac{\pi}{2}\) are Pauli measurements; the corresponding measurement type is denoted by simply \(X\), \(Y\), or \(Z\). The ZX-diagram corresponding to each (desired) measurement outcome is given in Table 1. The structure of an MBQC protocol is formalised as follows.
**Definition 2.3**.: A _labelled open graph_ is a tuple \(\Gamma=(G,I,O,\lambda)\), where \(G=(V,E)\) is a simple undirected graph, \(I\subseteq V\) is a set of input vertices, \(O\subseteq V\) is a set of output vertices, and \(\lambda:V\setminus O\rightarrow\{X,Y,Z,XY,XZ,YZ\}\) assigns a measurement plane or Pauli measurement to each non-output vertex.
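As a data structure, Definition 2.3 can be transcribed directly; the following Python sketch (names are ours, chosen for illustration) is also convenient for checking flow conditions computationally.

```python
from dataclasses import dataclass

@dataclass
class LabelledOpenGraph:
    """Definition 2.3 as a plain data structure (illustrative naming)."""
    vertices: set
    edges: set      # undirected edges stored as frozensets {u, v}
    inputs: set
    outputs: set
    plane: dict     # v -> "X","Y","Z","XY","XZ" or "YZ" for each non-output v

    def neighbours(self, v):
        return {u for e in self.edges if v in e for u in e if u != v}

# Example: a 2-vertex graph with one input, one output, XY measurement on 0.
g = LabelledOpenGraph({0, 1}, {frozenset({0, 1})}, {0}, {1}, {0: "XY"})
assert g.neighbours(0) == {1}
```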
### Pauli flow
Measurement-based computations are inherently probabilistic because measurements are probabilistic. Computations can be made deterministic overall (up to Pauli corrections on the outputs) by tracking which measurements result in undesired outcomes and then correcting for these by adapting future measurements. A sufficient (and in some cases necessary) condition for this to be possible on a given labelled open graph is _Pauli flow_. In the following, \(\mathcal{P}(A)\) denotes the powerset of a set \(A\).
**Definition 2.4** ([6, Definition 5]).: A labelled open graph \((G,I,O,\lambda)\) has Pauli flow if there exists a map \(p:V\setminus O\rightarrow\mathcal{P}(V\setminus I)\) and a partial order \(\prec\) over V such that for all \(u\in V\setminus O\),
1. if \(v\in p(u)\), \(v\neq u\) and \(\lambda(v)\not\in\{X,Y\}\), then \(u\prec v\).
2. if \(v\in\mathrm{Odd}_{G}(p(u))\), \(v\neq u\) and \(\lambda(v)\not\in\{Y,Z\}\), then \(u\prec v\).
3. if \(\neg(u\prec v)\), \(u\neq v\) and \(\lambda(v)=Y\), then \(v\in p(u)\Longleftrightarrow v\in\mathrm{Odd}_{G}(p(u))\).
4. if \(\lambda(u)=XY\), then \(u\not\in p(u)\) and \(u\in\mathrm{Odd}_{G}(p(u))\).
5. if \(\lambda(u)=XZ\), then \(u\in p(u)\) and \(u\in\mathrm{Odd}_{G}(p(u))\).
6. if \(\lambda(u)=YZ\), then \(u\in p(u)\) and \(u\not\in\mathrm{Odd}_{G}(p(u))\).
7. if \(\lambda(u)=X\), then \(u\in\mathrm{Odd}_{G}(p(u))\).
8. if \(\lambda(u)=Z\), then \(u\in p(u)\).
9. if \(\lambda(u)=Y\) then either \(u\in p(u)\) and \(u\not\in\mathrm{Odd}_{G}(p(u))\) or \(u\not\in p(u)\) and \(u\in\mathrm{Odd}_{G}(p(u))\).
Here, the partial order is related to the time order in which the qubits need to be measured. The set \(p(u)\) denotes the qubits that are modified by a Pauli-\(X\) correction to compensate for an undesired measurement outcome on \(u\), while \(\mathrm{Odd}_{G}(p(u))\) denotes the set of vertices that are modified by Pauli-\(Z\).
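For concreteness, the odd neighbourhood and one of the flow conditions can be sketched in a few lines of Python (using networkx; the example graph and correction set are ours, chosen only to illustrate condition 4 of Definition 2.4).

```python
import networkx as nx

def odd_neighbourhood(G, K):
    """Odd_G(K): vertices with an odd number of neighbours in K."""
    return {v for v in G.nodes if len(set(G[v]) & set(K)) % 2 == 1}

# Condition 4 of Definition 2.4 for an XY-measured vertex u:
# u must not be in p(u), but u must be in Odd_G(p(u)).
G = nx.path_graph(3)          # vertices 0-1-2; measure u = 0, output is 2
u, p_u = 0, {1}
assert u not in p_u and u in odd_neighbourhood(G, p_u)
```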
Table 1: ZX-diagram representations of the (desired) measurement effects \(\langle+_{\lambda(u),\alpha}|\) for each measurement plane and Pauli measurement.
Pauli flow is a sufficient condition for strong, stepwise and uniform determinism: this means all branches of the computation should implement the same linear operator up to a phase, any interval of the computation should be deterministic on its own, and the computation should be deterministic for all choices of measurement angles that satisfy \(\lambda\)[7, p. 5]. Pauli flow (and related flow conditions) are particularly interesting from a ZX-calculus perspective as there are polynomial-time algorithms for extracting circuits from MBQC-form ZX-diagrams with flow [4, 13, 23], while circuit extraction from general ZX-diagrams is #P-hard [5].
There are certain Pauli flows where the side effects of any correction are particularly limited; these are called _focused_, and they exist whenever a labelled open graph has Pauli flow.
**Definition 2.5** (rephrased from [23, Definition 4.3]).: Suppose the labelled open graph \((G,I,O,\lambda)\) has a Pauli flow \((p,\prec)\). Define \(S_{u}=V\setminus(O\cup\{u\})\) for all \(u\in V\). Then \((p,\prec)\) is _focused_ if for all \(u\in V\setminus O\):
* Any \(v\in S_{u}\cap p(u)\) satisfies \(\lambda(v)\in\{\mathrm{XY},X,Y\}\).
* Any \(v\in S_{u}\cap\mathsf{Odd}\,(p(u))\) satisfies \(\lambda(v)\in\{\mathrm{XZ},\mathrm{YZ},Y,Z\}\).
* Any \(v\in S_{u}\) such that \(\lambda(v)=Y\) satisfies \(v\in p(u)\) if and only if \(v\in\mathsf{Odd}\,(p(u))\).
**Lemma 2.6** ([23, Lemma 4.6]).: _If a labelled open graph has Pauli flow, then it has a focused Pauli flow._
### Existing flow-preserving rewrite rules
The basic ZX-calculus rewrite rules in Figure 1 do not generally preserve even the MBQC-form structure of a ZX-calculus diagram. Yet there are some more complex derived rewrite rules that are known to preserve both the MBQC-form structure and the existence of a flow. These rules were previously considered in the context of gflow [13] and extended gflow [4]; the Pauli-flow preservation proofs are due to [23]. The simplest of these rules are \(Z\)-deletion and \(Z\)-insertion:
**Lemma 2.7** ([23, Lemma D.6]).: _Deleting a \(Z\)-measured vertex preserves the existence of Pauli flow._
**Lemma 2.8** ([18, Proposition 4.1]).: _Inserting a \(Z\)-measured vertex (i.e. the inverse of \(Z\)-deletion) also preserves the existence of Pauli flow._
Other rewrite rules are based around quantum generalisations of two graph-theoretic operations.
**Definition 2.9**.: Let \(G=(V,E)\) be a graph and \(u\in V\). The _local complementation of \(G\) about \(u\)_ is the operation which maps \(G\) to \(G\star u:=(V,E\bigtriangleup\{(b,c)\mid(b,u),(c,u)\in E\text{ and }b\neq c\})\), where \(\bigtriangleup\) is the symmetric difference operator given by \(A\bigtriangleup B=(A\cup B)\setminus(A\cap B)\). The _pivot of \(G\) about the edge \((u,v)\)_ is the operation mapping \(G\) to the graph \(G\wedge uv:=G\star u\star v\star u\).
Local complementation keeps the vertices of the graph the same but toggles some edges: for each pair of neighbours of \(u\), i.e. \(v,v^{\prime}\in N_{G}(u)\), there is an edge connecting \(v\) and \(v^{\prime}\) in \(G\star u\) if and only if there is no edge connecting \(v\) and \(v^{\prime}\) in \(G\). Pivoting is a series of three local complementations about two neighbouring vertices, and is denoted by \(G\wedge uv=G\star u\star v\star u\).
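For concreteness, both graph operations take only a few lines; the sketch below is ours and acts on the plain graph only, so the merging of single-qubit Cliffords into measurement effects on the diagram side is not modelled:

```python
# Sketch (ours): local complementation and pivot on a graph stored
# as a dict of adjacency sets.
from itertools import combinations

def local_complement(adj, u):
    """Return G * u: toggle every edge between two neighbours of u."""
    new = {v: set(nbrs) for v, nbrs in adj.items()}
    for b, c in combinations(adj[u], 2):
        if c in new[b]:           # edge present -> remove it
            new[b].discard(c); new[c].discard(b)
        else:                     # edge absent -> add it
            new[b].add(c); new[c].add(b)
    return new

def pivot(adj, u, v):
    """Return G ^ uv = G * u * v * u (u and v must be adjacent)."""
    assert v in adj[u]
    return local_complement(local_complement(local_complement(adj, u), v), u)

# Star with centre u and leaves 1, 2, 3.
adj = {"u": {1, 2, 3}, 1: {"u"}, 2: {"u"}, 3: {"u"}}
print(local_complement(adj, "u")[1])  # leaves 2, 3 join leaf 1's neighbourhood
```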
Both local complementation and pivoting give rise to operations on MBQC-form diagrams which preserve the MBQC form as well as the existence of Pauli flow (after some simple merging of single-qubit Cliffords into measurement effects, cf. [4, Section 4.2]). We illustrate the operations with examples as they are difficult to express in ZX-calculus in generality.
**Lemma 2.10** ([22, Lemma D.12]).: _A local complementation about a vertex \(u\) preserves the existence of Pauli flow._
**Lemma 2.11** ([22, Lemma D.21]).: _A pivot about an edge \((u,v)\) preserves the existence of Pauli flow._
**Observation 2.12**.: _Lemmas 2.10 and 2.11 provide their own inverses: four successive local complementations about the same vertex or two successive pivots about the same edge leave a diagram invariant._
## 3 Converting planar measurements to XY-measurements
In the _graph-like diagrams_[12] used in PyZX, all spiders are green and all edges are Hadamard edges. 'Phase gadgets' consist of a degree-1 green spider connected to a phase-free green spider by a Hadamard edge as in the left-most diagram of (1). When converting graph-like diagrams to MBQC-form, it is difficult to know whether to interpret phase gadgets as a single YZ-measured vertex (middle diagram) or as an \(X\)-measured vertex connected to a degree-1 XY-measured vertex (right-most diagram):
(Diagram (1): the phase gadget, its reading as a single YZ-measured vertex, and its reading as an \(X\)-measured vertex with an XY-measured leaf, left to right.)
The following proposition shows that both interpretations are valid and can be interconverted.
**Proposition 3.1**.: _Let \((G,I,O,\lambda)\) be a labelled open graph with Pauli flow where \(G=(V,E)\), and suppose there exists \(x\in V\) with \(\lambda(x)=\mathrm{YZ}\). Then \((G^{\prime},I,O,\lambda^{\prime})\) has Pauli flow, where \(V^{\prime}=V\cup\{x^{\prime}\}\), \(E^{\prime}=E\cup\{\{x,x^{\prime}\}\}\), and \(\lambda^{\prime}(x)=X\), \(\lambda^{\prime}(x^{\prime})=\mathrm{XY}\), and \(\lambda^{\prime}(v)=\lambda(v)\) otherwise._
Proof.: Consider the following sequence of rewrites.
(Diagram of the rewrite sequence omitted; each rewrite in it preserves the existence of Pauli flow, so the resulting pattern has Pauli flow.)
**Proposition 3.2**.: _Let \((G,I,O,\lambda)\) be a labelled open graph with Pauli flow where \(G=(V,E)\), and suppose there exists \(x\in V\) with \(\lambda(x)=\mathrm{XZ}\). Then \((G^{\prime},I,O,\lambda^{\prime})\) has Pauli flow, where \(V^{\prime}=V\cup\{x^{\prime}\}\), \(E^{\prime}=E\cup\{\{x,x^{\prime}\}\}\), and \(\lambda^{\prime}(x)=Y\), \(\lambda^{\prime}(x^{\prime})=\mathrm{XY}\) and \(\lambda^{\prime}(v)=\lambda(v)\) otherwise._
Proof.: Consider the following sequence of rewrites.
Here we insert a \(Z\)-measured vertex \(x^{\prime}\) connected to the \(XZ\)-measured vertex \(x\), perform local complementation about \(x^{\prime}\), then pivot along the edge \((x,x^{\prime})\). Each of these rewrites preserves the existence of Pauli flow, thus the resulting pattern has Pauli flow.
Using the two previous propositions, we are able to re-write any XZ- and YZ-planar measurements into a Pauli measurement plus an XY-measurement. This implies the following.
**Proposition 3.3**.: _Let \((G,I,O,\lambda)\) be an arbitrary MBQC-form diagram with Pauli flow. Then there exists an equivalent diagram \((G^{\prime},I^{\prime},O^{\prime},\lambda^{\prime})\) with Pauli flow where \(\lambda^{\prime}(v)\in\{X,Y,XY\}\) for all \(v\in V^{\prime}\)._
Proof.: We begin by applying \(Z\)-deletion (Lemma 2.7) to all \(Z\)-measured vertices, leaving us with only \(X\), \(Y\), \(XY\), \(XZ\) and \(YZ\) vertices. It remains to remove all \(XZ\) and \(YZ\) measurements.
By Proposition 3.1, we can convert every \(YZ\)-measured vertex into an \(X\)-measured vertex connected to an \(XY\)-measured vertex while preserving the existence of Pauli flow. Then, by Proposition 3.2 we can convert every \(XZ\)-measured vertex into a \(Y\)-measured vertex connected to an \(XY\)-measured vertex. We now only have \(X\), \(Y\) and \(XY\) measured vertices remaining, and each rewrite rule used to get here preserves the existence of Pauli flow, thus the resulting graph has Pauli flow.
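The procedure in this proof is easy to mechanise at the level of labels and connectivity. The following hedged sketch is our own; it ignores phases and the diagram semantics, tracking only the bookkeeping that Proposition 3.3 needs:

```python
# Graph-level sketch of Proposition 3.3 (ours): labels/edges only.

def remove_planar_measurements(adj, labels):
    adj = {v: set(n) for v, n in adj.items()}
    labels = dict(labels)
    # Step 1 (Lemma 2.7): delete every Z-measured vertex.
    for v in [v for v, l in labels.items() if l == "Z"]:
        for w in adj.pop(v):
            adj[w].discard(v)
        del labels[v]
    # Steps 2 and 3 (Props. 3.1 / 3.2): YZ -> X, XZ -> Y, each with
    # a fresh XY-measured leaf attached to the rewritten vertex.
    relabel = {"YZ": "X", "XZ": "Y"}
    for v in [v for v, l in labels.items() if l in relabel]:
        leaf = (v, "prime")
        adj[v].add(leaf)
        adj[leaf] = {v}
        labels[v] = relabel[labels[v]]
        labels[leaf] = "XY"
    return adj, labels

# Example: a YZ-measured vertex a next to an (unlabelled) output b.
print(remove_planar_measurements({"a": {"b"}, "b": {"a"}}, {"a": "YZ"}))
```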
**Remark 3.4**.: Note that Pauli flow is important here: the gflow conditions need not be satisfied if the newly-introduced Pauli measurements were taken to be arbitrary XY-measurements instead.
For example, the first of the following two diagrams has a gflow \((g,\prec)\) with \(g(a)=\{c\}\), \(g(b)=\{d\}\), \(g(x)=\{c,d,x\}\) and \(a,b,x\prec c,d\). The second diagram has Pauli flow by Proposition 3.1, but it does not have gflow: any flow \((p,\prec^{\prime})\) on the second diagram must have \(x\in p(x^{\prime})\) to satisfy \(x^{\prime}\in\mathsf{Odd}\,(p(x^{\prime}))\), as inputs \(a,b\) do not appear in correction sets. Similarly, \(x^{\prime}\in p(x)\) as it is the only neighbour. Thus the gflow conditions would require \(x\prec^{\prime}x^{\prime}\) and \(x^{\prime}\prec^{\prime}x\) simultaneously, which is not possible.
## 4 Subdividing an edge
Research on flow-preserving rewrite rules so far has been geared towards optimization, which usually involves reducing the number of vertices in a pattern. Yet there are also cases where it is desirable to introduce new vertices. An example of this is the obfuscation protocol for blind quantum computing of [8], which used an unpublished rewrite rule proved by one of the authors. We give the proof below.
**Proposition 4.1**.: _Let \(G=(V,E)\) be a graph with vertices \(V\) and edges \(E\). Suppose the labelled open graph \((G,I,O,\lambda)\) has Pauli flow. Pick an edge \(\{v,w\}\in E\) and subdivide it twice, i.e. let \(G^{\prime}:=(V^{\prime},E^{\prime})\), where \(V^{\prime}:=V\cup\{v^{\prime},w^{\prime}\}\) contains two new vertices \(v^{\prime},w^{\prime}\), and_
\[E^{\prime}:=(E\setminus\{\{v,w\}\})\cup\{\{v,w^{\prime}\},\{w^{\prime},v^{ \prime}\},\{v^{\prime},w\}\}.\]
_Then \((G^{\prime},I,O,\lambda^{\prime})\) has Pauli flow, where \(\lambda^{\prime}(v^{\prime})=\lambda^{\prime}(w^{\prime})=X\) and \(\lambda^{\prime}(u)=\lambda(u)\) for all \(u\in V\setminus O\)._
Proof.: We may subdivide an edge by inserting two \(Z\)-measured vertices as shown in the following diagram, then pivoting about these two \(Z\)-measured vertices.
As inserting \(Z\)-measured vertices and pivoting both preserve the existence of Pauli flow, subdividing an edge also preserves the existence of Pauli flow.
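At the graph level, the subdivision is again a small bookkeeping step; a sketch (ours, edges and labels only):

```python
# Sketch (ours) of Proposition 4.1: replace {v, w} by the path
# v - w' - v' - w, with both new vertices X-measured.

def subdivide_edge(adj, labels, v, w):
    assert w in adj[v], "v and w must be adjacent"
    adj = {u: set(n) for u, n in adj.items()}
    vp, wp = (v, "prime"), (w, "prime")   # fresh vertices v', w'
    adj[v].discard(w); adj[w].discard(v)  # remove the old edge
    adj[vp] = {wp, w}                     # v' ~ w' and v' ~ w
    adj[wp] = {v, vp}                     # w' ~ v  and w' ~ v'
    adj[v].add(wp); adj[w].add(vp)
    return adj, {**labels, vp: "X", wp: "X"}
```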
## 5 Splitting a vertex
Each of the previously mentioned Pauli-flow preserving rewrite rules only changes measurement angles by integer multiples of \(\frac{\pi}{2}\). Here we introduce the first Pauli-flow preserving rewrite rule which allows us to change measurement angles arbitrarily. To simplify the proof, the proposition requires that all measurements in the pattern are \(\mathrm{XY},X\) or \(Y\); by Proposition 3.3 this is without loss of generality.
**Proposition 5.1**.: _Let \(G=(V,E)\) be a graph with vertices \(V\) and edges \(E\). Suppose the labelled open graph \((G,I,O,\lambda)\) has Pauli flow and satisfies \(\lambda(u)\in\{\mathrm{XY},X,Y\}\) for all \(u\in O^{c}\). Pick a vertex \(x\in O^{c}\) such that \(\lambda(x)=\mathrm{XY}\) and split it, i.e. let \(G^{\prime}:=(V^{\prime},E^{\prime})\), where \(V^{\prime}:=V\cup\{x^{\prime},x^{\prime\prime}\}\) contains two new vertices \(x^{\prime},x^{\prime\prime}\), and choose some (possibly empty) subset \(W\subseteq N(x)\) such that_
\[E^{\prime}:=(E\setminus\{\{x,w\}\mid w\in W\})\cup\{\{x^{\prime\prime},w\} \mid w\in W\}\cup\{\{x,x^{\prime}\},\{x^{\prime},x^{\prime\prime}\}\}.\]
_Then \((G^{\prime},I,O,\lambda^{\prime})\) has Pauli flow, where \(\lambda^{\prime}(x^{\prime})=X\), \(\lambda^{\prime}(x^{\prime\prime})=\mathrm{XY}\), and \(\lambda^{\prime}(u)=\lambda(u)\) for all \(u\in V\setminus O\)._
Proof.: Let \((p,\prec)\) be a focused Pauli flow for \((G,I,O,\lambda)\); this exists as the labelled open graph has Pauli flow. Since all measurements are \(XY\), \(X\) or \(Y\), the focusing conditions from Definition 2.5 reduce to:
* For all \(u\in V\setminus O\), if \(v\in\mathsf{Odd}_{G}\left(p(u)\right)\setminus(O\cup\{u\})\) then \(\lambda(v)=Y\).
* For all \(u\in V\setminus O\) and all \(v\in V\setminus(O\cup\{u\})\) such that \(\lambda(v)=Y\), we have \(v\in p(u)\leftrightarrow v\in\mathsf{Odd}_{G}\left(p(u)\right)\).
Now, for all \(u\in V\setminus O\), define
\[p^{\prime}(u):=\begin{cases}p(u)\cup\{x^{\prime},x^{\prime\prime}\}&\text{if }x \in p(u)\text{ and }\left|p(u)\cap W\right|\equiv 1\pmod{2}\\ p(u)\cup\{x^{\prime\prime}\}&\text{if }x\in p(u)\text{ and }\left|p(u)\cap W\right|\equiv 0 \pmod{2}\\ p(u)\cup\{x^{\prime}\}&\text{if }x\notin p(u)\text{ and }\left|p(u)\cap W\right|\equiv 1 \pmod{2}\\ p(u)&\text{if }x\notin p(u)\text{ and }\left|p(u)\cap W\right|\equiv 0 \pmod{2},\end{cases}\]
then it is straightforward to check that \(\mathsf{Odd}_{G^{\prime}}\left(p^{\prime}(u)\right)=\mathsf{Odd}_{G}\left(p( u)\right)\). For example, in the first case, note that \(\cup\) can be replaced by \(\Delta\) since \(x^{\prime},x^{\prime\prime}\) are new vertices that cannot appear in \(p(u)\). Thus
\[\begin{aligned}\mathsf{Odd}_{G^{\prime}}\left(p^{\prime}(u)\right)&=\mathsf{Odd}_{G^{\prime}}\left(p(u)\,\Delta\,\{x^{\prime},x^{\prime\prime}\}\right)\\ &=\mathsf{Odd}_{G^{\prime}}\left(p(u)\right)\Delta\,\mathsf{Odd}_{G^{\prime}}\left(\{x^{\prime},x^{\prime\prime}\}\right)\\ &=\mathsf{Odd}_{G^{\prime}}\left(x\right)\Delta\left(\mathop{\Delta}_{w\in p(u)\setminus(W\cup\{x\}\cup O)}\mathsf{Odd}_{G^{\prime}}\left(w\right)\right)\Delta\left(\mathop{\Delta}_{w\in(p(u)\cap W)\setminus O}\mathsf{Odd}_{G^{\prime}}\left(w\right)\right)\Delta\,\{x,x^{\prime},x^{\prime\prime}\}\,\Delta\,W\\ &=\mathsf{Odd}_{G}\left(p(u)\right),\end{aligned}\]
where the third step uses \(\mathsf{Odd}_{G^{\prime}}\left(x\right)=\mathsf{Odd}_{G}\left(x\right)\Delta W \,\Delta\left\{x^{\prime}\right\}\), and the final step uses \(\left|p(u)\cap W\right|\equiv 1\pmod{2}\).
Let\({}^{1}\)\(p^{\prime}(x^{\prime}):=\{x^{\prime\prime}\}\), and let \(p^{\prime}(x^{\prime\prime}):=p^{\prime}(x)\,\Delta\,\{x^{\prime}\}\), resulting in the following odd neighbourhoods:
Footnote 1: Note there is a choice for \(p^{\prime}(x^{\prime})\), since \(x^{\prime}\) can be corrected either via \(x\) or via \(x^{\prime\prime}\); we pick the latter for some slight notational convenience.
\[\mathsf{Odd}_{G^{\prime}}\left(p^{\prime}(x^{\prime})\right) =\mathsf{Odd}_{G^{\prime}}\left(\{x^{\prime\prime}\}\right)=W \cup\{x^{\prime}\} \tag{2}\]
and
\[\mathsf{Odd}_{G^{\prime}}\left(p^{\prime}(x^{\prime\prime})\right) =\mathsf{Odd}_{G^{\prime}}\left(p^{\prime}(x)\right)\Delta \mathsf{Odd}_{G^{\prime}}\left(\{x^{\prime}\}\right)\] \[=\mathsf{Odd}_{G}\left(p(x)\right)\Delta\left\{x,x^{\prime\prime}\right\}\] \[=(\mathsf{Odd}_{G}\left(p(x)\right)\cup\{x^{\prime\prime}\}) \setminus\{x\} \tag{3}\]
where the final step uses the fact that \(x\in\mathsf{Odd}_{G}\left(p(x)\right)\) and \(x^{\prime\prime}\not\in\mathsf{Odd}_{G}\left(p(x)\right)\) (since \(x^{\prime\prime}\) is not even a vertex of \(G\)).
Let \(\prec^{\prime}\) be the transitive closure of
\[\prec\cup\Big{\{}(w,x^{\prime\prime})\mid w\prec x\Big{\}}\cup\Big{\{}(x^{ \prime\prime},w)\mid x\prec w\Big{\}}\cup\Big{\{}(x^{\prime},w)\mid w\in W \Big{\}}\cup\Big{\{}(x^{\prime},x^{\prime\prime})\Big{\}}.\]
This is a partial order since \(x^{\prime\prime}\) has the same relationships as \(x\) (except for being a successor of \(x^{\prime}\)) and \(x^{\prime}\) only has successors.
The proof that \((p^{\prime},\prec^{\prime})\) is a Pauli flow for \(G^{\prime}\) can be found in Appendix A.
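The case analysis defining \(p^{\prime}(u)\) is mechanical and may be clearer as code; the sketch below (ours) reproduces exactly the four cases above:

```python
# Sketch (ours) of the four-case update of p'(u) in the proof above.

def split_correction_set(p_u, x, xp, xpp, W):
    """p'(u) after splitting x into x - x' - x'' with W rewired to x''."""
    odd_overlap = len(p_u & W) % 2 == 1
    if x in p_u:
        return p_u | ({xp, xpp} if odd_overlap else {xpp})
    return p_u | ({xp} if odd_overlap else set())

# p(u) = {x, w1} with W = {w1}: the first case, so x' and x'' are added.
print(split_correction_set({"x", "w1"}, "x", "x'", "x''", {"w1"}))
```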
We are able to obtain other useful rewrite rules as immediate corollaries of this.
**Corollary 5.2**.: _Using Proposition 5.1 with \(W=\emptyset\) and \(\alpha^{\prime\prime}=0\), we obtain the following rule used in [7]._
(Diagram of the rule omitted: the vertex is split into a chain through an \(X\)-measured middle vertex, as in Proposition 5.1 with \(W=\emptyset\).)
This rule can alternatively be derived in a more round-about way from \(Z\)-insertion and pivoting, but we next prove a rule that truly requires vertex splitting.
## 6 Neighbour unfusion
In [24], a rewrite rule called _neighbour unfusion_ was used to reduce the number of two-qubit gates in circuits via the ZX-calculus. Neighbour unfusion greatly reduced the two-qubit gate count, but it introduced a new problem: the rule, which adds two new qubits in each application, was found not to always preserve gflow. Yet a flow is needed to be able to translate back to a circuit after the application of the two-qubit gate count reduction algorithm. We now show that neighbour unfusion preserves the existence of Pauli flow, so circuit re-extraction is always possible.
**Corollary 6.1**.: _By applying vertex splitting with \(|W|=1\), we obtain the following 'neighbour unfusion' rule, where \(\alpha=\alpha^{\prime}+\alpha^{\prime\prime}\) (the measurement for the right-most vertex is not drawn as it can be measured in any plane, or even be an output)._
Staudacher et al. [24] state that, in the case of only XY-measurements, neighbour unfusion fails to preserve gflow if the two vertices to which neighbour unfusion is applied are extracted to different qubits in the circuit extraction process. We now formalise this idea and characterise exactly the situations where neighbour unfusion does preserve gflow.
**Proposition 6.2**.: _Suppose neighbour unfusion is applied to two adjacent vertices \(a\) and \(b\) in a labelled open graph \(\Gamma=(G,I,O,\lambda)\) where \(\lambda(a)=\lambda(b)=\mathrm{XY}\) and \(|I|=|O|\). This preserves the existence of gflow if and only if there is some gflow \((g,\prec)\) on the original labelled open graph where \(a\in g(b)\) or where \(b\in g(a)\)._
Proof.: Consider a labelled open graph which has a focused gflow \((g,\prec)\) and contains a subdiagram consisting of two adjacent XY-measured vertices \(a\) and \(b\).
For the 'if' direction, assume without loss of generality that \(b\in g(a)\), the other case is symmetric. Neighbour unfusion yields a labelled open graph \(\Gamma^{\prime}=(G^{\prime},I,O,\lambda^{\prime})\) with the following subdiagram:
\[a\;\text{--}\;x\;\text{--}\;x^{\prime}\;\text{--}\;b \tag{4}\]
(the path formed by inserting two new XY-measured vertices \(x\) and \(x^{\prime}\) between \(a\) and \(b\)).
Given \(b\in g(a)\), we can construct a gflow for the new pattern by defining the correction sets as follows:
\[g^{\prime}(v)=\begin{cases}g(v)&\text{if }v\not\in\{a,x,x^{\prime}\}\\ g(a)\cup\{x\}&\text{if }v=a\\ \{x^{\prime}\}&\text{if }v=x\\ \{b\}&\text{if }v=x^{\prime}\end{cases}\]
Take \(\prec^{\prime}\) to be the transitive closure of \(\prec\cup\{(a,x),(x,x^{\prime}),(x^{\prime},b)\}\), then \((g^{\prime},\prec^{\prime})\) is a gflow for \(\Gamma^{\prime}\): each of \(a,x\) and \(x^{\prime}\) is in the odd neighbourhood of its correction set. The relation \(\prec^{\prime}\) is a strict partial order and satisfies the gflow conditions for the newly introduced vertices. Finally, all of the other gflow conditions are satisfied as \((g,\prec)\) is a gflow for \(\Gamma\).
For the 'only if' direction, we will prove the contrapositive: Assume the post-neighbour-unfusion measurement pattern has gflow, then the original measurement pattern has a gflow where \(a\) is in the correction set of \(b\), or a gflow where \(b\) is in the correction set of \(a\). So suppose we have a pattern \(\Gamma^{\prime}\) with the subdiagram (4). Assume that \(\Gamma^{\prime}\) has a focused gflow \((g^{\prime},\prec^{\prime})\). Let \(\Gamma^{\prime\prime}=(G^{\prime\prime},I,O,\lambda^{\prime\prime})\) be the induced sub-pattern containing only those vertices of \(G^{\prime}\) that are either outputs or measured in the XY-plane; this must include all inputs since those cannot be measured in planes XZ or YZ. This new measurement pattern still contains the subdiagram (4) and it has gflow [3, Lemma 3.15]. In fact, since \((g^{\prime},\prec^{\prime})\) is focused, it implicitly follows from [3, Proposition 3.14 and Lemma 3.15] that the gflow of the new pattern is just the restriction of the old gflow function to a smaller domain, and this is still focused; denote it by \((g^{\prime\prime},\prec^{\prime\prime})\).
Now every focused gflow in a pattern with only XY-plane measurements and equal numbers of inputs and outputs can be reversed in a very strict sense [18]: let \(\Gamma^{\prime\prime\prime}=(G^{\prime\prime},O,I,\lambda^{\prime\prime\prime})\) be the reversed pattern with the roles of inputs and outputs swapped and \(\lambda^{\prime\prime\prime}\) mapping all non-outputs to XY. Then there exists a focused gflow \((g^{\prime\prime}_{rev},\prec^{\prime\prime}_{rev})\) for \(\Gamma^{\prime\prime\prime}\) where \(\prec^{\prime\prime}_{rev}\) is the reverse of \(\prec^{\prime\prime}\) and \(u\in g^{\prime\prime}_{rev}(v)\) if and only if \(v\in g^{\prime\prime}(u)\)[3, Corollary 2.47].
As \(x\) is XY-measured and has two neighbours, to satisfy \(x\in\mathsf{Odd}_{G^{\prime\prime}}\,(g^{\prime\prime}(x))\) and \(x\in\mathsf{Odd}_{G^{\prime\prime}}\,(g^{\prime\prime}_{rev}(x))\) we require the following to hold, where \(\oplus\) is the exclusive-or operator:
\[(a\in g^{\prime\prime}(x)\wedge x\in g^{\prime\prime}(x^{\prime}))\oplus(x^{ \prime}\in g^{\prime\prime}(x)\wedge x\in g^{\prime\prime}(a)).\]
As \(x^{\prime}\) is also XY-measured and has two neighbours, by the same reasoning we obtain the following:
\[(b\in g^{\prime\prime}(x^{\prime})\wedge x^{\prime}\in g^{\prime\prime}(x)) \oplus(x\in g^{\prime\prime}(x^{\prime})\wedge x^{\prime}\in g^{\prime\prime}( b)).\]
Then, as we cannot have both \(x\in g^{\prime\prime}(x^{\prime})\) and \(x^{\prime}\in g^{\prime\prime}(x)\), we have either that \(a\in g^{\prime\prime}(x)\), \(x\in g^{\prime\prime}(x^{\prime})\) and \(x^{\prime}\in g^{\prime\prime}(b)\) or that \(b\in g^{\prime\prime}(x^{\prime})\), \(x^{\prime}\in g^{\prime\prime}(x)\) and \(x\in g^{\prime\prime}(a)\) for \((g^{\prime\prime},\prec^{\prime\prime})\) to be a gflow for \(\Gamma^{\prime\prime}\). But \((g^{\prime\prime},\prec^{\prime\prime})\) is the restriction of \((g^{\prime},\prec^{\prime})\) to the XY-measured vertices in \(\Gamma^{\prime}\). Thus either \(a\in g^{\prime}(x)\), \(x\in g^{\prime}(x^{\prime})\) and \(x^{\prime}\in g^{\prime}(b)\) or that \(b\in g^{\prime}(x^{\prime})\), \(x^{\prime}\in g^{\prime}(x)\) and \(x\in g^{\prime}(a)\).
Now, consider the following sequence of rewrites corresponding to the inverse of neighbour unfusion:
where we first pivot along the edge \((x,x^{\prime})\), then apply \(Z\)-deletion to \(x^{\prime}\) and finally apply the phase gadget identity rule of [16] to add the phase of \(x^{\prime}\) to that of \(a\). Each of these rules preserves the existence of gflow, thus the inverse of neighbour unfusion preserves the existence of gflow. Moreover, if \(x\in g^{\prime}(a)\), \(x^{\prime}\in g^{\prime}(x)\) and \(b\in g^{\prime}(x^{\prime})\), then after applying the inverse of neighbour unfusion we get a gflow \((g,\prec)\) for \(\Gamma\) with \(b\in g(a)\) (and similarly if \(a\in g^{\prime}(x)\), \(x\in g^{\prime}(x^{\prime})\) and \(x^{\prime}\in g^{\prime}(b)\) we get a gflow with \(a\in g(b)\)). Therefore, if the measurement pattern after neighbour unfusion has gflow, then the original pattern has a gflow where \(b\) is in the correction set of \(a\), or a gflow where \(a\) is in the correction set of \(b\).
An analogous argument works if \(b\) is an output, in which case the only option is for \(b\) to be in the correction set of \(a\). Therefore the above proposition covers all the cases relevant to Staudacher et al.'s work on patterns where all measurements are in the XY-plane.
**Example 6.3**.: The following two measurement patterns are related by neighbour unfusion along the edge between vertices \(a\) and \(b\):
In the first pattern, \(a\) and \(b\) are both inputs and thus cannot appear in correction sets. Hence the pattern does not have a gflow where \(a\) is in the correction set of \(b\) or where \(b\) is in the correction set of \(a\). Yet it does have a gflow \((g,\prec)\) with \(g(a)=\{c\}\), \(g(b)=\{d\}\) and \(a,b\prec c,d\).
For the second pattern to have a flow \((p,\prec^{\prime})\), we require \(x^{\prime}\in p(x)\) and \(x\in p(x^{\prime})\) since both vertices need to be in the odd neighbourhood of their correction set and inputs cannot appear in correction sets. This diagram can therefore not have a gflow, as the gflow conditions would require that \(x\prec^{\prime}x^{\prime}\) and \(x^{\prime}\prec^{\prime}x\) simultaneously, so \(\prec^{\prime}\) would not be strict. This diagram does have a Pauli flow however, as the \(X\)-measured vertex \(x\) does not need to come after \(x^{\prime}\) in the partial order in the case of Pauli flow. The Pauli flow satisfies \(p(a)=\{c\}\), \(p(b)=\{d\}\), \(p(x)=\{d,x^{\prime}\}\) and \(p(x^{\prime})=\{c,x\}\) with \(x\prec a,b,x^{\prime}\prec c,d\).
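One can verify the stated flow numerically by computing odd neighbourhoods, assuming (as the correction sets suggest) that the second pattern is the path \(a-x-x^{\prime}-b\) together with the output edges \(a-c\) and \(b-d\); this reconstruction of the graph is ours:

```python
# Ours: check u in Odd(p(u)) for the flow stated in Example 6.3.
from functools import reduce

adj = {"a": {"x", "c"}, "x": {"a", "x'"}, "x'": {"x", "b"},
       "b": {"x'", "d"}, "c": {"a"}, "d": {"b"}}
p = {"a": {"c"}, "b": {"d"}, "x": {"d", "x'"}, "x'": {"c", "x"}}

odd = lambda s: reduce(lambda acc, v: acc ^ adj[v], s, set())
for u, pu in p.items():
    print(u, u in odd(pu))   # prints True for all four measured vertices
```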
As neighbour unfusion does not always preserve the existence of gflow, the algorithm in [24] had poor runtime as it would need to explicitly calculate a new gflow (using a gflow-finding algorithm such as that in [3]) every time this rule was used. Yet neighbour unfusion preserves the existence of Pauli flow, so by using Pauli flow instead of gflow, it should be possible to avoid running the gflow-finding algorithm after each application of neighbour unfusion (and instead track the changes to the flow as in the proof of Proposition 5.1), and thus improve the runtime of this algorithm as a whole.
## 7 Conclusion
We have introduced several rewrite rules which preserve the existence of Pauli flow, including the first flow-preserving rewrite rule which allows us to change phases arbitrarily, rather than just by multiples of \(\frac{\pi}{2}\). An immediate corollary of this rule preserving Pauli flow is that the neighbour unfusion rule of [24] also preserves Pauli flow, potentially leading to a reduced runtime for their two-qubit gate count reduction algorithm.
At present, the circuit extraction algorithm for diagrams with Pauli flow introduces more two-qubit gates than the corresponding circuit extraction algorithm for diagrams with gflow - future work could involve using known work on Pauli gadget optimization, such as that of [9], to reduce the number of two-qubit gates obtained when performing circuit extraction on diagrams with Pauli flow.
Other future work could involve finding an analogous result to the stabiliser completeness proof of [17] for a more general fragment of the MBQC-form ZX-calculus, using Proposition 5.1 to introduce phases that are not just integer multiples of \(\frac{\pi}{2}\).
## Acknowledgements
We would like to thank Korbinian Staudacher and Shuxiang Cao for bringing the topic of rewriting measurement patterns to add new qubits to our attention, as well as for interesting conversations in developing this paper. We would also like to thank Will Simmons for useful conversations on related topics. |
2305.00613 | Gravitationally sensitive structured x-ray optics using nuclear
resonances | Einstein's general theory of relativity not only revolutionized human
understanding of the universe, but also brought many gravitational applications
in large scale, such as gravitational-wave astronomy, gravitational lensing,
and the operation of the global positioning system. However, it still remains a
challenge to implement applications for gravitational effects at small spatial
extensions on Earth. Here, we investigate a structured waveguide system that
allows for the control of an x-ray profile at altitude separations of
millimeters and even shorter using the nuclear resonant scattering of x rays.
Our present results suggest a potential compact scheme for turning the Earth's
gravity into a practical application of x-ray optics. | Shin-Yu Lee, Sven Ahrens, Wen-Te Liao | 2023-05-01T00:41:04Z | http://arxiv.org/abs/2305.00613v1 | # Gravitationally sensitive structured x-ray optics using nuclear resonances
###### Abstract
Einstein's general theory of relativity not only revolutionized human understanding of the universe, but also brought many gravitational applications at large scales, such as gravitational-wave astronomy [1], gravitational lensing [2], and the operation of the global positioning system [3]. However, it still remains a challenge to implement applications for gravitational effects at small spatial extensions on Earth. Here, we investigate a structured waveguide system that allows for the control of an x-ray profile at altitude separations of millimeters and even shorter using the nuclear resonant scattering of x rays [4; 5]. Our present results suggest a potential compact scheme for turning the Earth's gravity into a practical application of x-ray optics.
The Pound-Rebka experiment [6] has demonstrated a unique system for probing the gravitational red-shift effect by exploiting an extremely narrow nuclear linewidth in combination with a high x-ray energy in the Mössbauer effect [7]. Moreover, advances in modern x-ray light sources and optics have raised the field of x-ray-nuclei interactions to a new level of accuracy where coherent quantum control comes into play [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. A combination of nuclear quantum coherence and its sensitivity to gravity will potentially lead to a new type of x-ray optics whose performance depends on the gravitational red-shift in addition to the typical Zeeman shift [8; 12] and Doppler shift [14; 17; 18; 19]. In this context, we investigate a system that emulates the Schrödinger equation [22] and is sensitive to gravity. The present scheme allows for a systematic generation of structured x rays [23; 24; 25; 26] by changing the altitude, the x-ray photon energy, or the external magnetic field of our system for a given waveguide structure.
The system is depicted in Figure 1 with the description as follows. An x ray drives a nuclear transition from the ground state \(\left|g\right>\) to the excited state \(\left|e\right>\) with the total detuning \(\Delta_{t}\Gamma=\left(\Delta_{G}+\Delta\right)\Gamma\) in Fig. 1(a). We emphasize that the gravitational red shift \(\Delta_{G}\simeq-zE_{t}GM_{E}/\left(\hbar\Gamma c^{2}R_{E}^{2}\right)\) has to be taken into account when the system is located at different altitude \(z\) relative to where x rays are emitted. Here \(E_{t}\) is the nuclear transition energy, \(G\) is the gravitational constant, \(M_{E}\) is the mass of the Earth, and \(R_{E}\) is the average radius of the Earth. \(\Delta\Gamma\) is the x-ray detuning, and \(\Gamma\) the spontaneous decay rate of the excited state \(\left|e\right>\). Our system of nuclear resonant scattering of x rays can be described by the optical-Bloch equation [9; 21]:
\[\partial_{t}\rho_{eg} =-\Gamma\left[\frac{1}{2}-i\left(\Delta+\Delta_{G}\right)\right] \rho_{eg}+\frac{i}{2}\frac{P}{\hbar}E, \tag{1}\] \[\frac{1}{c}\partial_{t}E+\partial_{y}E =\frac{-1}{2ik}\nabla_{\perp}^{2}E-\frac{k}{2i}(n_{e}^{2}-1)E+i \eta\rho_{eg}, \tag{2}\]
where \(\rho_{eg}\) is the coherence of a nuclear two-level system, \(E\) is the x-ray electric field strength, \(\hbar\) is the reduced Planck constant, \(k\) is the x-ray wavenumber, and \(n_{e}\) is the index of refraction from electrons. Further, we denote the transverse Laplacian \(\nabla_{\perp}^{2}=\partial_{x}^{2}+\partial_{z}^{2}\), and the coupling constant \(\eta=2\hbar\Gamma\xi/\left(PL\right)\), where \(L\) is the length of the waveguide, \(P\) is the nuclear transition dipole moment, and \(\xi\) is the nuclear resonant thickness. The steady state of Eqs. (1-2) leads to the analytic solution \(\rho_{eg}=PE\left(i-2\Delta_{t}\right)/\left[\hbar\Gamma\left(1+4\Delta_{t}^{2 }\right)\right]\) and the optical Schrödinger equation [22] (see supplemental information)
\[i\hbar c\partial_{y}E=-\frac{\hbar^{2}}{2m_{e}}\nabla_{\perp}^{2}E+\hbar c \left[\frac{k(1-n_{e}^{2})}{2}+\frac{4\xi\Delta_{t}}{L\left(1+4\Delta_{t}^{2 }\right)}\right]E, \tag{3}\]
with the effective mass \(m_{e}=\hbar k/c\). Equation (3) suggests a structured x-ray waveguide (SXWG) system composed of resonant nuclei, as depicted in Fig. 1(b), with a high degree of freedom for simulating different quantum systems via spatial engineering of \(n_{e}\) and \(\xi\). Fig. 1(c) displays the altitude-dependent real part \(Re\left[\rho_{eg}\right]\) (blue-solid) and imaginary part \(Im\left[\rho_{eg}\right]\) (red-dashed line) of \(\rho_{eg}\) with \(\Delta=0\) for the isotope \({}^{45}\)Sc. \(Re\left[\rho_{eg}\right]\) describes the refractive index from nuclei and results in the gravitational effects in our system as revealed by the last term of Eq. (3). In contrast, the Lorentzian line shape of \(Im\left[\rho_{eg}\right]\) represents the nuclear absorption of x rays and leads to the Pound-Rebka experiment [6].
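As a quick numerical illustration of Fig. 1(c), one can evaluate the steady-state solution directly. The sketch below is ours: it plots \(\rho_{eg}\) up to the overall prefactor \(PE/(\hbar\Gamma)\), uses rounded physical constants, and reads the tabulated \(\Gamma\) for \({}^{45}\)Sc as an angular rate, which is an assumption about the table's units:

```python
# Hedged sketch (ours): steady-state coherence vs. altitude for 45Sc,
# up to the prefactor P*E/(hbar*Gamma).
import numpy as np

hbar, c, G = 1.0546e-34, 2.9979e8, 6.674e-11      # SI units
M_E, R_E = 5.972e24, 6.371e6                      # Earth mass, radius
E_t = 12.4e3 * 1.602e-19                          # 45Sc transition (J)
Gamma = 2.18e-6 * 1e6                             # 45Sc linewidth (1/s)

z = np.linspace(-5e-3, 5e-3, 201)                 # altitude (m)
delta_G = -z * E_t * G * M_E / (hbar * Gamma * c**2 * R_E**2)
rho_eg = (1j - 2 * delta_G) / (1 + 4 * delta_G**2)  # Delta = 0 case

# The full width of |Re(rho_eg)| at half maximum lands at ~1.8 mm,
# consistent with Eq. (6) below.
```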
In the following, we use the SXWG to simulate Rabi oscillations of an x ray in a finite square well of width \(L_{x}\). As illustrated in Fig. 1(d), we introduce a platinum cladding \(n_{e}=1-\delta_{\rm Pt}+i\beta_{\rm Pt}\) for \(\left|x\right|>L_{x}/2\) to constitute a finite square well potential, which provides transverse confinement with respect to the x-ray propagation direction [21]. The cladding material leads to the energy eigenfunctions of the \(E\) field in Eq. (3) which read as (see supplemental information)
\[\psi_{n}\left(x\right)=\sqrt{\frac{2}{L_{x}}}\sin\left[\frac{n\pi}{L_{x}} \left(x+\frac{L_{x}}{2}\right)\right], \tag{4}\]
with the eigen angular frequencies \(\omega_{n}=n^{2}\pi^{2}c/\left(2kL_{x}^{2}\right).\) Inside the square well \(\left|x\right|\leq L_{x}/2\), we perturb the system by a periodic (gradient) particle distribution of isotope \(X\) and carbon along the y direction (x direction), where \(\xi\left(x,y\right)=\left(\varrho/2\right)\left[1+\left(2x/L_{x}\right)\sin \left(k_{d}y\right)\right]\) and \(n_{e}\left(x,y\right)=1-\delta_{\rm C}\left(x,y\right)+i\beta_{\rm C}\left(x,y \right)-\delta_{\rm X}\left(x,y\right)+i\beta_{\rm X}\left(x,y\right)\) (see supplemental information for the form of \(n_{e}\)). In Fig. 1(d) subscripts Pt, C, and X
\begin{table}
\begin{tabular}{r r r r r r r r r} \hline \hline \(X\) & \(E_{t}\) (keV) & \(\Gamma\) (MHz) & \(\delta_{X}(10^{-6})\) & \(\delta_{\rm C}(10^{-6})\) & \(\delta_{\rm Pt}(10^{-5})\) & \(\beta_{X}(10^{-9})\) & \(\beta_{\rm C}(10^{-9})\) & \(\beta_{\rm Pt}(10^{-6})\) \\ \hline \({}^{45}\)Sc & 12.4 & 2.18\(\times 10^{-6}\) & 3.84 & 2.97 & 2.091 & 131.9 & 1.78 & 2.737 \\ \({}^{57}\)Fe & 14.413 & 7.05 & 7.43 & 2.20 & 1.607 & 338.9 & 0.93 & 2.49 \\ \({}^{73}\)Ge & 13.275 & 0.24 & 5.41 & 2.59 & 1.622 & 508.9 & 1.32 & 2.947 \\ \({}^{181}\)Ta & 6.238 & 0.11 & 67.74 & 11.77 & 8.731 & 7987.3 & 32.76 & 12.616 \\ \({}^{182}\)Ta & 16.273 & 2.45\(\times 10^{-6}\) & 10.41 & 1.72 & 1.304 & 1062.2 & 0.55 & 1.642 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Nuclear and waveguide material parameters**. For each isotope \(X\) we present the nuclear transition energy \(E_{t}\) and the radiative decay rate \(\Gamma\). The last six columns list the x-ray index of refraction \(n_{e}=1-\delta+i\beta\) for SXWG materials [5; 27; 28].
Figure 1: (a) the incident x ray (red upward arrow) drives a nuclear transition \(|g\rangle\rightarrow|e\rangle\) with detuning \(\Delta+\Delta_{G}\) (gray vertical double arrows). \(\Delta\) is the x-ray detuning, and \(\Delta_{G}\) is the x-ray gravitational red shift. (b) a hard x ray of transverse mode \(\psi_{1}\) (red arrow) propagates through a structured platinum cladding waveguide with a periodic nuclear distribution whose spatial concentration is indicated by the horizontal green-yellow legend. An output x ray of mode \(\psi_{2}\) is measured by a downstream position-sensitive detector (blue thick disc). The top black curves and pictures in gray level represent the intra-waveguide x-ray intensity. (c) the SXWG altitude-dependent nuclear coherence \(\rho_{eg}\) between the nuclear ground state \(|g\rangle\) and the excited state \(|e\rangle\) of the isotope \({}^{45}\)Sc. Red-dashed (blue-solid) line depicts the imaginary (real) part of the nuclear \(\rho_{eg}\). Black horizontal double arrow indicates the full altitude width \(\Delta Z_{G}\) at the half maximum \(|Re\left[\rho_{eg}\right]|\). (d) the transversely gradient (along x) and the longitudinally periodic (along y) electronic refractive index of an SXWG. Between the platinum claddings the intra-waveguide structure, made of carbon and an isotope X, drives the transition from the x-ray ground state \(\psi_{1}\) (red-solid line) to the first excited state \(\psi_{2}\) (blue-solid line).
represent platinum, carbon and the resonant nucleus, respectively. We list other relevant material parameters in Table 1. With the above density modulation, the last term in Eq. (3) effectively becomes the electric dipole Hamiltonian in an oscillating field, which perturbs the square well potential. This plays the key role in driving the x-ray Rabi oscillation with gravitational sensitivity. When the resonant condition \(ck_{d}=\omega_{n+1}-\omega_{n}\) is fulfilled, the periodic structure of the refractive index drives the dipole transition \(\psi_{n}\rightarrow\psi_{n+1}\) with the effective Rabi frequency (see supplemental information)
\[\Omega_{n}=\frac{16gn\left(n+1\right)c}{\pi^{2}\left(2n+1\right)^{2}L}\left[ \frac{k\left(\delta_{\mathrm{C}}-\delta_{\mathrm{Sc}}\right)}{2N\sigma_{0}}+ \frac{2\Delta_{t}}{1+4\Delta_{t}^{2}}\right]. \tag{5}\]
A propagating x ray experiences a constant SXWG-induced Rabi frequency, and the condition for having a \(m\pi\) pulse is \(|\Omega_{n}|L/c=m\pi\). We use \(m=2\) to demonstrate the Rabi oscillation between the x-ray ground state \(\psi_{1}\) and the first excited state \(\psi_{2}\) by numerically solving Eq. (3).
The solution of Eq. (3) in Fig. 2 represents an x ray which propagates through an SXWG with \(X=^{45}\)Sc, \(\varrho=26.25\), natural scandium particle density \(N=3.99\times 10^{28}\)m\({}^{-3}\), nuclear resonance absorption cross section \(\sigma_{0}=12.6\) kbarn, \(L_{x}=100\)nm, \(L=4\)mm, \(k_{d}=22.778\times 10^{3}\)rad/m, \(\Delta=4\Gamma\), and \(\Delta_{G}=0\). The process is visualized in terms of the fidelity \(F_{n}\left(y\right)=\left|\int_{-\infty}^{\infty}\psi_{n}^{\ast}\left(x\right) E\left(x,y\right)dx\right|^{2}/\int_{-\infty}^{\infty}\left|E\left(x,y\right) \right|^{2}dx\) in Fig. 2(a), and the normalized intra-waveguide x-ray intensity distribution \(\left|E\left(x,y\right)\right|^{2}/\int_{-\infty}^{\infty}\left|E\left(x,y \right)\right|^{2}dx\) in Fig. 2(b). The alternation of \(F_{1}\) and \(F_{2}\) in Fig. 2(a) clearly demonstrates that the ground-state x ray enters the SXWG at \(y=0\), and is then coherently promoted to the first excited state when approaching \(y=2\)mm. After that, the x ray returns to the ground state and finishes a full Rabi cycle at \(y=4\)mm. One can also observe the same phenomenon in the intra-waveguide x-ray intensity, which evolves back and forth between states \(\psi_{1}\) and \(\psi_{2}\) in Fig. 2(b).
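For reference, the fidelity integral is straightforward to evaluate on a grid; a small sketch (ours), using the eigenmodes of Eq. (4) and a toy field standing in for the simulated \(E(x,y)\):

```python
# Sketch (ours): the fidelity F_n on a grid inside the square well.
import numpy as np

L_x = 100e-9
x = np.linspace(-L_x / 2, L_x / 2, 2001)

def psi(n):
    """Square well eigenmode of Eq. (4)."""
    return np.sqrt(2 / L_x) * np.sin(n * np.pi / L_x * (x + L_x / 2))

def fidelity(n, E):
    overlap = np.trapz(np.conj(psi(n)) * E, x)
    return abs(overlap) ** 2 / np.trapz(np.abs(E) ** 2, x)

E = psi(1) + 0.1 * psi(2)                  # mostly the ground mode
print(fidelity(1, E), fidelity(2, E))      # ~0.99 and ~0.01
```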
The above x-ray Rabi oscillation suggests a systematic way to generate the \(\psi_{n}\) mode with a sequence of SXWGs driving \(\Delta n=1\) transitions, where one can raise the quantum number one by one. Specifically, one can connect two different SXWGs to accomplish a \(\psi_{1}\rightarrow\psi_{3}\) transition, where the upstream SXWG drives a \(\psi_{1}\rightarrow\psi_{2}\) transition, and the downstream SXWG achieves a \(\psi_{2}\rightarrow\psi_{3}\) promotion. Moreover, one can even design multiple SXWG modules for \(\Delta n=2\) or any dipole forbidden transitions. Thus, all combinations of SXWG modules open the capability to generate high order x-ray modes starting from the ground state \(\psi_{1}\). It is worth mentioning another possible application using dual SXWGs with a gap in between as an x-ray interferometer. While the upstream SXWG causes the \(\psi_{1}\rightarrow\psi_{2}\) transition as a beam splitter, the downstream SXWG leads to the return of \(\psi_{2}\rightarrow\psi_{1}\) as a beam combiner. Furthermore, in the gap between two SXWGs one can introduce a phase modulator to impose a phase shift at one branch of the split state \(\psi_{2}\), e.g., \(x>0\) at \(y=2\)mm in Fig. 2. A controllable interference due to the phase modulation is expected to happen at the end of the downstream SXWG.
We are ready to demonstrate the Earth's gravitational effect on our SXWG system. Given that the gravitational redshift \(\Delta_{G}\) significantly changes the nuclear coherence \(\rho_{eg}\) and Rabi frequency \(\Omega_{n}\) in Eq. (5) within two millimeters on Earth as demonstrated in Fig. 1(c), this sensitivity potentially allows for turning gravity into a practical use, e.g., gravitationally sensitive x-ray optics. To illustrate the effect, we numerically solve Eq. (3) and use the isotope \({}^{45}\)Sc in an SXWG with parameters \(\varrho=8.38\), \(L_{x}=100\)nm, \(L=2\)mm, \(k_{d}=23.778\times 10^{3}\)rad/m, and \(\Delta=19.36\). Fig. 3(a) illustrates three cases where the above-discussed SXWG is located at \(z=2.32\)cm, \(z=2\)cm, and \(z=1.72\)cm from the top down. An incident x ray with the transverse mode \(\psi_{1}\) is deflected upward and experiences a gravitational redshift (vertical upward arrow with color gradient). The \(\Delta_{G}\) will change when the x ray illuminates the SXWG at different altitudes. The total detuning for each case is specified in the level-scheme plot, namely, \(\Delta+\Delta_{G}=-2.52\), \(\Delta+\Delta_{G}=0.5\), and \(\Delta+\Delta_{G}=3.14\) from the top down. We emphasize that the periodic particle density modulation effectively plays the role of a resonant field, and it always resonantly drives a transition between the x-ray modes in a cladding waveguide for all three cases. However, various \(\Delta_{G}\) change the effective coupling strength \(\Omega_{n}\) and result in different outputs. The scattered/split x rays reflect the output mode and can be measured by a downstream position-sensitive detector. The \(x\)-dependent photon number counts show the output \(|E\left(x,L\right)|^{2}\) and reveal the Earth's gravitational effect. We depict the normalized \(|E\left(x,y\right)|^{2}\) for \(z=2.32\)cm, \(z=2\)cm, and \(z=1.72\)cm in Fig. 3(c, e, and g), respectively. The intra-waveguide intensity shows that the x ray significantly gets split in Fig. 3(e) under a half Rabi cycle \(\psi_{1}\rightarrow\psi_{2}\) in the SXWG at \(z=2\)cm. In contrast, Fig. 3(c and g) depicts only a transverse broadening
Figure 2: A full cycle of the x-ray Rabi oscillation between \(\psi_{1}\) and \(\psi_{2}\) is illustrated by (a) the fidelity \(F_{1}\) (red-dashed line) and \(F_{2}\) (green-solid line) and (b) the normalized intra-waveguide x-ray intensity distribution. The input x-ray in the state \(\psi_{1}\) splits and reaches the maximum transverse double-hump separation of the state \(\psi_{2}\) at \(y=2\)mm where the maximum \(F_{2}\) also occurs. For \(y>2\)mm the x-ray confluence reflects the second half Rabi cycle, and the transverse x-ray pattern returns toward the state \(\psi_{1}\).
Figure 4: (a) altitude-dependent x-ray fidelity \(F_{2}\) through an SXWG composed of \({}^{45}\)Sc nuclei. (b) the FWHM \(\Delta Z_{G}\) on Earth is dependent on the quality factor \(Q\) of nuclear resonances for different nuclear species.
Figure 3: (a) Earth’s gravity changes the x-ray propagation in the waveguide composed of \({}^{45}\)Sc nuclei. Three cases at different altitudes \(z=2.32\)cm, \(z=2\)cm, and \(z=1.72\)cm where x rays propagate with a detuning \(\Delta=19.36\) and gravitational red shifts \(\Delta_{G}=-21.88\), \(\Delta_{G}=-18.86\), and \(\Delta_{G}=-16.22\), respectively. (b, d, and f) the fidelity \(F_{1}\) (red-dashed line) and \(F_{2}\) (green-solid line) for \(z=2.32\)cm, \(z=2\)cm, and \(z=1.72\)cm, respectively. (c, e, and g) the normalized intra-waveguide x-ray intensity distribution \(\left|E\left(x,y\right)\right|^{2}\) at altitude \(z=2.32\)cm, \(z=2\)cm, and \(z=1.72\)cm from the top to the bottom.
of the x ray due to a small \(|\Omega_{1}|\). Fig. 3(b, d, and f) illustrate \(F_{1}\left(y\right)\) (red-dashed line) and \(F_{2}\left(y\right)\) (green-solid line) for \(z=2.32\)cm, \(z=2\)cm, and \(z=1.72\)cm, respectively. One can clearly see that the x ray experiences Rabi flopping and becomes \(\psi_{2}\) at the resonant altitude \(z=2\)cm as also pointed out by Fig. 3(d and e). Given that \(|\Omega_{1}|\) decreases when the SXWG leaves the resonant altitude, the x ray mostly remains in the initial mode \(\psi_{1}\), namely, \(F_{1}\left(y\right)>F_{2}\left(y\right)\) at \(z=2.32\)cm and \(z=1.72\)cm. We depict the output altitude-dependent \(F_{2}\) at \(y=2\)mm in Fig. 4(a). As a result, different x-ray splitting is expected to occur when lifting an SXWG composed of \({}^{45}\)Sc by only a millimeter altitude change.
We quantify the gravitational sensitivity of the SXWG by the full altitude width \(\Delta Z_{G}\) at the half maximum of \(|Re\left[\rho_{eg}\right]|\)
\[\Delta Z_{G}=\sqrt{3}\left(\frac{\hbar\Gamma}{E_{t}}\right)\frac{c^{2}R_{E}^{ 2}}{GM_{E}}, \tag{6}\]
as indicated by the black-horizontal double arrow in Fig. 1(c). The introduced \(\Delta Z_{G}\) is a measure for the sensitivity of the x-ray-nucleus coupling to the change of the SXWG vertical location. With the definition of the quality factor of a nuclear resonance \(Q=E_{t}/\left(\hbar\Gamma\right)\), we can see that \(\Delta Z_{G}\) is proportional to \(1/Q\). Fig. 4(b) exemplifies the implication of Eq. (6) for our system on Earth in a double-logarithmic plot, where we mark the isotopes \({}^{45}\)Sc, \({}^{57}\)Fe, \({}^{67}\)Zn, \({}^{73}\)Ge, \({}^{103}\)Rh, \({}^{107}\)Ag, \({}^{109}\)Ag, \({}^{181}\)Ta, \({}^{182}\)Ta, \({}^{229}\)Th, and \({}^{249}\)Bk, according to their \(Q\) factor. Some of the nuclear parameters are listed in Table 1. Remarkably, the advantage of a very high \(Q\sim 10^{19}\) of the \({}^{45}\)Sc or \({}^{182}\)Ta nuclear resonance endows an SXWG with a gravitational sensitivity to millimeter-scale altitude changes. Notably, it is also possible to get sub-millimeter \(\Delta Z_{G}\) using \({}^{229}\)Th whose \(Q\sim 10^{20}\)[29], and micron \(\Delta Z_{G}\) with \({}^{107}\)Ag and \({}^{109}\)Ag whose \(Q\sim 10^{22}\). It is worth mentioning that \({}^{103}\)Rh, whose \(Q>10^{23}\)[27], even results in a nanometer \(\Delta Z_{G}\), and it may lead to gravitational applications at the mesoscopic scale.
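Equation (6) is easy to check numerically; with rounded constants (our own sketch) it reproduces the millimeter scale quoted for \({}^{45}\)Sc:

```python
# Quick check of Eq. (6) with rounded constants (our own sketch).
import math

hbar, c, G = 1.0546e-34, 2.9979e8, 6.674e-11
M_E, R_E, eV = 5.972e24, 6.371e6, 1.602e-19

def delta_Z_G(E_t_keV, Gamma_MHz):
    """Full altitude width of Eq. (6) from the Table 1 parameters."""
    Q = (E_t_keV * 1e3 * eV) / (hbar * Gamma_MHz * 1e6)
    return math.sqrt(3) / Q * c**2 * R_E**2 / (G * M_E)

print(delta_Z_G(12.4, 2.18e-6))   # 45Sc: ~1.8e-3 m (millimetre scale)
print(delta_Z_G(14.413, 7.05))    # 57Fe: ~5e+3 m  (kilometre scale)
```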
In conclusion, we have put forward a controllable SXWG system that potentially turns gravity into an application of x-ray optics. A periodic intra-waveguide structure, e.g., the nuclear optical lattice of \({}^{57}\)Fe/\({}^{56}\)Fe bilayers in Ref. [30], can drive a transition between x-ray modes. The x-ray transverse mode experiences Rabi oscillation when propagating in an SXWG. Our scheme allows for applications like a systematic production of structured x rays and an x-ray interferometer without any beam splitter. Remarkably, a significant change of the gravitationally induced splitting of x rays can be achieved by lifting our SXWG made of, e.g., \({}^{45}\)Sc or \({}^{182}\)Ta, by only a millimeter.
S. L. and W.-T. L. are supported by the National Science and Technology Council of Taiwan (Grant No. 110-2112-M-008-027-MY3, 110-2639-M-007 -001-ASP, 111-2923-M-008-004-MY3 & 111-2639-M-007-001-ASP). S. A. is supported by National Science Foundation of China (Grant No. 11975155).
|
2310.15916 | In-Context Learning Creates Task Vectors | In-context learning (ICL) in Large Language Models (LLMs) has emerged as a
powerful new learning paradigm. However, its underlying mechanism is still not
well understood. In particular, it is challenging to map it to the "standard"
machine learning framework, where one uses a training set $S$ to find a
best-fitting function $f(x)$ in some hypothesis class. Here we make progress on
this problem by showing that the functions learned by ICL often have a very
simple structure: they correspond to the transformer LLM whose only inputs are
the query $x$ and a single "task vector" calculated from the training set.
Thus, ICL can be seen as compressing $S$ into a single task vector
$\boldsymbol{\theta}(S)$ and then using this task vector to modulate the
transformer to produce the output. We support the above claim via comprehensive
experiments across a range of models and tasks. | Roee Hendel, Mor Geva, Amir Globerson | 2023-10-24T15:17:14Z | http://arxiv.org/abs/2310.15916v1 | # In-Context Learning Creates Task Vectors
###### Abstract
In-context learning (ICL) in Large Language Models (LLMs) has emerged as a powerful new learning paradigm. However, its underlying mechanism is still not well understood. In particular, it is challenging to map it to the "standard" machine learning framework, where one uses a training set \(S\) to find a best-fitting function \(f(x)\) in some hypothesis class. Here we make progress on this problem by showing that the functions learned by ICL often have a very simple structure: they correspond to the transformer LLM whose only inputs are the query \(x\) and a single "task vector" calculated from the training set. Thus, ICL can be seen as compressing \(S\) into a single task vector \(\mathbf{\theta}(S)\) and then using this task vector to modulate the transformer to produce the output. We support the above claim via comprehensive experiments across a range of models and tasks.1
Footnote 1: We release our code at [https://github.com/roeehendel/icl_task_vectors](https://github.com/roeehendel/icl_task_vectors).
## 1 Introduction
Large language models have improved dramatically over the last several years. One striking property of these models is that they can learn new rules from very few demonstrations. For instance, a model can be prompted with the input _"Apple \(\rightarrow\)Red, \(\textit{Lime}\rightarrow\)Green, Corn \(\rightarrow\)"_ and produce the output _"Yellow"_. The model has thus learned a mapping based on just two examples, which it can apply correctly to new examples. This capability, referred to as In-Context Learning (ICL), has been used extensively, yielding impressive empirical results Brown et al. (2020); Liu et al. (2023); Dong et al. (2022).
Given this success, it is natural to ask what is the underlying mechanism behind ICL. Namely, how does the model internally use the demonstrations \(S\) and the query \(x\) to produce the required output? Here we approach this question by utilizing the concept of a hypothesis class from statistical learning theory Shalev-Shwartz and Ben-David (2014). In the learning-theoretic formulation, one typically considers a hypothesis class \(\mathcal{H}\), where every element of \(\mathcal{H}\) is a function \(h(x;\mathbf{\theta})\), operating on the input \(x\), and specified by a parameter vector \(\mathbf{\theta}\). For example, if \(x\in\mathbb{R}^{d}\) then the class \(\mathcal{H}\) could be the set of linear classifiers, defined by a coefficient vector \(\mathbf{\theta}\) as \(h(x;\mathbf{\theta})=\mathbf{\theta}\cdot x\). Learning algorithms seek an element \(h\in\mathcal{H}\) that fits the training set well. This is known as Empirical Risk Minimization.
It is unclear whether ICL operates in such a way because the prediction is performed via \(T([S,x])\), where \(T\) is typically an auto-regressive transformer
Figure 1: **ICL as learning in a Hypothesis Class.** In ICL, one provides an LLM with a prompt including demonstrations \(S\) of some task, and a query \(x\). The model generates the output for \(x\) (here “Yellow”). We show that the underlying process can be broken down into two parts: \(\mathcal{A}\), a “learning algorithm” (marked in blue), computes a query-agnostic vector \(\mathbf{\theta}(S)\), which we view as a parameter of a function in a hypothesis class. The second part, denoted by \(f\) and marked in yellow, is the application of the rule defined by \(\mathbf{\theta}\) on the query \(x\), without direct dependence on \(S\).
and \([S,x]\) is a concatenation of the tokens in \(S\) and \(x\). Thus, in the general case, it can be an arbitrary function that operates on \(S\) and \(x\) to produce the output. This can include "non-parametric" methods such as nearest-neighbor. Recent work has begun to explore this question. For example, it was shown that when training a transformer from scratch to perform linear regression in context, the emerging learning algorithm is similar to Stochastic Gradient Descent (Akyurek et al., 2022; von Oswald et al., 2022). However, for LLMs performing more complex natural language tasks, it is not at all clear what the hypothesis space may be.
In this work, we show that on a wide range of tasks, ICL in LLMs can be viewed as working on a very natural hypothesis space. We argue that, given a training set \(S\), the transformer maps it into a "task vector" \(\mathbf{\theta}(S)\) that essentially represents the mapping/rule described in \(S\).2 Namely, given the transformer \(T\) and a vector \(\mathbf{\theta}\), we can construct a new function \(f(x;\mathbf{\theta})\) that implements the task. The function \(f\) is very similar to the original transformer applied to \(x\)_without_ demonstrations but instead modulated by \(\mathbf{\theta}\) (see Fig. 2).
Footnote 2: The term “task vector” was coined by Ilharco et al. (2023) for directions in weight space that correspond to a particular task. Although our vectors are in “activations space” they share a similar motivation and thus we overload the term.
Our view is also related to soft prompts (Lester et al., 2021), since both approaches modulate the function of the transformer towards a particular task. However, in ICL, task vectors are calculated in the forward pass rather than being fine-tuned.
Our contributions include proposing a hypothesis-class based mechanistic view of ICL, and conducting experiments to validate our view on a range of publicly available LLMs and a diverse set of tasks. Our results further the understanding of ICL and may have practical implications for the efficient adaptation of LLMs to perform specific tasks.
## 2 A Hypothesis Class View of ICL
Motivated by the hypothesis class view of learning theory, our goal is to understand if ICL maps the set of demonstrations \(S\) to a function on the query \(x\) and how this mapping occurs. Specifically, we seek to see if ICL converts \(S\) into \(\mathbf{\theta}\) - the "parameters" of a function within a certain hypothesis space. Our empirical findings suggest this view is applicable, shedding light on the structure of the hypothesis space on which ICL can be viewed to operate.
### Theoretical Framework
We use \(T\) to denote a decoder-only transformer LLM, \(S\) to denote the set of demonstrations (i.e. training examples) used as input to ICL, and \(x\) to denote the query that ICL is asked to provide an output for. We use \(T([S,x])\) to denote the output of ICL on the concatenation of \(S\) and \(x\).
To demonstrate that ICL operates within a hypothesis space, we aim to show that its underlying mechanism can be broken down into two parts:
* A **"Learning Algorithm"** (denoted by \(\mathcal{A}\)) that maps \(S\) into a "task vector" \(\mathbf{\theta}\), independent of the query \(x\). Given that attention layers can access both \(S\) and \(x\), this independence is not trivial.
* A **"Rule Application"** (denoted by \(f\)) which maps the query \(x\) to the output, based on \(\mathbf{\theta}\equiv\mathcal{A}(S)\), without direct dependence on \(S\). Again, this independence is not trivial.
Thus, we consider the following mapping from a set of demonstrations and a query to the predicted output: \(T([S,x])=f(x;\mathcal{A}(S))\).
If we can break down the forward pass of the LLM into the above two components, we can view ICL as operating on the following hypothesis class: \(\mathcal{H}=\{f(\cdot;\mathbf{\theta})\mid\mathbf{\theta}\}\). In the next section we propose an implementation of such a class.
### A Proposed Hypothesis Class
There are many possible realizations of the above framework that correspond to different choices of \(\mathcal{A}\) and \(f\). We next describe the realization we focus on, which naturally follows from the transformer architecture. We consider an ICL setting as in Fig. 1, where the input ends with a query \(x\) (i.e., Corn) followed by an "\(\rightarrow\)" symbol. As mentioned above, we view learning as composed of two steps: calculating a parameter vector \(\mathbf{\theta}\) based on the training sample \(S\), and applying the rule defined by this parameter vector to the query \(x\). A presumably simple way for a transformer to do this is to have the first \(L\) layers compute \(\mathbf{\theta}\) in the representation of the \(\rightarrow\) token, and the remaining layers take \(\mathbf{\theta}\) and \(x\) as input and produce an output. See Fig. 1. Recall that \(S\) and \(x\) are accessible to the transformer at any layer, which presents a challenge for our view.
In the following sections, we address this challenge and present experiments validating our view. Namely, we show that we can isolate our proposed \(\mathcal{A}\) and \(f\) in the forward pass of LLMs performing ICL. We also show that the \(\mathbf{\theta}\) vectors are interpretable and correspond to learned tasks.
## 3 Validity of the Hypothesis Class View
We first show that separating the forward pass into the two distinct components \(\mathcal{A}\) and \(f\), defined in §2.2, maintains the high accuracy of ICL.
### Separating \(\mathcal{A}\) and \(f\)
We face some challenges in a regular forward pass: first, the initial \(L\) layers that correspond to \(\mathcal{A}\), updating the representations of \(\rightarrow\) to create \(\boldsymbol{\theta}\), can attend to the query \(x\). Thus, they may depend on \(x\), creating an unwanted dependence of \(\boldsymbol{\theta}\) on \(x\). Second, the remaining layers that correspond to \(f\), may directly access \(S\), instead of using only \(x\) and \(\boldsymbol{\theta}\).
We propose the following procedure to tackle these challenges: to solve the first problem, we introduce a "dummy query" \(x^{\prime}\) and calculate the representations of \(\rightarrow\) using that query. We use the representation of \(\rightarrow\) after the first \(L\) layers, calculated using \(x^{\prime}\), as the vector \(\boldsymbol{\theta}\) (as demonstrated on the left side of Fig. 2). An alternative was to block attention to \(x\), but it led to poor performance. To solve the second problem of calculating \(f(x,\boldsymbol{\theta})\) without allowing direct dependence on \(S\), we perform a forward pass of the transformer only on \(x\) and \(\rightarrow\),3 and "patch" the \(\boldsymbol{\theta}\) we previously extracted at the \(L\)th layer of the \(\rightarrow\) (right side of Fig. 2).4
Footnote 3: Ignoring positional embeddings, this is equivalent to blocking the attention to \(S\) in these layers.
Footnote 4: Note that the second token can actually be anything, because it is overridden by patching. We use \(\rightarrow\) for simplicity.
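A hedged sketch of this two-pass procedure with a HuggingFace-style causal LM is given below; it is ours, not the authors' released implementation (see footnote 1). The module path `model.model.layers`, the assumption that the \(\rightarrow\) symbol ends up as the last input token, and the off-by-one between `hidden_states` indices and decoder blocks are all model-specific details that would need checking:

```python
# Sketch (ours) of (A, f) via hidden-state extraction and patching.
import torch

@torch.no_grad()
def task_vector(model, tok, demos, dummy_query, L):
    """A: forward [S, x'] and read the dummy '->' state at layer L."""
    ids = tok(demos + f"{dummy_query} ->", return_tensors="pt").input_ids
    hs = model(ids, output_hidden_states=True).hidden_states
    return hs[L][0, -1]        # hs[0] is the embedding output

@torch.no_grad()
def apply_rule(model, tok, query, theta, L):
    """f: forward only [x, ->], patching theta into the '->' slot."""
    def patch(_module, _inputs, output):
        (output[0] if isinstance(output, tuple) else output)[0, -1] = theta
        return output
    # hs[L] is the output of decoder block L-1, hence the index below.
    handle = model.model.layers[L - 1].register_forward_hook(patch)
    try:
        ids = tok(f"{query} ->", return_tensors="pt").input_ids
        logits = model(ids).logits[0, -1]
    finally:
        handle.remove()
    return tok.decode(logits.argmax().item())
```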
### Tasks and Models
**Tasks** We consider a diverse set of 18 tasks across 4 categories: algorithmic, translation, linguistic, and factual knowledge. For simplicity, we limit ourselves to single-token outputs. A representative subset of the tasks is described in Tab. 1. A complete detailed table, as well as more information regarding the data, is provided in §A.1.
**Models** We use multiple open LLMs: LLaMA 7B, 13B, and 30B (Touvron et al., 2023), GPT-J 6B (Wang and Komatsuzaki, 2021), and Pythia 2.8B, 6.9B, and 12B (Biderman et al., 2023).
### Finding \(L\)
The mechanism we described in SS2.2 has a free parameter - the layer \(L\) where \(\mathcal{A}\) ends and \(f\) begins. We use the proposed \((\mathcal{A},f)\) implementation for different choices of \(L\) and evaluate the accuracy on a development set to find the best layer.
Fig. 3 shows the accuracy on the development set, for different choices of \(L\). We focus here on the LLaMA models and include the rest in §A.2. Interestingly, all models exhibit a performance peak at a similar intermediate layer, irrespective of differences in parameter count and number of layers.
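A sketch of this sweep, reusing the `task_vector`/`apply_rule` helpers from the earlier sketch; the development set is an assumed list of `(demos, query, answer)` triples.

```python
# Layer sweep (a sketch): evaluate the (A, f) pipeline for each candidate L
# and keep the layer with the best development accuracy.
def best_layer(dev_set, num_layers: int) -> int:
    def accuracy(L: int) -> float:
        hits = [apply_rule(q, task_vector(d, "Plum", L), L).strip() == a
                for d, q, a in dev_set]
        return sum(hits) / len(hits)
    return max(range(1, num_layers + 1), key=accuracy)
```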
\begin{table}
\begin{tabular}{l l l}
**Category** & **Task** & **Example** \\ \hline
Algorithmic & Next letter & a \(\rightarrow\) b \\
 & List first & a,b,c \(\rightarrow\) a \\
 & List last & a,b,c \(\rightarrow\) c \\
 & To uppercase & a \(\rightarrow\) A \\ \hline
Translation & French to English & bonjour \(\rightarrow\) hello \\
 & Spanish to English & hola \(\rightarrow\) hello \\ \hline
Linguistic & Present to gerund & go \(\rightarrow\) going \\
 & Singular to plural & cat \(\rightarrow\) cats \\
 & Antonyms & happy \(\rightarrow\) sad \\ \hline
Knowledge & Country to Capital & France \(\rightarrow\) Paris \\
 & Person to Language & Macron \(\rightarrow\) French \\ \hline
\end{tabular}
\end{table}
Table 1: A representative subset of the tasks used in the study with input \(\rightarrow\) output examples.
Figure 3: Accuracy for each choice of the intermediate layer \(L\), averaged across all tasks. Solid lines show average values, and shaded areas standard deviations.
Figure 2: **Separating \(\mathcal{A}\) and \(f\).** To make \(\boldsymbol{\theta}\) independent of the query \(x\), we use a dummy query (\(x^{\prime}=\) Plum) and use the representation of \(\rightarrow\) at the \(L^{th}\) layer as \(\boldsymbol{\theta}\). The vector \(\boldsymbol{\theta}\) is then patched at the same layer during a forward pass of a transformer that only takes \(x\) and \(\rightarrow\) as input, to prevent the direct dependence of \(f\) on \(S\).
### Accuracy of Hypothesis Based Prediction
We next compare the accuracy of the \((\mathcal{A},f)\) mechanism to that of a regular forward pass performing ICL. For each model and task, we evaluate the following three procedures:
* **Regular** An application of the LLM to the demonstrations \(S\) and query \(x\). Namely \(T([S,x])\), as in regular ICL.
* **Hypothesis** Our proposed procedure from §3.1 where \(\mathcal{A}\) generates \(\mathbf{\theta}\) using a dummy \(x^{\prime}\), and \(f(\cdot;\mathbf{\theta})\) is applied to \(x\) by running the transformer on \([x,\rightarrow]\) with \(\mathbf{\theta}\) patched at layer \(L\) of \(\rightarrow\).
* **Baseline** A forward pass of the LLM only on \(x\), without demonstrations \(S\). That is, \(T([x,\rightarrow])\). This is the same as the application of \(f\) from our separated procedure, but without patching \(\mathbf{\theta}\).
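A hedged sketch of this three-way comparison follows; `greedy` is an assumed helper for plain greedy decoding, and the test-set format is an assumption.

```python
def greedy(prompt: str) -> str:
    """Plain forward pass: decode the argmax next token for a prompt."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        return tok.decode(model(ids).logits[0, -1].argmax().item()).strip()

def evaluate(test_set, L: int):
    reg = hyp = base = 0
    for demos, query, answer in test_set:
        reg += greedy(demos + query + " ->") == answer           # Regular
        theta = task_vector(demos, "Plum", L)
        hyp += apply_rule(query, theta, L).strip() == answer     # Hypothesis
        base += greedy(query + " ->") == answer                  # Baseline
    n = len(test_set)
    return reg / n, hyp / n, base / n
```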
Fig. 4 shows the average accuracy across all tasks of these 3 procedures, for each model. Full results are reported in Tab. 6 in §A.2. Across all models, our procedure maintains around 80-90% of the accuracy of regular ICL, while the baseline reaches only 10-20%. This shows that our proposed separation to \(\mathcal{A}\) and \(f\) provides a good empirical approximation of the process underlying ICL.
## 4 Robustness of Task Vectors
In our setting, \(\mathbf{\theta}\) is derived from \(S\) and a dummy query \(x^{\prime}\). It is natural to examine the robustness of \(\mathbf{\theta}\) to variations in these inputs. Intuitively, if it represents the task, it should remain stable across different \(S\) and \(x^{\prime}\) values.
To test this, we use LLaMA 7B to generate 50 task vectors per task with varied \(S\) and \(x^{\prime}\) and conduct two analyses.
**Geometry of \(\mathbf{\theta}\)** A t-SNE dimensionality reduction (Fig. 5) reveals that the task vectors form distinct clusters, each containing task vectors of a single task. Fig. 9 further shows proximity between tasks of the same category, strengthening the idea that they encapsulate task understanding.
**Variability of \(\mathbf{\theta}\)** Fig. 8 shows histograms of distances within and across tasks. It can be seen that vectors within the same task are closer than those between different tasks, indicating that \(\mathbf{\theta}\) is stable within tasks and not highly influenced by \(x^{\prime}\) or \(S\).
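Both analyses can be reproduced with a short script; the sketch below assumes the 50 task vectors per task have been collected into a dict `vectors` mapping task names to lists of NumPy arrays (an assumed data layout).

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.stack([v for vs in vectors.values() for v in vs])
labels = np.array([t for t, vs in vectors.items() for _ in vs])

# Geometry: 2D t-SNE embedding (Fig. 5 shows one cluster per task).
emb = TSNE(n_components=2).fit_transform(X)

# Variability: within- vs. across-task pairwise distances (Fig. 8 style).
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
same = labels[:, None] == labels[None, :]
# (diagonal zeros are included in the within-task mean; fine for a rough check)
print(D[same].mean(), D[~same].mean())  # within-task mean should be smaller
```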
## 5 Dominance of \(\mathbf{\theta}\) Patching
In §3 we prevented \(f\) from directly accessing \(S\). However, in a regular forward pass during ICL, the last token can attend to \(S\). Here we verify that even in this case, \(f\) mainly uses the task vector \(\mathbf{\theta}\), without directly accessing the demonstrations \(S\). To this end, we use a pair of tasks, \(A\) and \(B\), sharing the input space but differing on the output. We first use a "Regular" forward pass, where we provide the model with demonstrations \(S\) for task \(A\) (denoted \(S_{A}\)), to verify the model can perform this task using ICL. Then, we do a "Conflicting" forward pass, still providing \(S_{A}\), while injecting \(\mathbf{\theta}_{B}\). For more details, refer to Fig. 6 in §A.1.
Figure 4: Average accuracy across all tasks for each model, using each of the three procedures: Baseline, Regular and Hypothesis.
Figure 5: **A t-SNE plot of task vectors. A 2D t-SNE plot visualizing 50 task vectors for each task, each generated from a different choice of \(S\) and \(x^{\prime}\) using LLaMA 7B. Points are color-coded according to the task. Each task can be seen to form its own distinct cluster.**
In Tab. 2, the "Regular" forward pass shows high accuracy on task \(A\) (90%+), as anticipated. However, the "Conflicting" forward pass yields high accuracy on task \(B\), corresponding to the injected task vector \(\mathbf{\theta}\). This implies that the model mainly relies on \(\mathbf{\theta}\), largely disregarding the demonstrations \(S\) for task \(A\). We note that the accuracy on task \(B\) is slightly low, likely consistent with the performance dip seen in Fig. 6, and potentially further affected by the presence of \(S\).
## 6 Interpreting \(\mathbf{\theta}\)
The learned vector \(\mathbf{\theta}\) intuitively captures information about the task demonstrated by \(S\). Here we provide evidence supporting this interpretation. Since \(\mathbf{\theta}\) is an intermediate hidden state of the transformer, we can employ a vocabulary projection method (nostalgebraist, 2020; Dar et al., 2022). Namely, we examine the top tokens in the distribution over the vocabulary induced by the hidden state.
Tab. 3 shows the top tokens for three tasks for LLaMA 13B (more models and tasks are provided in Tab. 7 in §A). In multiple cases, we observe tokens that directly describe the task. Importantly, these terms never explicitly appeared in the context. For example in the task of translation from French to English, we observe tokens such as "English" and "translate". This supports our view that \(\mathbf{\theta}\) carries significant, non-trivial semantic information about the task.
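A sketch of this projection, assuming a LLaMA-style layout where the final norm and unembedding are exposed as `model.model.norm` and `model.lm_head`:

```python
def top_tokens(theta: torch.Tensor, k: int = 10):
    """Project a hidden state onto the vocabulary ("logit lens")."""
    logits = model.lm_head(model.model.norm(theta))
    return tok.convert_ids_to_tokens(logits.topk(k).indices.tolist())
```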
## 7 Related Work
**Emergence of ICL** A key question with ICL is how it emerges as a capability from pre-training the LLMs. Levine et al. (2022) provides results in this direction that highlight the importance of training data structure. Xie et al. use probabilistic analysis and model pre-training data using Hidden Markov Models to theoretically explain the emergence of ICL, while Chan et al. (2022) empirically explore the effect of several distributional properties of the pre-training data.
**Meta-Learning in Transformers** Studies by Akyurek et al. (2022); von Oswald et al. (2022); Garg et al. focus on the meta-learning capabilities of transformers. They typically train models from scratch on elementary tasks such as linear regression, drawing theoretical parallels with algorithms like Gradient Descent and demonstrating how transformers could implement them. A key assumption of these works is a known parameter space within which gradient descent operates. Our work focuses on identifying such a parameter space for LLMs.
**ICL in LLMs** Olsson et al. (2022) identify "induction heads" in transformers as a likely main mechanism of ICL. Dai et al. (2022) provide empirical evidence for the connection of ICL to Gradient Descent in LLMs, focusing on classification tasks. Concurrent work by Merullo et al. (2023) also explores a phenomenon similar to the task vectors we study here, where a single vector can encode learned functions. Our findings are complementary to theirs, and future work could explore the relationship between the two more closely.
## 8 Conclusions
Through this exploration of ICL in LLMs, we have shed light on a new perspective of ICL learning mechanisms. We have revealed a simple and elegant structure: ICL functions by compressing a given training set into a single task vector, which then guides the transformer to generate appropriate outputs given queries. Our work provides a stepping stone towards understanding how LLMs perform ICL. In light of our findings, future work could focus on understanding how the task vector is constructed as well as how it is used to calculate the output.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Task \(A\) (\(S\)) & Task \(B\) (\(\mathbf{\theta}\)) & Regular & Conflicting \\ & & Task \(A\) & Task \(B\) \\ \hline Next Letter & To Upper & 0.92 & 0.77 \\ List Last & List First & 0.95 & 0.78 \\ Present to Past & to Gerund & 0.96 & 0.95 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Conflicting tasks experiment results. The model’s accuracy on the relevant task (\(A\) in “Regular” and \(B\) in “Conflicting”) is displayed for both scenarios.**
\begin{table}
\begin{tabular}{l l} \hline \hline
**Task** & **Top tokens in the task vector projection** \\ \hline Previous & e, y, unknown, alphabet, preceding, c \\ Letter & Cad, zA, dit, bill \\ \hline FR-EN & Mason, gram, immer, Santi, latin, \\ & utter, Span, Conc, English, equivalent \\ \hline Present & cin, thats, gram, Lorenzo, cian, \\ Simple to & Isabel, ud, berto, partici, Sah \\ Gerund & \\ \hline Country & Paris, its, capital, central, Conc, cities, administrative, Los, Madrid, London \\ \hline \hline \end{tabular}
\end{table}
Table 3: The top 10 tokens in the distribution induced by the task vector, for one task per category.
### Limitations
We study relatively simple tasks, whereas ICL can learn to perform more complex tasks, such as solving arithmetic reasoning problems. It remains to be seen if and how the mechanisms we observe here will translate to these cases. E.g., our approach focuses on cases where a single task vector suffices, while more complex ICL cases may require more elaborate parameterization. We also focus on tasks where the output is a single token, while some other tasks require multi-token outputs.
Finally, as noted above, we do not provide a mechanistic explanation for how the task vector is formed or how it is used. Namely, we do not explain how the transformer performs these calculations using its parameters.
## Acknowledgements
This project is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant ERC HOLI 819080).
2306.16521 | Enumerative Theory for the Tsetlin Library | Sourav Chatterjee, Persi Diaconis, Gene B. Kim | 2023-06-28T19:33:51Z | http://arxiv.org/abs/2306.16521v1

# Enumerative Theory for the Tsetlin Library
###### Abstract
The Tsetlin library is a well-studied Markov chain on the symmetric group \(S_{n}\). It has stationary distribution \(\pi(\sigma)\) the Luce model, a nonuniform distribution on \(S_{n}\), which appears in psychology, horse race betting, and tournament poker. Simple enumerative questions, such as "what is the distribution of the top \(k\) cards?" or "what is the distribution of the bottom \(k\) cards?" are long open. We settle these questions and draw attention to a host of parallel questions on the extension to the chambers of a hyperplane arrangement.
In memory of Georgia Benkart
## 1 Introduction
Let \(\theta_{1},\theta_{2},\ldots,\theta_{n}\) be positive real numbers. The **Luce model**\(\pi(\sigma)\) is a probability distribution on the symmetric group \(S_{n}\) driven by these weights. In words, "put \(n\) balls with weights \(\theta_{1},\theta_{2},\ldots,\theta_{n}\) into an urn. Each time, withdraw a ball from the urn (sampling without replacement) with probability proportional to its weight (relative to the remaining balls)." Thus, if \(w_{n}=\theta_{1}+\cdots+\theta_{n}\),
\[\pi(\sigma)=\frac{\theta_{\sigma(1)}}{w_{n}}\frac{\theta_{\sigma(2)}}{w_{n}- \theta_{\sigma(1)}}\frac{\theta_{\sigma(3)}}{w_{n}-\theta_{\sigma(1)}-\theta_ {\sigma(2)}}\ldots\frac{\theta_{\sigma(n)}}{\theta_{\sigma(n)}}. \tag{1}\]
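Operationally, a draw from (1) is just weighted sampling without replacement; a minimal sketch (the labels and weights below are illustrative):

```python
import random

def luce_sample(theta):
    """Draw a permutation from the Luce model with weights theta."""
    labels, weights, sigma = list(range(1, len(theta) + 1)), list(theta), []
    while labels:
        i = random.choices(range(len(labels)), weights=weights)[0]
        sigma.append(labels.pop(i))  # the drawn ball, removed from the urn
        weights.pop(i)
    return sigma

print(luce_sample([5, 1, 1, 1]))  # label 1 tends to be drawn first
```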
In Section 2, a host of applied problems are shown to give rise to the Luce model. These include the celebrated Tsetlin library, a Markov chain on \(S_{n}\), described as: "at each step, choose a card labeled \(i\) with probability proportional to \(\theta_{i}\) and move it to the top" - \(\pi(\sigma)\) is the stationary distribution of this Markov chain.
It is natural to ask basic enumerative questions: pick \(\sigma\) from \(\pi(\sigma)\).
* What is the distribution of the number of fixed points, cycles, longest cycle, and order of \(\sigma\)?
* What is the distribution of the length of the longest increasing subsequence of \(\sigma\)?
Most of these questions are open at this time. The main results below settle
* What is the distribution of the top \(k\) cards \(\sigma(1),\ldots,\sigma(k)\)? (Section 3)
* What is the distribution of the bottom \(k\) cards \(\sigma(n-k+1),\ldots,\sigma(n)\)? (Section 4)
Weighted sampling without replacement from a finite population is a standard topic. This may be accessed from the Wikipedia entries of "Horvitz-Thompson estimator" [31] and "concomitant order statistics."
Coming closer to combinatorics, two papers by Rosen [40] treat the coupon collector's problem and other coverage problems.
The recent paper by Ben-Hamou, Peres and Salez [6] couples sampling with and without replacement so that tail and concentration bounds, derived for partial sums when sampling with replacement, are seen to apply "as is" to sampling without replacement.
A final item; throughout, we have assumed that the weights \(\theta_{i}\) are fixed and known. It is also natural to consider random weights. For a full development, see [39].
Section 5 develops the connections of the Tsetlin library to the Bidigare-Hanlon-Rockmore (BHR) walk on the chambers of a real hyperplane arrangement. Understanding the stationary distributions of these Markov chains is almost completely open.
Section 2 begins with a review of enumerative group theory. These questions make sense for continuous groups. Georgia Benkart made fundamental contributions here through her work on decomposing tensor products.
## 2 Background
### Enumerative Group Theory
Let \(G\) be a finite group. A classical question is "pick \(g\in G\) at random. What does it look like?" For example, if \(G=S_{n}\),
* What is the distribution of \(F(g)\), the number of fixed points of \(g\)?
* How many cycles are typical?
* What is the expected length of the longest cycle?
* What about the length of the longest increasing subsequence of \(g\)?
* What about the descent structure of \(g\)?
* How many inversions are typical?
All of these questions have classical answers (references below).
For \(G=GL_{n}(q)\), parallel questions involve the conjugacy class structure of a random \(g\in G\). For a splendid development (for finite groups of Lie type), see Fulman [26], which has full references to the results above. The recent survey of Diaconis and Simper [22] brings this up to date. It focuses on enumeration by double cosets \(H\setminus G/K\).
The questions above make sense for continuous groups, where they become "random matrix theory." For example, when \(G=O_{n}\) (the real orthogonal group), one may study the eigenvalues of \(g\in G\) under Haar measure by studying the powers of traces
\[\int_{O_{n}}(\operatorname{Tr}(g))^{k}\,\mathrm{d}g.\]
Patently this asks for the number of times the trivial representation appears in the \(k\)th tensor power of the usual \(n\)-dimensional representation of \(O_{n}\). See [19] for details.
Georgia Benkart did extensive work on decomposing tensor powers of representations of classical (and more general) groups. She worked on this with many students and coauthors. Her monograph with Britten and Lemire [7] is a convenient reference. Most of this work can be translated into probabilistic limit theorems. We started to do this with Georgia during MSRI 2018, but got sidetracked into doing a parallel problem working over fields of prime characteristic in joint work with Benkart-Diaconis-Liebeck-Tiep [8].
Most all of the above is enumeration under the uniform distribution. A recent trend in enumerative (probabilistic) group theory is enumeration under natural _non_-uniform distributions. For example, on \(S_{n}\),
* The Ewens measure \(\pi_{\theta}(\sigma)=Z^{-1}(\theta)\theta^{C(\sigma)}\). Here, \(\theta\) is a fixed positive real number, \(C(\sigma)\) is the number of cycles of \(\sigma\), and \(Z^{-1}(\theta)\) is a simple normalizing constant. The Ewens measure originated in biology, but has blossomed into a large set of applications. See Crane [17].
* The Mallows measure \(\pi_{\theta}(\sigma)=Z^{-1}(\theta)\theta^{I(\sigma)}\), where \(I(\sigma)\) is the number of inversions of \(\sigma\). This was originally studied for taste testing experiments but has again had a huge development.
* More generally, if \(G\) is a finite group and \(S\subseteq G\) is a symmetric generating set, let \(\ell(g)\) be the length function and define \(P_{\theta}(g)=Z^{-1}(\theta)\theta^{\ell(g)}\). Ewens and Mallows models are special cases with \(G=S_{n}\) and \(S=\{\text{all transpositions}\}\) and \(S=\{\text{all adjacent transpositions}\}\).
Most of the questions studied above under the uniform distribution have been fully worked out under Ewens and Mallows measures. See the survey by Diaconis and Simper [22] for pointers to a large literature.
The above can be amplified to "permutons" [30] and "theons" [16]. It shows that enumeration under non-uniform distributions is an emerging and lively subject. We turn next to the main subject of the present paper.
### The Luce model
This section gives several applications where the Luce model appears.
#### 2.2.1 Psychology
In psychophysics experiments, a panel of subjects are asked to rank things, such as:
* Here are seven shades of red; rank them in order of brightness.
* Here are five tones; rank them from high to low.
* The same type of task occurs in taste-testing experiments. Rank these five brands of chocolate chip cookies (or wines, etc.) in order of preference.
This generates a collection of rankings (permutations) and one tries to draw conclusions.
Patently, rankings vary stochastically; if the same person is asked the same question at a later time, we expect the answers to vary slightly.
Duncan Luce introduced the model (1) via the simple idea that each item has a true weight (say, \(\theta_{i}\)) and the model (1) induces natural variability (which can then be compared with observed data).
Indeed, he did more, crafting a simple set of axioms for pairwise comparison and showing that any consistent ranking distribution has to follow (1) for some choice of \(\theta_{i}\). This story is well and clearly told in [32] and [33].
We would be remiss in not pointing to the widespread dissatisfaction over the "independence of irrelevant alternatives" axiom in Luce's derivation. The long Wikipedia article on "irrelevance of alternatives" chronicles experiments and theory disputing this, not only for Luce but in Arrow's paradox and several related developments. Amos Tversky's "elimination by aspects (EBA)" model is a well-liked alternative.
#### 2.2.2 Exponential formulation
Luce's work followed fifty years of effort to model such rankings. Early work of Thurstone and Spearman postulated "true weights" \(\theta_{1},\ldots,\theta_{n}\) for the ordered values and supposed people perceived \(\theta_{i}+\varepsilon_{i},1\leq i\leq n\) with \(\varepsilon_{i}\) independent normal \(\mathcal{N}(0,\sigma^{2})\). They then reported the ordering of these perturbed values.
Yellott [44] noticed that if in fact the \(\varepsilon_{i}\) had an extreme value distribution, with distribution function \(e^{-e^{-x/r}},-\infty<x<\infty\), then the associated Thurstonian ranking model is exactly the Luce model!
It is elementary that if the random variable \(Y\) has an exponential distribution (\(P(Y>x)=e^{-x}\)), then \(-\log Y\) has an extreme value distribution. This gives the following theorem (used in Section 4):
**Theorem 2.1**.: _For \(1\leq i\leq n\), let \(X_{i}\) be independent exponential random variables on \([0,\infty)\) with density_
\[\theta_{i}e^{-x\theta_{i}}\]
_(so \(X_{i}=Y_{i}/\theta_{i}\) with \(Y_{i}\) the standard exponential). Then, the chance of the event_
\[X_{1}<X_{2}<\cdots<X_{n}\]
_is (with \(w_{n}=\theta_{1}+\cdots+\theta_{n}\))_
\[\frac{\theta_{1}}{w_{n}}\cdot\frac{\theta_{2}}{w_{n}-\theta_{1}}\cdot\frac{ \theta_{3}}{w_{n}-\theta_{1}-\theta_{2}}\cdots\frac{\theta_{n}}{\theta_{n}}.\]
Proof.: Consider the event \(X_{1}<X_{2}<\cdots<X_{n}\). The chance of this is
\[\theta_{1} \cdots\theta_{n}\int_{x_{1}=0}^{\infty}\int_{x_{2}=x_{1}}^{ \infty}\cdots\int_{x_{n}=x_{n-1}}^{\infty}\exp\left\{-\sum_{i=1}^{n}x_{i} \theta_{i}\right\}\,dx_{1}\cdots dx_{n}\] \[=\frac{\theta_{1}\cdots\theta_{n}}{\theta_{n}}\int_{x_{1}=0}^{ \infty}\cdots\int_{x_{n-1}=x_{n-2}}^{\infty}\exp\left\{-\sum_{i=1}^{n-2}x_{i} \theta_{i}-x_{n-1}\left(\theta_{n-1}+\theta_{n}\right)\right\}\,dx_{1}\cdots dx _{n-1}\] \[=\frac{\theta_{1}\cdots\theta_{n}}{\theta_{n}\left(\theta_{n}+ \theta_{n-1}\right)}\int_{x_{1}=0}^{\infty}\cdots\int_{x_{n-2}=x_{n-3}}^{ \infty}\exp\left\{-\sum_{i=1}^{n-3}x_{i}\theta_{i}-x_{n-2}\left(\theta_{n-2}+ \theta_{n-1}+\theta_{n}\right)\right\}\,dx_{1}\cdots dx_{n-2}\] \[=\frac{\theta_{1}\cdots\theta_{n}}{\theta_{n}(\theta_{n}+\theta_{ n-1})(\theta_{n}+\theta_{n-1}+\theta_{n-2})\cdots\left(\theta_{n}+\cdots+ \theta_{1}\right)},\]
which is indeed equal to
\[\frac{\theta_{1}}{w_{n}}\cdot\frac{\theta_{2}}{w_{n}-\theta_{1}}\cdot\frac{ \theta_{3}}{w_{n}-\theta_{1}-\theta_{2}}\cdots\frac{\theta_{n}}{\theta_{n}}.\]
Thus, the order statistics follow the Luce model (1).
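A quick Monte Carlo check of Theorem 2.1 (a sketch; the weights are arbitrary illustrative values):

```python
import random

theta = [3.0, 2.0, 1.0]
exact, rem = 1.0, sum(theta)
for t in theta:                      # the product in Theorem 2.1
    exact *= t / rem
    rem -= t

trials, hits = 200_000, 0
for _ in range(trials):
    xs = [random.expovariate(t) for t in theta]  # X_i with rate theta_i
    hits += xs == sorted(xs)                     # event X_1 < X_2 < X_3
print(exact, hits / trials)          # the two numbers should agree closely
```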
For an application of the exponential representation to survey sampling, see Gordon [27].
#### 2.2.3 Tsetlin library
The great algebraist Tsetlin was forced to work in a library science institute. While there, he postulated (and solved) the following problem:
Consider \(n\) library books arranged in order \(1,2,\ldots,n\). Suppose book \(i\) has popularity \(\theta_{i}\). During the day, patrons come and pick up book labeled \(i\) with probability \(\theta_{i}/w_{n}\) and after perusing, they replace it at the left end of the row.
This is a Markov chain on \(S_{n}\) and Tsetlin [43] showed that it has (1) as its stationary distribution.
The same model has been repeatedly rediscovered; in computer science, the books are discs in deep storage. When a disc is called for, it is replaced on the front of the queue to cut down on future search costs. See Dobrow and Fill [23].
The model (and its stationary distribution) appear in genetics as the GEM (Griffiths-Engen-McCloskey) distribution [24].
Over the years, a host of properties of the Tsetlin chain have been derived. For example, Phatarfod [38] found a simple formula for the eigenvalues and Diaconis [20] found sharp rates of convergence to stationarity (including a cutoff) for a wide class of weights. See further Nestoridi [35]. All of this is now subsumed under "hyperplane walks"; see Section 5.
**A monotonicity property.** Suppose, without essential loss, that \(\theta_{1}\geq\theta_{2}\geq\cdots\geq\theta_{n}>0\). Then,
* The largest \(\pi(\sigma)\) is for \(\sigma=\mathrm{id}\).
* The smallest \(\pi(\sigma)\) is for \(\sigma=(n,n-1,\ldots,1)\).
* More generally, \(\pi(\sigma)\) is monotone decreasing in the weak Bruhat order on permutations.
To explain, the weak Bruhat order is a partial order on \(S_{n}\) with cover relations \(\sigma\preceq\sigma^{\prime}\) if \(\sigma\) can be reached from \(\sigma^{\prime}\) by a single adjacent transposition of the \(i\)th and \((i+1)\)th symbols when \(\sigma^{\prime}(i)<\sigma^{\prime}(i+1)\). Thus, when \(n=3\), the six permutations of \(S_{3}\) form a hexagon under this order.
**Proposition 2.2**.: _For \(\theta_{1}\geq\theta_{2}\geq\cdots\geq\theta_{n}\), \(\pi(\sigma)\) is monotone decreasing in the weak Bruhat order._
Proof.: The formulas for \(\pi(\sigma)\) and \(\pi(\sigma^{\prime})\) only differ in one term. If \(\sigma_{i}>\sigma_{i+1}\) and these terms were transposed from \(\sigma^{\prime}\), then
\[\frac{\pi(\sigma)}{\pi(\sigma^{\prime})}=\frac{1-\theta_{\sigma_{1}}-\theta_{ \sigma_{2}}-\cdots-\theta_{\sigma_{i+1}}}{1-\theta_{\sigma_{1}}-\theta_{\sigma _{2}}-\cdots-\theta_{\sigma_{i}}}<1.\]
**Irrelevance of alternatives.** The following property characterizes the Luce measure (1): let \(\{i_{1},i_{2},\ldots,i_{k}\}\) be a subset of \(\{1,2,\ldots,n\}\). Fix \(\theta_{1},\ldots,\theta_{n}\) and pick \(\sigma\) from \(\pi(\sigma)\). The distribution of cards labeled \(\{i_{1},\ldots,i_{k}\}\) follows the Luce model with parameters \(\theta_{i_{1}},\ldots,\theta_{i_{k}}\). For example, the chance that \(i\) is above \(j\) in \(\sigma\) is \(\frac{\theta_{i}}{\theta_{i}+\theta_{j}}\). This is easy to see from the exponential representation.
#### 2.2.4 Order statistics and a natural choice of weights
Many questions in probability and mathematical statistics can be reduced to the study of the order statistics of uniform random variables on \([0,1]\) by using the simple fact that, if \(X\) is a real random variable with continuous distribution function \(F(x)\) (so \(P(X\leq x)=F(x)\)), then
\(Y=F(X)\) is uniformly distributed on \([0,1]\). This implies that standard goodness of fit tests (e.g., Kolmogorov-Smirnov) have distributions that are universal under the null hypothesis (they do not depend on \(F\)). If \(Y\) is uniform on \([0,1]\), then \(-\log Y\) is standard exponential as above, so order statistics of independent exponentials are a mainstream object of study. A marvelous introduction to this set of ideas is in Chapter 3 of [25] with Ronald Pyke's articles on spacings [37] providing deeper results.
With this background, let \(Y_{1},Y_{2},\ldots,Y_{n}\) be independent standard exponentials on \((0,\infty)\). Denote the order statistics by \(Y_{(1)}\leq Y_{(2)}\leq\cdots\leq Y_{(n)}\). The following property is easy to prove [25].
**Theorem 2.3**.: _With above notation,_
\[Y_{(1)},Y_{(2)}-Y_{(1)},Y_{(3)}-Y_{(2)},\ldots,Y_{(n)}-Y_{(n-1)}\]
_are independent exponential random variables with distributions_
\[Y_{(1)}\sim E_{1}/n,\qquad Y_{(2)}-Y_{(1)}\sim E_{2}/(n-1),\qquad\ldots\qquad Y _{(n)}-Y_{(n-1)}\sim E_{n},\]
_where \(E_{1},\ldots,E_{n}\) are independent standard exponentials (density \(e^{-x}\) on \((0,\infty)\))._
It follows from our Luce calculations that the chance that the smallest spacing is \(Y_{(1)}\) is \(\frac{n}{{n+1\choose 2}}=\frac{2}{n+1}\), and that the smallest spacing is \(Y_{(2)}-Y_{(1)}\) is \(\frac{n-1}{{n+1\choose 2}}\), and so on. Specifically, \(Y_{(j+1)}-Y_{(j)}\) has probability \(\frac{n-j}{{n+1\choose 2}}\) of being smallest, and \(Y_{(n)}-Y_{(n-1)}\) has chance \(\frac{1}{{n+1\choose 2}}\) of being smallest.
The whole permutation is given by the Luce model (1) with \(\theta_{i}=n-i+1\). This classical fact is due to Sukhatme [42]. We will call these **Sukhatme weights** in the following discussion.
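Sukhatme's observation is easy to check numerically; the following sketch estimates the chance that \(Y_{(1)}\) is the smallest spacing and compares it with \(2/(n+1)\):

```python
import random

n, trials, count = 5, 100_000, 0
for _ in range(trials):
    ys = sorted(random.expovariate(1.0) for _ in range(n))
    spacings = [ys[0]] + [b - a for a, b in zip(ys, ys[1:])]
    count += min(range(n), key=spacings.__getitem__) == 0
print(count / trials, 2 / (n + 1))  # empirical vs. theoretical value
```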
#### 2.2.5 Application to poker and the ICM (independent chip model)
In tournament poker (e.g., the World Series of Poker), suppose there are \(n\) players at the final table with player \(i\) having \(\theta_{i}\) dollars. It is current practice among top players to assume that the order of the players, as they are eliminated, follows the Luce model (with the player having the largest \(\theta_{i}\) least likely to be eliminated; thus most likely to win all the money), and so on. This is called the ICM (independent chip model) and is used as a basis for splitting the total capital and for calculating chances as the game progresses. For careful details and references, see Diaconis-Ethier [21], which disputes the model.
#### 2.2.6 Applications to horse racing
In horse racing, players can bet on a horse to win, place (come in second), or show (come in third). The "crowd" does a good job of determining the chances of each of the \(n\) horses running to come in first. Call the amount bet on horse \(i\) just before closing, \(\theta_{i}\). However, the crowd does
a poor job of judging the chance of a horse showing. Often, there is sufficient disparity between the crowd's bet and the true odds that money can be made (perhaps one race in four). This is despite the track's rake being 17% of the total. A group of successful bettors uses the \(\theta_{i}\)'s and the Luce model to evaluate the chance of placing. For details, see Hausch, Lo and Ziemba [29] or Harville [28].
With this list of applications, we trust we have sufficient motivation to ask "what does the distribution (1), \(\pi(\sigma)\), look like?"
## 3 The top \(k\) cards
Throughout this section, without loss of generality, assume \(\theta_{1}+\cdots+\theta_{n}=1\). For \(\theta\) and \(k\) fixed, let
\[P(\sigma_{1}\,\sigma_{2}\,\cdots\,\sigma_{k})=\frac{\theta_{\sigma_{1}}\theta _{\sigma_{2}}\cdots\theta_{\sigma_{k}}}{(1-\theta_{\sigma_{1}})(1-\theta_{ \sigma_{1}}-\theta_{\sigma_{2}})\cdots(1-\theta_{\sigma_{1}}-\cdots-\theta_{ \sigma_{k-1}})} \tag{2}\]
denote the measure induced on the top \(k\) cards by the Luce measure. It is cumbersome to compute, e.g.,
\[P(\sigma_{2})=\theta_{\sigma_{2}}\sum_{i\neq\sigma_{2}}\frac{\theta_{i}}{1- \theta_{i}}.\]
On the other hand, the Luce measure is just sampling from an urn without replacement. If \(\{\theta_{i}\}\) are "not too wild" and \(k\) is small, then sampling with or without replacement should be "about the same." This is made precise in two metrics.
Let \(Q(\sigma_{1}\,\sigma_{2}\,\cdots\,\sigma_{k})\) be the product measure
\[Q(\sigma_{1}\,\sigma_{2}\,\cdots\,\sigma_{k})=\theta_{\sigma_{1}}\theta_{ \sigma_{2}}\cdots\theta_{\sigma_{k}}, \tag{3}\]
where \(\sigma_{1},\ldots,\sigma_{k}\) need not be distinct. Both \(P\) and \(Q\) depend on \(\{\theta_{i}\}\) and \(k\), but this is suppressed below. Define
\[d_{\infty}(P,Q)=\max_{\sigma}\left(1-\frac{Q(\sigma_{1}\,\cdots\,\sigma_{k}) }{P(\sigma_{1}\,\cdots\,\sigma_{k})}\right)\]
and
\[\|P-Q\|_{TV}=\frac{1}{2}\sum_{\sigma}\left|P(\sigma_{1}\,\cdots\,\sigma_{k})- Q(\sigma_{1}\,\cdots\,\sigma_{k})\right|,\]
where, in both formulas, \(\sigma_{1},\ldots,\sigma_{k}\) are not necessarily distinct, and \(P(\sigma_{1}\,\cdots\,\sigma_{k})=0\) if they are not distinct. Clearly, \(\|P-Q\|_{TV}\leq d_{\infty}(P,Q)\).
**Theorem 3.1**.: _For \(\theta_{1}+\cdots+\theta_{n}=1\), \(\theta_{i}\leq\frac{1}{2}\) for all \(1\leq i\leq n\),_
\[d_{\infty}(P,Q)\leq 1-\exp\left\{-2\left((k-1)\theta_{(1)}+(k-2)\theta_{(2)}+ \cdots+\theta_{(k-1)}\right)\right\}.\]
_Here, \(\theta_{(1)}\geq\theta_{(2)}\geq\cdots\geq\theta_{(n)}\) are the ordered values._
**Theorem 3.2**.: _As \(n\to\infty\), suppose \(\binom{k}{2}\sum_{i=1}^{n}\theta_{i}^{2}\to\lambda\). Then,_
\[\|P-Q\|_{TV}\sim 1-e^{-\lambda}.\]
In Theorem 3.2, \(\{\theta_{i}\}\) form a triangular array, but again, this is suppressed in the notation. The remarks below point to non-asymptotic versions.
Proof of Theorem 3.1.: From the definitions,
\[d_{\infty}(P,Q)=\max_{\sigma}\left(1-(1-\theta_{\sigma_{1}})(1-\theta_{\sigma_{ 1}}-\theta_{\sigma_{2}})\cdots(1-\theta_{\sigma_{1}}-\cdots-\theta_{\sigma_{k- 1}})\right),\]
where the maximum is over all \(\sigma_{1},\ldots,\sigma_{k}\) distinct (because, if they are not distinct, then we have that \(1-\frac{Q(\sigma_{1}\,\cdots\,\sigma_{k})}{P(\sigma_{1}\,\cdots\,\sigma_{k})} =-\infty\), which does not contribute to the maximum). Use \(-2x\leq\log(1-x)\leq-x\) for \(0\leq x\leq\frac{1}{2}\). Since all \(\theta_{i}\leq\frac{1}{2}\),
\[d_{\infty}(P,Q)\leq\max_{\sigma}1-\exp\left\{-2\left(\theta_{\sigma_{1}}+( \theta_{\sigma_{1}}+\theta_{\sigma_{2}})+\cdots+(\theta_{\sigma_{1}}+\cdots+ \theta_{\sigma_{k-1}})\right)\right\}.\]
The right-hand side is maximized for \(\sigma_{1},\ldots,\sigma_{k-1}\) with the largest weights.
Proof of Theorem 3.2.: A preparatory observation is useful:
\[\|P-Q\|_{TV}=\sum_{\sigma:P(\sigma)\geq Q(\sigma)}(P(\sigma)-Q(\sigma))=1-P_{ Q}(\sigma_{1},\ldots,\sigma_{k}\text{ are distinct}).\]
This is just the chance that there are two or more balls in the same box if \(k\) balls are dropped independently into \(n\) boxes, the chance of box \(i\) being \(\theta_{i}\). This non-uniform version of the classical birthday problem has been well-studied. If \(X_{ij}\) is \(1\) or \(0\) as balls \(i,j\) are dropped into the same box and
\[W=\sum_{1\leq i<j\leq k}X_{ij},\]
\(E(W)=\binom{k}{2}\sum_{i=1}^{n}\theta_{i}^{2}\). Under the condition \(E(W)\to\lambda\), \(W\) is known to have a limiting Poisson(\(\lambda\)) distribution and \(P_{Q}(W=0)\sim e^{-\lambda}\). See Chatterjee-Diaconis-Meckes [15] or Barbour-Holst-Janson [4] for further details and more quantitative bounds.
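A sketch of the underlying birthday computation: drop \(k\) balls independently with probabilities \(\theta_{i}\) and compare the collision probability with \(1-e^{-\lambda}\) (the Sukhatme weights are used here for illustration):

```python
import math
import random

n, k, trials = 1000, 20, 20_000
theta = [2 * i / (n * (n + 1)) for i in range(1, n + 1)]  # sums to 1
lam = math.comb(k, 2) * sum(t * t for t in theta)

collisions = sum(
    len(set(random.choices(range(n), weights=theta, k=k))) < k
    for _ in range(trials)
)
print(collisions / trials, 1 - math.exp(-lam))  # should be close
```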
**Example 3.3**.: Consider the Sukhatme weights from Section 2.2.4:
\[\theta_{i}=\frac{n+1-i}{\binom{n+1}{2}}.\]
The exponent for the right-hand side of Theorem 3.1 is
\[\frac{2}{\binom{n+1}{2}}\left\{(k-1)n+(k-2)(n-1)+\cdots+(n-k+2)\right\}.\]
Simple asymptotics show that for \(k=c\sqrt{n}\), \(c>0\), this is
\[4c^{2}+\mathcal{O}\left(\frac{1}{\sqrt{n}}\right).\]
So,
\[d_{\infty}(P,Q)\leq 1-e^{-4c^{2}+\mathcal{O}\left(\frac{1}{\sqrt{n}}\right)},\]
and \(k\ll\sqrt{n}\) suffices for the product measure to be a useful approximation to the first \(k\) coordinates of the Luce measure. With \(k=c\sqrt{n}\),
\[\binom{k}{2}\sum_{i=1}^{n}\theta_{i}^{2}\sim\frac{c^{2}}{3}=\lambda\]
giving a similar approximation in total variation.
**Example 3.4**.: The bound in Theorem 3.1 is useful when
\[(k-1)\theta_{(1)}+\cdots+\theta_{(k-1)}\]
is small. To see that this condition is needed, take \(\theta_{1}=\frac{1}{2}\), \(\theta_{i}=\frac{1}{2(n-1)}\) for \(2\leq i\leq n\). For \(k=2\),
\[P(1\,2)=\frac{\theta_{1}\theta_{2}}{1-\theta_{1}},\qquad Q(1\,2)=\theta_{1} \theta_{2},\]
and so, \(d_{\infty}(P,Q)=1-(1-\theta_{1})=\frac{1}{2}\). This does not tend to zero when \(n\) is large. The two-sided bounds for \(\log(1-x)\) show Theorem 3.1 is sharp in this sense for general \(k\).
**Example 3.5**.: As discussed above, if the infinity distance tends to zero, then total variation tends to zero. Here is a choice of weights \(\theta_{i}\) for which the joint distribution of the first \(k\) coordinates of the Luce model is close to i.i.d. in total variation, but not in infinity distance.
Fix \(k,1\leq k\leq n\) and let \(\theta_{i}=k^{-7/4}\) for \(i\leq k\) and
\[\theta_{i}=\frac{1-k^{-3/4}}{n-k}\]
for \(i>k\) so that \(\theta_{1}\geq\theta_{2}\geq\cdots\geq\theta_{n}>0\) and \(\sum_{i=1}^{n}\theta_{i}=1\). Note that
\[\binom{k}{2}\sum_{i=1}^{n}\theta_{i}^{2}\leq k^{2}\sum_{i=1}^{k}\theta_{i}^{2} +k^{2}\sum_{i=k+1}^{n}\theta_{i}^{2}\leq k^{-1/2}+\frac{k^{2}}{n-k}\]
while
\[\sum_{i=1}^{k-1}(k-i)\theta_{i}=k^{-7/4}\sum_{i=1}^{k-1}(k-i)=\frac{1}{2} \left(k^{-3/4}(k-1)\right).\]
Thus, if \(1\ll k\ll\sqrt{n}\), \(\sum_{i=1}^{k-1}(k-i)\theta_{i}\) is large, but \(\binom{k}{2}\sum_{i=1}^{n}\theta_{i}^{2}\) is small. Moreover, if \(k=\lambda\sqrt{n}\), then \(\binom{k}{2}\sum_{i=1}^{n}\theta_{i}^{2}\to\frac{\lambda^{2}}{2}\), but \(\sum_{i=1}^{k-1}(k-i)\theta_{i}\to\infty\).
**Remark 3.6**.:
1. Our proof of Theorem 3.2 used the Poisson approximation for the non-uniform version of the birthday problem. There are other possible limits which can be used to bound \(\|P-Q\|_{TV}\). See [14].
2. It is easy to see that \[Q_{\theta}(\sigma_{1},\ldots,\sigma_{k}\text{ distinct})=k!\,e_{k}(\theta_{1},\theta_{2},\ldots,\theta_{n}),\] where \(e_{k}\) is the \(k\)th elementary symmetric function. From here, Muirhead's theorem shows \(\|P-Q\|_{TV}\) is a Schur-convex function of \(\theta_{1},\ldots,\theta_{n}\), smallest when \(\theta_{i}=\frac{1}{n}\).
## 4 The bottom \(k\) cards
### Introduction
For naturally occurring weights, the bottom \(k\) cards behave very differently from the top \(k\) cards. To illustrate by example, consider the Sukhatme weights of Section 2.2.4:
\[\theta_{i}=\frac{i}{{n+1\choose 2}},\qquad 1\leq i\leq n.\]
The results of Section 3 show that, for large \(n\),
\[P\left(\frac{\sigma_{1}}{n}\leq x\right)\sim 2\int_{0}^{x}y\,\mathrm{d}y.\]
That is, \(\sigma_{1}/n\) has a limiting \(\beta(2,1)\) distribution.
Using Theorems 3.1 and 3.2, the same holds for \(\sigma_{i}/n\) for fixed \(i\ll\sqrt{n}\). Of course, large numbers have higher probabilities, but all values in \(\{1,2,\ldots,n\}\) occur.
In contrast, consider the value of bottom card \(\sigma_{n}\). Intuitively, this should be small since the high numbers have higher weights. We were surprised to find
\[P(\sigma_{n}=1)\sim 0.516\ldots\]
In fact, we computed, using a result that follows, that
\begin{tabular}{|c|l|} \hline \(\ell\) & \(P(\ell\text{ is last})\) \\ \hline
1 & 0.516094 \\
2 & 0.213212 \\
3 & 0.107310 \\
4 & 0.0597505 \\
5 & 0.0354888 \\
6 & 0.0220716 \\
7 & 0.0142167 \\
8 & 0.00941619 \\
9 & 0.00638121 \\
10 & 0.00440862 \\ \hline \end{tabular}

The section below sets up its own notation from first principles.
### Main result
Let \(\mathbb{N}\) denote the set of positive integers and let \(\mathbb{N}^{\mathbb{N}}\) be the set of all maps from \(\mathbb{N}\) into \(\mathbb{N}\). Consider the topology of pointwise convergence on \(\mathbb{N}^{\mathbb{N}}\). This topology is naturally metrizable with a complete separable metric, and so we can talk about convergence of probability measures on this space.
Now suppose that for each \(n\), \(\sigma_{n}\) is a random element of the symmetric group \(S_{n}\). We can extend \(\sigma_{n}\) to a random element of \(\mathbb{N}^{\mathbb{N}}\), by defining \(\sigma_{n}(i)=i\) for \(i>n\).
**Proposition 4.1**.: _Let \(\sigma_{n}\) be as above. Then \(\sigma_{n}\) converges in law as \(n\to\infty\) if and only if for each \(k\), the random vector \((\sigma_{n}(1),\dots,\sigma_{n}(k))\) converges in law as \(n\to\infty\)._
Proof.: Since the coordinate maps on \(\mathbb{N}^{\mathbb{N}}\) are continuous in the topology of pointwise convergence, one direction is clear.
For the other direction, suppose that for each \(k\), \((\sigma_{n}(1),\dots,\sigma_{n}(k))\) converges in law as \(n\to\infty\). Notice that for any sequence of positive integers \(a_{1},a_{2},\dots\), the set
\[\Big{\{}f\in\mathbb{N}^{\mathbb{N}}:f(i)\leq a_{i}\text{ for all }i\Big{\}} \tag{4}\]
is a compact subset of \(\mathbb{N}^{\mathbb{N}}\), since any infinite sequence in this set has a convergent subsequence by a diagonal argument. Take any \(\varepsilon>0\). By the given condition, \(\sigma_{n}(i)\) converges in law as \(n\to\infty\) for each \(i\). In particular, \(\{\sigma_{n}(i)\}_{n\geq 1}\) is a tight family, and so there is some number \(a_{i}\) such that for each \(n\),
\[P(\sigma_{n}(i)>a_{i})\leq 2^{-i}\varepsilon.\]
Therefore if \(K\) denotes the set defined in (4) above, then for each \(n\),
\[P(\sigma_{n}\in K)\geq 1-\sum_{i=1}^{\infty}P(\sigma_{n}(i)>a_{i})\geq 1- \sum_{i=1}^{\infty}2^{-i}\varepsilon=1-\varepsilon.\]
This proves that \(\{\sigma_{n}\}_{n\geq 1}\) is a tight family of random variables on \(\mathbb{N}^{\mathbb{N}}\). Therefore the proof will be complete if we can show that any probability measure on \(\mathbb{N}^{\mathbb{N}}\) is determined by its finite dimensional distributions. But this is an easy consequence of Dynkin's \(\pi\)-\(\lambda\) theorem.
The above proposition implies, for instance, that if \(\sigma_{n}\) is a uniform random element of \(S_{n}\), then \(\sigma_{n}\) does not converge in law on \(\mathbb{N}^{\mathbb{N}}\), because \(\sigma_{n}(1)\) does not converge in law.
Let \(0<\theta_{1}\leq\theta_{2}\leq\cdots\) be a non-decreasing infinite sequence of positive real numbers. For each \(n\), consider the Luce model on \(S_{n}\) with parameters \(\theta_{1},\dots,\theta_{n}\). Let \(\sigma_{n}\) be the _reverse_ of a random permutation drawn from this model. That is, \(\sigma_{n}(1)\) is the last ball that was drawn and \(\sigma_{n}(n)\) is the first. As we know from prior discussions, an equivalent definition is the following. Let \(X_{1},X_{2},\dots\) be an infinite sequence of independent random variables, where \(X_{i}\) has exponential distribution with mean \(1/\theta_{i}\). Then \(\sigma_{n}\in S_{n}\) is the permutation such that \(X_{\sigma_{n}(1)}>X_{\sigma_{n}(2)}>\cdots>X_{\sigma_{n}(n)}\).
**Theorem 4.2**.: _Let \(\sigma_{n}\) be as above. For each \(x\geq 0\), let_
\[f(x):=\sum_{i=1}^{\infty}e^{-\theta_{i}x},\]
_where we allow \(f(x)\) to be \(\infty\) if the sum diverges. Let_
\[x_{0}:=\inf\left\{x:f(x)<\infty\right\},\]
_with the convention that the infimum of the empty set is \(\infty\). Then \(\sigma_{n}\) converges in law as \(n\to\infty\) if and only if \(x_{0}<\infty\) and \(f(x_{0})=\infty\). Moreover, if this condition holds, then the limiting finite dimensional probability mass functions are given by the following formula: For any \(k\) and any distinct positive integers \(a_{1},\ldots,a_{k}\),_
\[\lim_{n\to\infty}P(\sigma_{n}(1)=a_{1},\ldots,\sigma_{n}(k)=a_{k})=\int_{x_{1} >x_{2}>\cdots>x_{k}>0}\prod_{j=1}^{k}(\theta_{a_{j}}e^{-\theta_{a_{j}}x_{j}}) \prod_{i\not\in\{a_{1},\ldots,a_{k}\}}(1-e^{-\theta_{i}x_{k}})\;\mathrm{d}x_{1 }\cdots\mathrm{d}x_{k}.\]
Before proving the theorem, let us work out some simple examples. Suppose that \(\theta_{i}=i\) for each \(i\). This corresponds to the Luce model with the Sukhatme weights. Then clearly \(f(x)<\infty\) for all \(x>0\), and hence \(x_{0}=0\). Also, clearly, \(f(0)=\infty\). Therefore in this case \(\sigma_{n}\) converges in law as \(n\to\infty\). Moreover, by the formula displayed above,
\[\lim_{n\to\infty}P(\sigma_{n}(1)=1)=\int_{0}^{\infty}e^{-x}\prod_{j=2}^{\infty }(1-e^{-jx})dx=\int_{0}^{1}\prod_{j=2}^{\infty}(1-y^{j})dy.\]
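This integral is easy to evaluate numerically; the following sketch truncates the infinite product (the grid size and truncation point are arbitrary choices) and recovers the value \(0.516\ldots\) reported in the table of Section 4.1.

```python
import numpy as np

y = np.linspace(0.0, 1.0, 200_001)[:-1]  # grid on [0, 1), endpoint dropped
prod = np.ones_like(y)
for j in range(2, 400):                  # truncate the product at j = 399
    prod *= 1.0 - y**j                   # omitted factors are ~1 wherever
                                         # the integrand is not already ~0
print(np.trapz(prod, y))                 # ~ 0.516094
```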
On the other hand, for the case of uniform random permutations, \(\theta_{i}=1\) for all \(i\). In this case, \(f(x)=\infty\) for all \(x\), and hence \(x_{0}=\infty\). Thus, the theorem implies that \(\sigma_{n}\) does not converge in law (which we know already).
Next, suppose that \(\theta_{i}=\beta\log(i+1)\) for some \(\beta>0\). Here \(f(x)<\infty\) for \(x>1/\beta\) and \(f(x)=\infty\) for \(x\leq 1/\beta\). Thus, \(x_{0}=1/\beta\) and \(f(x_{0})=\infty\), and so by the theorem, \(\sigma_{n}\) converges in law.
Strangely, \(\sigma_{n}\)_does not_ converge in law if \(\theta_{i}=\log(i+1)+2\log\log(i+1)\). To see this, note that in this case,
\[f(x)=\sum_{i=1}^{\infty}\frac{1}{(i+1)^{x}(\log(i+1))^{2x}}.\]
Thus, \(f(x)<\infty\) for \(x>1\) and \(f(x)=\infty\) for \(x<1\), showing that \(x_{0}=1\). But
\[f(x_{0})=\sum_{i=1}^{\infty}\frac{1}{(i+1)(\log(i+1))^{2}}<\infty,\]
which violates the second criterion required for convergence. This shows that we cannot determine convergence purely by inspecting the rate of growth of \(\theta_{i}\). The criterion is more subtle than that.
What happens if the tightness criterion does not hold? In this case, the formula for the limit of \(P(\sigma_{n}(1)=a_{1},\ldots,\sigma_{n}(k)=a_{k})\) remains valid, but it may not represent a probability mass function, i.e., the sum over all \(a_{1},\ldots,a_{k}\) may be strictly less than \(1\).
Proof of Theorem 4.2.: Take any \(k\geq 1\) and distinct positive integers \(a_{1},\ldots,a_{k}\). Take \(n\geq\max_{1\leq i\leq k}a_{i}\). Let \(E_{n}\) be the event \(\{\sigma_{n}(1)=a_{1},\ldots,\sigma_{n}(k)=a_{k}\}\). Then
\[P(E_{n})=P(X_{a_{1}}>X_{a_{2}}>\cdots>X_{a_{k}}>X_{i}\;\forall i\in[ n]\setminus\{a_{1},\ldots,a_{k}\})\] \[=\int_{x_{1}>x_{2}>\cdots>x_{k}>0}\prod_{j=1}^{k}(\theta_{a_{j}} e^{-\theta_{a_{j}}x_{j}})\prod_{i\in[n]\setminus\{a_{1},\ldots,a_{k}\}}(1-e^{- \theta_{i}x_{k}})\,\mathrm{d}x_{1}\cdots\mathrm{d}x_{k}.\]
By the dominated convergence theorem, this gives
\[\lim_{n\to\infty}P(E_{n})=\int_{x_{1}>x_{2}>\cdots>x_{k}>0}\prod_{j=1}^{k}( \theta_{a_{j}}e^{-\theta_{a_{j}}x_{j}})\prod_{i\notin\{a_{1},\ldots,a_{k}\}}(1 -e^{-\theta_{i}x_{k}})\,\mathrm{d}x_{1}\cdots\mathrm{d}x_{k}.\]
Thus, we have shown that for any \(k\) and distinct positive integers \(a_{1},\ldots,a_{k}\), \(\lim_{n\to\infty}P(\sigma_{n}(1)=a_{1},\ldots,\sigma_{n}(k)=a_{k})\) exists, and also found the desired formula for the limit. However, we have not shown convergence in law because we have not established tightness. (This is not surprising, because we did not use any properties of the \(\theta_{i}\)'s yet.) From what we have done until now, it follows that \((\sigma_{n}(1),\ldots,\sigma_{n}(k))\) converges in law as \(n\to\infty\) if and only if it is a tight family. But this holds if and only if \(\{\sigma_{n}(i)\}_{n\geq 1}\) is a tight family for every \(i\). We will now complete the proof of the theorem by showing that \(\{\sigma_{n}(i)\}_{n\geq 1}\) is a tight family for every \(i\) if and only if \(x_{0}<\infty\) and \(f(x_{0})=\infty\).
First, suppose that \(\{\sigma_{n}(1)\}_{n\geq 1}\) is a tight family. Then there is some \(a\) such that
\[\lim_{n\to\infty}P(\sigma_{n}(1)=a)>0.\]
From the above calculation, we know that
\[\lim_{n\to\infty}P(\sigma_{n}(1)=a)=\int_{0}^{\infty}\theta_{a}e^{-\theta_{a} x}\prod_{i\neq a}(1-e^{-\theta_{i}x})\,\mathrm{d}x.\]
If this is nonzero, then there is at least one \(x>0\) for which
\[\prod_{i=1}^{\infty}(1-e^{-\theta_{i}x})>0.\]
But this implies that
\[f(x)=\sum_{i=1}^{\infty}e^{-\theta_{i}x}<\infty.\]
Thus, \(x_{0}<\infty\). Next, we show that \(f(x_{0})=\infty\). Suppose not. Then \(x_{0}>0\), since \(f(0)=\infty\). Fix a positive integer \(a\). For each \(n\geq a\), let \(A_{n}\) be the event \(\{\sigma_{n}(1)\leq a\}\). Let \(F_{n}\) be the event \(\{\max_{i\leq n}X_{i}\leq x_{0}\}\). Take any \(x\in(0,x_{0})\) and let \(G_{n}\) be the event \(\{\max_{i\leq n}X_{i}\leq x\}\). Then
\[P(A_{n}) \leq P(A_{n}\cap(F_{n}\setminus G_{n}))+P((F_{n}\setminus G_{n}) ^{c})\] \[=P(A_{n}\cap(F_{n}\setminus G_{n}))+P(F_{n}^{c}\cup G_{n})\] \[\leq P(A_{n}\cap(F_{n}\setminus G_{n}))+P(G_{n})+P(F_{n}^{c}).\]
If the event \(A_{n}\cap(F_{n}\setminus G_{n})\) happens, then \(\max_{i\leq n}X_{i}\) belongs to the interval \((x,x_{0}]\), and one of \(X_{1},\ldots,X_{a}\) is the maximum among \(X_{1},\ldots,X_{n}\). Thus, in particular, one of \(X_{1},\ldots,X_{a}\) is in \((x,x_{0}]\). Plugging this into the above inequality, we get
\[P(A_{n})\leq\sum_{i=1}^{a}(e^{-\theta_{i}x}-e^{-\theta_{i}x_{0}})+\prod_{i=1}^{ n}(1-e^{-\theta_{i}x})+1-\prod_{i=1}^{n}(1-e^{-\theta_{i}x_{0}}).\]
Since \(f(x)=\infty\), we have \(\prod_{i=1}^{\infty}(1-e^{-\theta_{i}x})=0\). Thus, taking \(n\to\infty\) on both sides, we get
\[\lim_{n\to\infty}P(A_{n})\leq\sum_{i=1}^{a}(e^{-\theta_{i}x}-e^{-\theta_{i}x_{ 0}})+1-\prod_{i=1}^{\infty}(1-e^{-\theta_{i}x_{0}}).\]
Now notice that the definition of \(A_{n}\) does not involve \(x\). So we can take \(x\nearrow x_{0}\) on the right, which makes the first term vanish and leaves the rest as it is. Thus,
\[\lim_{n\to\infty}P(A_{n})\leq 1-\prod_{i=1}^{\infty}(1-e^{-\theta_{i}x_{0}}).\]
But the assumed finiteness of \(f(x_{0})\) implies that the product on the right is strictly positive. Thus, we get an upper bound on \(\lim_{n\to\infty}P(A_{n})\) which is less than \(1\). But observe that this upper bound does not depend on \(a\). This contradicts the tightness of \(\sigma_{n}(1)\), thereby completing the proof of one direction of the theorem.
Next, suppose that \(x_{0}<\infty\) and \(f(x_{0})=\infty\). We consider two cases. First, suppose that \(x_{0}=0\). Then \(f(x)<\infty\) for each \(x>0\). But
\[f(x)=\sum_{i=1}^{\infty}P(X_{i}>x). \tag{5}\]
Therefore by the Borel-Cantelli lemma, \(X_{i}\to 0\) almost surely as \(i\to\infty\). Now take any \(i\) and integers \(n\) and \(a\) bigger than \(i\). Then the event \(\sigma_{n}(i)\geq a\) implies that
\[\max_{j\geq a}X_{j}>\min\left\{X_{1},\ldots,X_{i}\right\},\]
because otherwise the \(i\)th largest value among \((X_{j})_{j=1}^{n}\) cannot be one of \((X_{j})_{j\geq a}\). Thus,
\[P(\sigma_{n}(i)\geq a)\leq P\left(\max_{j\geq a}X_{j}>\min\left\{X_{1},\ldots, X_{i}\right\}\right).\]
But the right side is a function of only \(a\) (and not \(n\)), and tends to zero as \(a\to\infty\) because \(X_{j}\to 0\) almost surely as \(j\to\infty\). This proves tightness of \(\left\{\sigma_{n}(i)\right\}_{n\geq 1}\) when \(x_{0}=0\).
Next, consider the case \(x_{0}>0\). For convenience, let us define the partial sums
\[f_{n}(x):=\sum_{i=1}^{n}e^{-\theta_{i}x},\qquad g_{n}(x):=\prod_{i=1}^{n}\left( 1-e^{-\theta_{i}x}\right).\]
Take \(i\), \(n\) and \(a\) as before. Let \(x\) be a real number bigger than \(x_{0}\), to be chosen later. The event \(\sigma_{n}(i)\geq a\) implies that at least one of the following two events must happen: (a) There are less than \(i\) elements of \((X_{j})_{j=1}^{n}\) that are bigger than \(x\), or (b) \(X_{j}>x\) for some \(j\geq a\). This gives
\[P(\sigma_{n}(i)\geq a)\leq\sum_{A\subseteq[n],|A|<i}\left(\prod_{j\in A}e^{- \theta_{j}x}\right)\left(\prod_{j\in[n]\setminus A}\left(1-e^{-\theta_{j}x} \right)\right)+\sum_{j\geq a}e^{-\theta_{j}x}.\]
Now note that for any \(A\subseteq[n]\) with \(|A|<i\),
\[\prod_{j\in[n]\setminus A}\left(1-e^{-\theta_{j}x}\right)\leq\frac{g_{n}(x)}{ \prod_{j\in A}\left(1-e^{-\theta_{j}x}\right)}\leq\frac{g_{n}(x)}{\left(1-e^{- \theta_{1}x_{0}}\right)^{i-1}}.\]
Therefore
\[\sum_{A\subseteq[n],\ |A|<i}\left(\prod_{j\in A}e^{-\theta_{j}x} \right)\left(\prod_{j\in[n]\setminus A}\left(1-e^{-\theta_{j}x}\right)\right) \leq\frac{g_{n}(x)}{\left(1-e^{-\theta_{1}x_{0}}\right)^{i-1}} \sum_{A\subseteq[n],\ |A|<i}\left(\prod_{j\in A}e^{-\theta_{j}x}\right)\] \[\leq\frac{g_{n}(x)(1+f_{n}(x)+f_{n}(x)^{2}+\cdots+f_{n}(x)^{i-1} )}{(1-e^{-\theta_{1}x_{0}})^{i-1}}.\]
By the inequality \(1-x\leq e^{-x}\), we have \(g_{n}(x)\leq e^{-f_{n}(x)}\). Thus,
\[P(\sigma_{n}(i)\geq a)\leq\frac{e^{-f_{n}(x)}(1+f_{n}(x)+f_{n}(x)^{2}+\cdots+f _{n}(x)^{i-1})}{(1-e^{-\theta_{1}x_{0}})^{i-1}}+\sum_{j\geq a}e^{-\theta_{j}x}.\]
Let \(m\) be the largest integer such that \(\theta_{m}\leq 1/(x-x_{0})\). Suppose that \(n\geq m\). Then
\[f_{n}(x)\geq\sum_{j=1}^{m}e^{-\theta_{j}x}=\sum_{j=1}^{m}e^{-\theta_{j}x_{0}- \theta_{j}(x-x_{0})}\geq e^{-1}\sum_{j=1}^{m}e^{-\theta_{j}x_{0}}.\]
But \(m\to\infty\) as \(x\searrow x_{0}\), and \(f(x_{0})=\infty\) by assumption. Thus, the above inequality shows that given any \(L>0\), we can first choose \(x\) sufficiently close to \(x_{0}\), and then choose \(n_{0}\) sufficiently large, such that for all \(n\geq n_{0}\), \(f_{n}(x)\geq L\). Now take any \(\varepsilon>0\) and find \(L\) so large that for all \(y\geq L\),
\[\frac{e^{-y}(1+y+y^{2}+\cdots+y^{i-1})}{(1-e^{-\theta_{1}x_{0}})^{i-1}}<\frac{ \varepsilon}{2}.\]
Choose \(x\) and then \(n_{0}\) as in the previous paragraph corresponding to this \(L\). Then find \(a\) so large that
\[\sum_{j\geq a}e^{-\theta_{j}x}<\frac{\varepsilon}{2},\]
which exists since \(f(x)<\infty\). For this choice of \(a\), the above steps show that \(P(\sigma_{n}(i)\geq a)\leq\varepsilon\) for all \(n\geq n_{0}\). This proves tightness of \(\left\{\sigma_{n}(i)\right\}_{n\geq 1}\) when \(x_{0}>0\), completing the proof of the theorem.
## 5 A vast generalization - hyperplane walks
### Introduction
The Tsetlin library has seen vast generalizations in the past twenty years. In this section, we explain walks on the chambers of a hyperplane arrangement due to Bidigare-Hanlon-Rockmore [9] and Brown-Diaconis [12]. The Tsetlin library is a (very) special case of the braid arrangement. These Markov chains have a fairly complete theory (simple forms for the eigenvalues and good rates of convergence to stationarity). But the description of the stationary distribution, the analog of the Luce model, is indirect, involving a weighted sampling without replacement scheme. Thus the problem
_What does the stationary distribution of hyperplane walks look like?_
Section 5.2 sets things up and states the main theorems (with examples). The few cases where something is known are reported in Section 5.3. The final section points to semigroup walks where parallel problems remain open. The main point of this section is to cast Sections 2-4 above as contributions to a general problem.
### Hyperplane walks
We work in \(\mathbb{R}^{d}\). Let \(\mathcal{A}=\{H_{1},H_{2},\ldots,H_{k}\}\) be a finite collection of affine hyperplanes (translates of codimension one subspaces). These divide \(\mathbb{R}^{d}\) into
* chambers (points not on any \(H_{i}\)). Let \(\mathcal{C}\) be the chambers.
* faces (points on some \(H_{i}\) and on one side or another of others). Let \(\mathcal{F}\) be the faces.
A key notion is the projection of a chamber onto a face (Tits projection). For \(C\in\mathcal{C}\) and \(F\in\mathcal{F}\), PROJ \(C\to F\) is the unique chamber adjacent to \(F\) and closest to \(C\) (in the sense of crossing the fewest number of \(H_{i}\)'s). In the above figure, PROJ \(C\to F=C^{\prime}\).
Figure 1: Four lines in \(\mathbb{R}^{2}\). There are 10 chambers and 30 faces (chambers, points of intersection and the empty face are faces).
With these definitions, we are ready to walk. Choose face weights \(\{w_{F}\}_{F\in\mathcal{F}}\) with \(w_{F}\geq 0\) and \(\sum_{F\in\mathcal{F}}w_{F}=1\). Define a Markov chain \(\kappa(C,C^{\prime})\) on chambers via:
* from \(C\), choose \(F\in\mathcal{F}\) with probability \(w_{F}\) and move to PROJ \(C\to F\).
Thus, \(\kappa(C,C^{\prime})=\sum_{F:\,\text{PROJ}\,C\to F=C^{\prime}}w_{F}\).
**Example 5.1** (Boolean arrangements).: Let \(H_{i}=\left\{x\in\mathbb{R}^{d}:x_{i}=0\right\},1\leq i\leq d\) be the usual coordinate hyperplanes. These divide \(\mathbb{R}^{d}\) into \(2^{d}\) chambers (the usual orthants) and \(3^{d}\) faces. A face may be labeled by a vector of length \(d\) with entries \(0,\pm 1\): the \(i\)th entry is \(0\) if the face lies on the \(i\)th hyperplane, and \(\pm 1\) according to which side of it the face lies. Chambers are faces with no zeros. For PROJ \(C\to F=C^{\prime}\), set the \(i\)th coordinate of \(C^{\prime}\) to the \(i\)th coordinate of \(F\) if this is \(\pm 1\) and leave it as the \(i\)th coordinate of \(C\) if the \(i\)th coordinate of \(F\) is \(0\).
Thus, a walk proceeds via: from \(C\), pick a subset of coordinates and install \(\pm 1\) in them as determined by \(F\). For example, if
\[w_{F}=\begin{cases}\frac{1}{2d}&F=(0,\ldots,0,\pm 1,0,\ldots,0),\\ 0&\text{otherwise},\end{cases}\]
the walk becomes "pick a coordinate at random and replace it with \(\pm 1\) chosen uniformly." This is the celebrated Ehrenfest urn model of statistical physics. Dozens of natural specializations of these Boolean walks are spelled out in [12].
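A simulation of this special case takes a few lines (a sketch; \(d\) and the step count below are arbitrary):

```python
import random

d, steps = 10, 1000
C = [1] * d                        # start in the all-plus orthant
for _ in range(steps):
    i = random.randrange(d)        # face (0, ..., ±1, ..., 0): a coordinate i
    C[i] = random.choice([-1, 1])  # projection installs the face's sign
print(C)  # after many steps, close to uniform over the 2^d chambers
```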
**Example 5.2** (Braid arrangement).: Take \(H_{ij}=\left\{x\in\mathbb{R}^{d}:x_{i}=x_{j}\right\},1\leq i<j\leq d\). Now, the chambers are points in \(\mathbb{R}^{d}\) with no equal coordinates. It follows that the relative order is fixed within a chamber, so chambers can be labeled by permutations. The faces are indexed by "block ordered set partitions": coordinates within a block are equal and all coordinates in the first block are smaller than the coordinates in the second block, and so on.
For the projection, suppose the chamber labeled \(\pi\) is thought of as a deck of cards in arrangement \(\pi\) (with \(\pi(i)\) the label of the card at position \(i\)). Suppose \(d=5\) and the face is \(F=13/2/45\). Remove cards labeled \(1\) and \(3\) from \(\pi\) (keeping them in their same relative order), then remove the card labeled \(2\) and place it under cards \(1\), \(3\). Finally, remove cards labeled \(4\), \(5\) and place them at the bottom of the five card deck. This is PROJ \(\pi\to 13/2/45\).
The **Tsetlin library** arises from the choice
\[w_{F}=\begin{cases}\theta_{i}&\text{if }F=i/[n]\setminus i\\ 0&\text{otherwise}\end{cases}.\]
That is, the walk on \(S_{n}\) is "choose label \(i\) with probability \(\theta_{i}\) and move this card to the top."
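As a quick sanity check, one can simulate this move-to-top chain and compare the empirical stationary frequencies with the Luce formula \(\pi(\sigma)=\prod_{i}\theta_{\sigma(i)}/(\theta_{\sigma(i)}+\cdots+\theta_{\sigma(n)})\). The following Python sketch is our own illustration:

```python
import random, itertools
from collections import Counter

theta = [0.5, 0.3, 0.2]                    # weights theta_i (assumed normalized)
n = len(theta)

def luce(sigma):                           # Luce measure of a permutation
    p, tail = 1.0, 0.0
    for i in reversed(sigma):
        tail += theta[i]
        p *= theta[i] / tail
    return p

deck, trials = list(range(n)), 300_000
counts = Counter()
for _ in range(trials):
    i = random.choices(range(n), weights=theta)[0]
    deck.remove(i)
    deck.insert(0, i)                      # move card i to the top
    counts[tuple(deck)] += 1

for sigma in itertools.permutations(range(n)):
    print(sigma, round(counts[sigma] / trials, 4), round(luce(sigma), 4))
```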
**Riffle shuffling** arises from
\[w_{F}=\begin{cases}\frac{1}{2^{d}}&\text{if }F=S/[n]\setminus S\text{ for }S \subseteq[n]\\ 0&\text{otherwise}\end{cases}.\]
Another way to say this - label each of \(d\) cards in the current deck with a fair coin flip, remove all cards labeled "heads" keeping them in their same relative order, and place them on top. This is exactly "inverse riffle shuffling," the inverse of the Gilbert-Shannon-Reeds model studied by Bayer-Diaconis [6].
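In code, one step of inverse riffle shuffling is just a fair coin flip per card (a small sketch of the description above):

```python
import random

def inverse_riffle(deck):
    flips = [random.random() < 0.5 for _ in deck]  # "heads" labels, one per card
    heads = [c for c, h in zip(deck, flips) if h]  # keep relative order
    tails = [c for c, h in zip(deck, flips) if not h]
    return heads + tails                           # heads placed on top

print(inverse_riffle(list(range(10))))
```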
There are hundreds of other hyperplane arrangements where the chambers are labeled by natural combinatorial objects, and there are choices of face weights so that the walk is a natural object to study. Indeed, any finite reflection group leads to a hyperplane arrangement with \(H_{\mathbf{v}}\) being the hyperplane orthogonal to the vector \(\mathbf{v}\) determining the reflection. Any finite graph leads to a "graphical arrangement." For a wonderful exposition, see Stanley [42].
As said, the Markov chains \(\kappa(C,C^{\prime})\) admit a complete theory with known eigenvalues and rates of convergence. We will not spell this out here; see [13], but turn to the main object of interest - the stationary distribution.
Let \(\mathcal{A}\) be a general arrangement with chosen face weights \(\{w_{F}\}_{F\in\mathcal{F}}\) and \(\kappa(C,C^{\prime})\) the associated Markov chain on \(\mathcal{C}\), the chambers of the arrangement. A probability distribution \(\pi\) on \(\mathcal{C}\) (so \(\pi(C)\geq 0\) and \(\sum_{C}\pi(C)=1\)) is stationary for \(\kappa\) if \(\sum_{C}\pi(C)\kappa(C,C^{\prime})=\pi(C^{\prime})\); thus \(\pi\) can be thought of as a left eigenvector with eigenvalue \(1\). When does a unique such \(\pi\) exist?
**Theorem 5.3** (Brown-Diaconis).: _Call \(\{w_{F}\}\)**separating** if they are not all supported in a single hyperplane (that is, for each \(H\in\mathcal{A}\) there is a face \(F\not\subset H\) with \(w_{F}>0\)). Then \(\kappa\) has a unique stationary distribution \(\pi(C)\) if and only if \(\{w_{F}\}\) are separating._
This \(\pi\) is the analog of the Luce model and becomes the Luce model for the braid arrangement as above. The following result gives a "weighted sampling without replacement characterization" of \(\pi(C)\).
**Theorem 5.4** (Brown-Diaconis).: _Suppose \(\{w_{F}\}\) are separating. The following algorithm generates a pick from \(\pi(C)\):_
* _place all the faces \(F\), with their weights \(w_{F}\), in an urn._
* _draw them out, without replacement, with probability proportional to size (relative to what is left)._
* _say this results in the ordered list_ \(F_{1},F_{2},\ldots,F_{|\mathcal{F}|}\)_._
* _from any starting chamber_ \(C\) _(the choice does not matter), project on_ \(F_{|\mathcal{F}|}\)_, then on_ \(F_{|\mathcal{F}|-1}\)_, and so on until_ \(F_{1}\)_. The resulting chamber is exactly distributed as_ \(\pi(C)\)_._
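For the Tsetlin library the urn algorithm simplifies pleasantly: only the faces \(i/[n]\setminus i\) carry weight \(\theta_{i}\), the draw from the urn is a size-biased permutation of the labels, and each projection is a "move to top." A minimal Python sketch (our illustration) of Theorem 5.4 in this case:

```python
import random

def sample_stationary(theta):
    """One exact draw from pi via weighted sampling without replacement."""
    labels, w = list(range(len(theta))), list(theta)
    order = []
    while labels:
        i = random.choices(range(len(labels)), weights=w)[0]
        order.append(labels.pop(i))
        w.pop(i)
    # Projecting on F_n, then F_{n-1}, ..., F_1 moves order[-1] to the top first
    # and order[0] to the top last, so the deck ends up exactly in drawn order.
    # Zero-weight faces can be ignored: their projections are overridden.
    return order

print(sample_stationary([0.5, 0.3, 0.2]))
```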
Of course, for the Tsetlin library, this is just the Luce measure on permutations. The following subsection delineates the few examples where something can be said about \(\pi\).
### Understanding \(\pi\)
Suppose a group of orthogonal transformations acts transitively on the chambers \(\mathcal{C}\), preserving \(\kappa(C,C^{\prime})\). Then, \(\pi(C)\) is uniform over \(\mathcal{C}\) (assuming the weights are separating). Examples include riffle shuffles, the Ehrenfest urn, and "random to top" (the Tsetlin library with \(\theta_{i}=\frac{1}{n},1\leq i\leq n\)). For more on this, see [36].
Simple features of \(\pi\) can sometimes be calculated directly. See Pike [36] and its references.
Aside from the present paper, the only other examples that have been carefully studied are in the following graph coloring problems.
#### 5.3.1 Graph coloring
Let \(G\) be a connected and undirected simple graph. Let \(\mathcal{X}\) be the set of 2-colorings (say by \(\pm\)) of the vertex set of \(G\). Define a Markov chain on \(\mathcal{X}\) by
* from \(x\in\mathcal{X}\)
* pick an edge \(e\in G\) uniformly at random
* change the two endpoints of \(e\) in \(x\) to both be \(+\) or both be \(-\), each with probability \(\frac{1}{2}\).
Thus "neighbors are inspired to match, at random times." This is a close cousin of standard particle systems such as the voter model. All the theory works. The process is a hyperplane walk for the Boolean arrangement of dimension \(D\), where \(D\) denotes the number of edges in the graph \(G\). All eigenvalues and rates of convergence are easily available.
The only thing open is
"what can be said about the stationary distribution?"
To understand the question, suppose the graph is an \(n\)-point path (vertices \(1,2,\ldots,n\), with an edge between consecutive vertices).
The distribution \(\pi\) is far from uniform. The all-\(+\) (or all-\(-\)) coloring stays put with chance \(\frac{1}{2}\) at each step, but \(+-+-\cdots\) is impossible. Of course, \(\pi(x)\) is invariant under switching \(+\) and \(-\). It is easy to show that, under \(\pi\), the \(\pm\) process is a one-dependent point process (see [10]). This means various central limit theorems are available.
How much more likely is "all \(+\)" than "many alternations"? This problem was carefully studied in a difficult paper by Chung and Graham [18] (see also [13]). They show that, under \(\pi\), all \(+\) (or all \(-\)) has chance of order \(C/2^{n}\), while colorings with many alternations have chance of order \(C^{\prime}/n!\). Very nice systems of recursive differential equations appear.
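A short simulation (our own illustration) makes these contrasts visible on a small path: the chain below tracks the number of sign changes along the path under the edge dynamics.

```python
import random
from collections import Counter

# Edge chain on an n-point path with +/- colorings.  The perfectly alternating
# coloring is unreachable (every move equalizes the two endpoints of an edge),
# while the constant colorings carry substantial mass.
n, steps = 5, 500_000
x = [1] * n
changes = Counter()
for _ in range(steps):
    e = random.randrange(n - 1)               # pick an edge uniformly at random
    x[e] = x[e + 1] = random.choice((-1, 1))  # recolor both endpoints together
    changes[sum(x[i] != x[i + 1] for i in range(n - 1))] += 1

for k in sorted(changes):                     # number of sign changes on the path
    print(k, changes[k] / steps)              # k = n-1 (alternating) never occurs
```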
The point is, even in the simplest case, understanding the stationary distribution leads to interesting mathematics. We offer the present paper in this spirit.
### Semigroups and beyond
The past ten years have shown yet broader generalization of the Tsetlin library. Kenneth Brown extended it to idempotent semigroups (allowing walks on the chambers of a building) [11]. |
2307.15301 | Attentive Multimodal Fusion for Optical and Scene Flow | This paper presents an investigation into the estimation of optical and scene
flow using RGBD information in scenarios where the RGB modality is affected by
noise or captured in dark environments. Existing methods typically rely solely
on RGB images or fuse the modalities at later stages, which can result in lower
accuracy when the RGB information is unreliable. To address this issue, we
propose a novel deep neural network approach named FusionRAFT, which enables
early-stage information fusion between sensor modalities (RGB and depth). Our
approach incorporates self- and cross-attention layers at different network
levels to construct informative features that leverage the strengths of both
modalities. Through comparative experiments, we demonstrate that our approach
outperforms recent methods in terms of performance on the synthetic dataset
Flyingthings3D, as well as the generalization on the real-world dataset KITTI.
We illustrate that our approach exhibits improved robustness in the presence of
noise and low-lighting conditions that affect the RGB images. We release the
code, models and dataset at https://github.com/jiesico/FusionRAFT. | Youjie Zhou, Guofeng Mei, Yiming Wang, Fabio Poiesi, Yi Wan | 2023-07-28T04:36:07Z | http://arxiv.org/abs/2307.15301v1 | # Attentive Multimodal Fusion for Optical and Scene Flow
###### Abstract
This paper presents an investigation into the estimation of optical and scene flow using RGBD information in scenarios where the RGB modality is affected by noise or captured in dark environments. Existing methods typically rely solely on RGB images or fuse the modalities at later stages, which can result in lower accuracy when the RGB information is unreliable. To address this issue, we propose a novel deep neural network approach named FusionRAFT, which enables early-stage information fusion between sensor modalities (RGB and depth). Our approach incorporates self- and cross-attention layers at different network levels to construct informative features that leverage the strengths of both modalities. Through comparative experiments, we demonstrate that our approach outperforms recent methods in terms of performance on the synthetic dataset Flyingthings3D, as well as the generalization on the real-world dataset KITTI. We illustrate that our approach exhibits improved robustness in the presence of noise and low-lighting conditions that affect the RGB images. We release the code, models and dataset at [https://github.com/jiesico/FusionRAFT](https://github.com/jiesico/FusionRAFT).
Optical and scene flow, multimodal fusion, self- and cross-attention.
## I Introduction
Optical flow algorithms are essential for determining the motion of objects or regions within images between consecutive video frames. They generate a 2D vector field that describes the apparent movement of pixels over time. In contrast, scene flow focuses on estimating the pixel-level 3D motion in stereo or RGBD video frames [1]. These algorithms find wide applications in robotics [2, 3] and surveillance [4, 5]. Computing optical flow becomes particularly challenging in environments with non-informative textures or when scenes are captured under low-lighting conditions. To address these difficulties, deep learning methods have emerged as effective solutions for optical flow estimation, formulating the problem as an energy minimization task [6, 7, 8, 9]. Deep learning-based optical flow approaches have demonstrated significant improvements over traditional methods [10, 11, 12].
Several approaches utilize the computation of a correlation volume in the visible spectrum (RGB) to estimate the optical flow between two frames [6, 10, 11]. The correlation volume captures inter-frame similarity by taking the dot product of the corresponding convolutional feature vectors and can be generated through an end-to-end deep network. This deep network can be designed to minimize an underlying energy function. However, relying solely on RGB information can be limited in scenes affected by motion blurs, non-informative textures, or low illumination conditions. To address this limitation, some approaches have incorporated multimodal information. For example, depth or point cloud data can provide an alternative representation of the underlying scene structure. This multimodal information can be integrated through _late fusion_, where feature vectors are combined without intermediate information exchange [1, 13], or through exchanging information between branches while sacrificing the independence of the single-modality representation [12].
In this paper, we present a novel multimodal fusion approach, named FusionRAFT, for optical and scene flow estimation, specifically designed to handle data captured in noisy or low-lighting conditions, for example those that can be encountered in search and rescue applications [14]. Our approach introduces three key components to address these challenges. Firstly, we propose a feature-level fusion technique that seamlessly blends RGB and depth information using a shared loss function. Secondly, we introduce a self-attention mechanism that enhances the expressiveness of feature vectors by dynamically balancing the importance of features within each individual modality. Lastly, we incorporate an optimized cross-attention module that facilitates information exchange and balance between RGB and depth modalities. We integrate these new modules within RAFT [10] and RAFT-3D [1], using an application-oriented data augmentation strategy to learn robust feature representations that make optical and scene flow estimation effective in complex environments. We conduct extensive evaluations on standard optical and scene flow benchmarks, as well as on two new settings that we introduce to assess robustness against photometric noise and challenging illumination conditions. Our method achieves state-of-the-art performance on the synthetic dataset FlyingThings3D [15] and demonstrates superior generalization capabilities on the real-world dataset KITTI [16] without fine-tuning.
## II Related work
We provide a comprehensive analysis of the recent progress in optical flow estimation using deep learning, followed by an in-depth investigation into the integration of multimodal fusion techniques for improving flow estimation performance.
**Optical flow.** FlowNet [6] pioneered the use of deep neural networks to estimate optical flow as a supervised learning
task. FlowNet learns features across scales and abstraction levels to determine pixel correspondences. FlowNet inspired FlowNet2.0 [7], PWC-Net [17], MaskFlowNet [18] and LiteFlowNet3 [19]. FlowNet2.0 presents a warping operation and a method for stacking multiple networks through this operation [7]. PWC-Net utilizes pyramidal processing, warping, and a cost volume approach to improve both the size and accuracy of optical flow models [17]. MaskFlowNet incorporates an asymmetric occlusion-aware feature matching module, which learns to filter out occluded regions through feature warping without the need for explicit supervision [18]. LiteFlowNet3 tackles the challenge of estimating optical flow in the presence of partially occluded or homogeneous regions by using an adaptive affine transformation and a confidence map that identifies unreliable flow [19]. The confidence map is used to guide the generation of transformation parameters.
RAFT [10] is a per-pixel feature extraction approach that constructs multi-scale 4D correlation volumes for each pixel pair, and updates the flow field iteratively through a recurrent unit. Like FlowNet, RAFT has inspired GMA [20] and CRAFT [11]. GMA addresses occlusions by modeling image self-similarities by using a global motion aggregation module, a transformer-based approach for finding long-range dependencies between pixels in the first image, and a global aggregation of the corresponding motion features. CRAFT aims to estimate the large motion displacements through a semantic smoothing transformer layer that integrates the features of one image and a cross-attention layer that replaces the original dot-product operator for correlation used in RAFT. Unlike these approaches, we tackle the problem of estimating optical flow in situations of unreliable RGB information, such as noises and scarce illuminations, by appropriately fusing multiple modalities through self and cross attention within feature extraction layers.
**Multimodal fusion.** Multimodal fusion can be performed at various stages: early-, mid-, and late-fusion. In early-fusion, multiple channels are created within the network to process multiple modalities together [21]. Mid-fusion maintains different branches for each modality and then merges the corresponding features at the end of the network [22, 23]. In late-fusion, the network is trained on each modality separately and then fuses the results from the independent branches [24]. RAFT [10], GMA [20], and CRAFT [11] estimate the relationships between two consecutive frames using RGB images. Inspired by multimodal fusion, some of these works have been improved to compute both scene and optical flow by utilizing additional modalities such as depth, and point clouds.
#### Ii-A1 RGB + Point Cloud Data.
DeepLiDARFlow [13] exhibits improved performance in challenging conditions, such as reflective surfaces, poor illumination, and shadows. Images and point clouds are processed by using multi-scale feature pyramid networks. Late-fusion based on differentiable confidence volumes produces the fused features. CamLiFlow [12] improves upon DeepLiDARFlow by fusing dense image features and sparse point features more effectively. Instead of late-fusion, CamLiFlow adopts a multi-stage, bidirectional fusion strategy, in which the two modalities are learned in separate branches using modality-specific architectures. CamLiRAFT [25] further improves the performance based on the RAFT [10] framework, leading to superior results compared to CamLiFlow [12]. Our method differs from previous methods in that it ensures the independence of each modality through the use of two separate branches and balances the information between the modalities through multi-stage information exchange.
#### Ii-A2 RGB + depth
RAFT-3D [1] extends RAFT to estimate both optical and scene flow from RGBD data. RGB images serve as inputs to the feature network, where a 4D correlation volume is constructed and a soft grouping of pixels into rigid objects is formed with the aid of depth information. Unlike RAFT [10], RAFT-3D employs late-fusion with the depth information and the RGB features in the prediction module, improving the stability of flow prediction. However, RAFT's feature extraction method may not sufficiently capture the rich 3D structural information. To address this, our approach employs early-fusion, in which features are extracted from both RGB and depth information, enabling stable estimation even in cases where RGB information is unreliable.
## III Our approach
We present a Multimodal Feature Fusion (MFF) Encoder that performs early fusion of RGB and depth modalities to improve the estimation of both optical and scene flow under noisy or poor lighting conditions. Our encoder is flexible and can be integrated into flow estimation frameworks by replacing their original feature encoder. To achieve this, we employ self-attention, cross-attention, and Multimodal Transfer Module (MMTM) [26]. We extract low-level features from each modality and improve their expressivity using self-attention. Cross-attention enables the network to attend to the most informative modality. MMTM is used to further fuse the attended features that are computed from the two modalities. Fig. 1(a) shows the architecture of our encoder.
### _Multimodal Feature Fusion Encoder_
The Multimodal Feature Fusion Encoder takes a pair of consecutive RGBD frames \((P^{t},\ P^{t+1})\) at time \(t\) as input. Each frame \(P^{t}=\{I^{t},Z^{t}\}\) is composed of a RGB image \(I^{t}\) and a depth image \(Z^{t}\).
We first obtain low-level features \(\mathbf{F}_{r}^{t}\in\mathbb{R}^{W\times H\times D}\) and \(\mathbf{F}_{d}^{t}\in\mathbb{R}^{W\times H\times D}\) from each modality with convolutional blocks, where we use the subscript \(r\) to represent the RGB branch and \(d\) for the depth branch (Fig. 1(a)). \(D\) is the feature dimension and \(W\times H\) is the resolution of the features.
**Self-attention.** The local features \(\mathbf{F}_{r}^{t}\) and \(\mathbf{F}_{d}^{t}\) are obtained with convolutions that have limited receptive fields, therefore we model global structures by establishing long-range dependencies through a self-attention module (\(S_{\theta}(\cdot)\) in Fig. 1(a)). To mitigate the high computational cost of self-attention, we downsample \(\mathbf{F}_{r}^{t}\in\mathbb{R}^{N\times D}\) and \(\mathbf{F}_{d}^{t}\in\mathbb{R}^{N\times D}\) to obtain \(\bar{\mathbf{F}}_{r}^{t}\) and \(\bar{\mathbf{F}}_{d}^{t}\) via \(3\times 3\) and \(5\times 5\) max-pooling layers. With these downsampled features, we use a multi-head attention layer with four parallel heads to process \(\mathbf{F}_{r}^{t}\) and \(\bar{\mathbf{F}}_{r}^{t}\) (or \(\mathbf{F}_{d}^{t}\) and \(\bar{\mathbf{F}}_{d}^{t}\)) and obtain \(\hat{\mathbf{F}}_{k}^{t}\):
\[\begin{split}\hat{\mathbf{F}}_{k}^{t}&\leftarrow S_{\theta}(\mathbf{F}_{k}^{t},\bar{\mathbf{F}}_{k}^{t})\\ &=\mathbf{F}_{k}^{t}+\text{MLP}\left(\sigma\left(\mathbf{W}_{Q}\mathbf{F}_{k}^{t}\left(\mathbf{W}_{K}\bar{\mathbf{F}}_{k}^{t}\right)^{\top}/\sqrt{D}\right)\mathbf{W}_{V}\bar{\mathbf{F}}_{k}^{t}\right),\end{split} \tag{1}\]
where \(k\in\{r,d\}\) and \(\sigma\) is the _softmax_ function. \(D\) is the feature dimension. \(\mathbf{W}_{Q}\in\mathbb{R}^{N\times D}\), \(\mathbf{W}_{K}\in\mathbb{R}^{J\times D}\) and \(\mathbf{W}_{V}\in\mathbb{R}^{J\times D}\) are the query, key and value matrices, where \(N=W\times H\) and \(J=(W\times H)/(3\times 5)\). \(\text{MLP}(\cdot)\) denotes a three-layer fully connected network with instance normalization [27] and ReLU [28] activation after the first two layers.
**Cross-attention.** We promote information exchange between the two modalities via cross-attention, which we implement through the network \(C_{\theta}(\cdot)\) (Fig. 1(a)). Attention signals from one modality (e.g., RGB) emphasize the features of the other modality (e.g., depth), and vice versa. Given the self-attended features \(\hat{\mathbf{F}}_{r}^{t}\in\mathbb{R}^{N\times D}\) and \(\hat{\mathbf{F}}_{d}^{t}\in\mathbb{R}^{N\times D}\), we again apply \(3\times 3\) and \(5\times 5\) max-pooling to generate the downsampled feature maps \(\bar{\hat{\mathbf{F}}}_{r}^{t}\) and \(\bar{\hat{\mathbf{F}}}_{d}^{t}\). We denote the transformed features obtained by cross-attention as \(\check{\mathbf{F}}_{r}^{t}\in\mathbb{R}^{N\times D}\) and \(\check{\mathbf{F}}_{d}^{t}\in\mathbb{R}^{N\times D}\), computed via
\[\begin{split}\check{\mathbf{F}}_{r}^{t}&\leftarrow C_{\theta}(\hat{\mathbf{F}}_{r}^{t},\bar{\hat{\mathbf{F}}}_{d}^{t})\\ &=\hat{\mathbf{F}}_{r}^{t}+\text{MLP}\left(\sigma\left(\mathbf{W}_{Q}\hat{\mathbf{F}}_{r}^{t}\left(\mathbf{W}_{K}\bar{\hat{\mathbf{F}}}_{d}^{t}\right)^{\top}/\sqrt{D}\right)\mathbf{W}_{V}\bar{\hat{\mathbf{F}}}_{d}^{t}\right),\end{split} \tag{2}\]
where \(\mathbf{W}_{Q}\in\mathbb{R}^{N\times D}\), \(\mathbf{W}_{K}\in\mathbb{R}^{J\times D}\) and \(\mathbf{W}_{V}\in\mathbb{R}^{J\times D}\) are the query, key and value matrices of the cross-attention block. This block is also applied in the reverse direction so that information flows in both directions, i.e., RGB\(\rightarrow\)depth and depth\(\rightarrow\)RGB.
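The following PyTorch sketch is our own simplified rendition of Eqs. (1)-(2) (a single projection set and no instance normalization), not the authors' released code; passing the same feature map as query and key/value source gives self-attention, while passing the other modality gives cross-attention.

```python
import torch
import torch.nn as nn

class DownsampledAttention(nn.Module):
    """Attention with max-pooled keys/values, so J = (W*H)/(3*5) << N = W*H."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool = nn.MaxPool2d(kernel_size=(3, 5))   # assumed pooling layout
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, q_feat, kv_feat):
        # q_feat, kv_feat: (B, D, H, W); kv_feat == q_feat -> self-attention,
        # kv_feat from the other modality -> cross-attention.
        B, D, H, W = q_feat.shape
        q = q_feat.flatten(2).transpose(1, 2)               # (B, N, D)
        kv = self.pool(kv_feat).flatten(2).transpose(1, 2)  # (B, J, D)
        out, _ = self.attn(q, kv, kv)                       # softmax(QK^T/sqrt(D))V
        out = self.mlp(out).transpose(1, 2).reshape(B, D, H, W)
        return q_feat + out                                 # residual connection
```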
**Multimodal Transfer Module.** Because our architecture operates with multimodal information, we further promote information exchange between modalities after attention. Let \(M_{\theta}(\cdot)\) be the Multimodal Transfer Module [26] we use to improve the balance between RGB and depth information (Fig. 1(a)). Let \(\check{\mathbf{F}}_{r}^{t}\in\mathbb{R}^{N\times D_{M}}\) and \(\check{\mathbf{F}}_{d}^{t}\in\mathbb{R}^{N\times D_{M}}\) be the input multimodal features to MMTM, and \(\tilde{\mathbf{F}}_{r}^{t}\in\mathbb{R}^{N\times D_{M}}\) and \(\tilde{\mathbf{F}}_{d}^{t}\in\mathbb{R}^{N\times D_{M}}\) be the respective outputs. MMTM first squeezes the feature vectors into \(S_{\check{\mathbf{F}}_{r}^{t}}\) and \(S_{\check{\mathbf{F}}_{d}^{t}}\) via a global average pooling. MMTM then maps these tensors to a joint representation \(Z\) through concatenation and a fully-connected layer. Based on \(Z\), MMTM finally balances RGB and depth information by gating the channel-wise features:
\[\begin{split}& S_{\check{\mathbf{F}}_{k}^{t}}=\frac{1}{\prod_{j=1}^{K}N_{j}}\sum_{n_{1},\ldots,n_{K}}\check{\mathbf{F}}_{k}^{t}(n_{1},\cdots,n_{K}),\\ & Z=\mathbf{W}[S_{\check{\mathbf{F}}_{r}^{t}},S_{\check{\mathbf{F}}_{d}^{t}}]+b,\\ &\tilde{\mathbf{F}}_{k}^{t}=2\,\sigma(\mathbf{W}_{\tilde{\mathbf{F}}_{k}^{t}}Z)\odot\check{\mathbf{F}}_{k}^{t},\end{split} \tag{3}\]
where \([\cdot,\cdot]\) is the concatenation operator and \(k\in\{r,d\}\). \(N_{j}\) are the spatial dimensions of \(\check{\mathbf{F}}_{k}^{t}\) and \(D_{M}\) represents the number of channels of the features. \(\mathbf{W}\in\mathbb{R}^{D_{Z}\times 2D_{M}}\), \(\mathbf{W}_{\tilde{\mathbf{F}}_{k}^{t}}\in\mathbb{R}^{D_{M}\times D_{Z}}\) are the weights, and \(b\in\mathbb{R}^{D_{Z}}\) are the biases of the fully connected layers.

Fig. 1: Block diagram of FusionRAFT. (a) Our encoder architecture: RGB and depth frames are taken as inputs. The encoder network is a two-branch network with a transformer (self-attention plus cross-attention) and a Multimodal Transfer Module. (b) Optical flow and (c) scene flow architectures. Two consecutive RGBD frames are taken as inputs by the MFF for the feature encoder, and the first RGBD frame is taken as input by the MFF for the context encoder.
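A sketch of the MMTM gating of Eq. (3), again our own illustration rather than the reference implementation of [26]:

```python
import torch
import torch.nn as nn

class MMTM(nn.Module):
    """Squeeze both modalities, form a joint code Z, re-gate channels (Eq. (3))."""
    def __init__(self, dim, z_dim):
        super().__init__()
        self.joint = nn.Linear(2 * dim, z_dim)
        self.gate_r = nn.Linear(z_dim, dim)
        self.gate_d = nn.Linear(z_dim, dim)

    def forward(self, f_r, f_d):                        # (B, D, H, W) each
        s_r, s_d = f_r.mean(dim=(2, 3)), f_d.mean(dim=(2, 3))  # global avg pool
        z = self.joint(torch.cat([s_r, s_d], dim=1))
        g_r = 2 * torch.sigmoid(self.gate_r(z))[..., None, None]
        g_d = 2 * torch.sigmoid(self.gate_d(z))[..., None, None]
        return g_r * f_r, g_d * f_d                     # channel-wise gating
```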
### _Optical and scene flow estimation_
The inputs of optical and scene flow estimation are the feature vectors \([\tilde{\mathbf{F}}_{r}^{t},\tilde{\mathbf{F}}_{d}^{t}]\) and \([\tilde{\mathbf{F}}_{r}^{t+1},\tilde{\mathbf{F}}_{d}^{t+1}]\). By calculating the dot product of feature vectors between the inputs, a 4D correlation volume \(\mathbb{C}\) is generated:
\[\begin{split}& fnet(P^{t})=[\tilde{\mathbf{F}}_{r}^{t},\tilde{\mathbf{F} }_{d}^{t}]=[M_{\theta}(C_{\theta}(S_{\theta}(I^{t}),S_{\theta}(Z^{t})))],\\ &\mathbb{C}(P^{t},P^{t+1})=\langle fnet(P^{t}),fnet(P^{t+1})\rangle.\end{split} \tag{4}\]
where \(\langle\cdot,\cdot\rangle\) is the dot product operator. A four-layer pyramid \(\{\textbf{C}_{1},\textbf{C}_{2},\textbf{C}_{3},\textbf{C}_{4}\}\) is generated by reducing the last two dimensions of the correlation volume through pooling with kernels of size 1, 2, 4, and 8.
We compute 4D correlation volumes to estimate optical and scene flow [1, 10]. Through \(\{\textbf{C}_{1},\textbf{C}_{2},\textbf{C}_{3},\textbf{C}_{4}\}\), we iteratively estimate the dense displacement fields \(\{\textbf{f}_{est}^{1},\textbf{f}_{est}^{2},\ldots,\textbf{f}_{est}^{M}\}\) over \(M\) iterations to update the optical and scene flow. We train our network by computing the loss between the estimated flows and the ground-truth flow \(\textbf{f}_{gt}\) as
\[\mathcal{L}=\sum_{k=1}^{M}\gamma^{M-k}\Big{\|}\textbf{f}_{est}^{k}-\textbf{f}_{gt}\Big{\|}_{1}, \tag{5}\]
where, as the iteration \(k\) increases, the weight of each loss term increases exponentially with base \(\gamma\). Fig. 1(b,c) show how our Multimodal Feature Fusion Encoder is integrated in RAFT and RAFT-3D to estimate the optical flow and the scene flow, respectively. Our module can be integrated seamlessly and does not require any modification to RAFT and RAFT-3D's modules after the 4D correlation volume computation.
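In code, the correlation volume of Eq. (4) and the sequence loss of Eq. (5) may be written as follows (a sketch under our own conventions, e.g. the \(\sqrt{D}\) normalization is assumed):

```python
import torch

def correlation_volume(feat1, feat2):        # fused features, (B, D, H, W) each
    B, D, H, W = feat1.shape
    corr = torch.einsum('bdij,bdkl->bijkl', feat1, feat2)  # all-pairs dot product
    return corr / D ** 0.5                   # assumed normalization, as in RAFT

def sequence_loss(flow_preds, flow_gt, gamma=0.8):
    M = len(flow_preds)                      # iterative refinement steps
    return sum(gamma ** (M - k - 1) * (f - flow_gt).abs().mean()
               for k, f in enumerate(flow_preds))
```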
## IV Experiments
We compare FusionRAFT against state-of-the-art approaches on the FlyingThings3D [15] and KITTI [16] datasets. We design two experimental settings to mimic corrupted RGB images and poor lighting condition scenarios. We also evaluate on data we acquired with a RGBD sensor in various lighting conditions. We report both quantitative and qualitative results, and carry out ablation studies.
### _Experimental setup_
**Datasets.** FlyingThings3D [15] is split into _clean_ and _final_ sets containing dynamic synthetic scenes. The former is composed of 27K RGBD images including changing lighting and shading effects, while the latter is an augmented version of the former with simulated challenging motions and blurs. Each set contains train and test splits. Previous methods [1, 10, 12] exclude samples containing fast-moving objects during the evaluation. However, as such visual challenges are of interest to our problem, we use the _whole_ training set of FlyingThings3D and sample 1K RGBD image pairs from the _whole_ test set for the evaluation. KITTI consists of real-world scenes captured from vehicles in urban scenarios. Because the original dataset does not provide depth data, we use the disparity estimated by GA-Net [29] as in [1]. We exploit KITTI to assess the ability of our model and the compared ones in generalizing from synthetic to real data, without training or fine-tuning on any of KITTI's sequences. We use the training set of KITTI as our evaluation set since KITTI's test set is not publicly available. To further validate the performance of FusionRAFT in real-world scenarios, we collect an RGBD dataset using a Realsense D415 camera in an indoor office with moving people under three lighting setups, named Bright, Dimmed, and Dark. The Bright setting features bright lighting, where the moving objects are clearly visible. The Dimmed setting features dimmed lighting, where the moving objects can be observed with a lower visual quality. The Dark setting features very low lighting where the moving objects can be barely seen. We only qualitatively evaluate this dataset because we could not produce optical flow ground truth.
**Evaluation metrics.** We quantify the optical and scene flow results using conventional evaluation metrics [1, 10, 11]: for the optical flow we use \(\mathrm{AEPE_{2D}}\)(pixel), \(\mathrm{ACC_{1px}}\)(%) and \(\mathrm{F_{2D}^{11}}\)(%), for the scene flow we use \(\mathrm{AEPE_{3D}}\)(m), \(\mathrm{ACC_{0.05m}}\)(%), \(\mathrm{ACC_{0.10m}}\)(%) and \(\mathrm{F_{3D}^{11}}\)(%). \(\mathrm{AEPE_{2D}}\) measures the average end-point error (EPE) [10], which is an average value of all the 2D flow errors. \(\mathrm{AEPE_{2D}^{\mathrm{epe}<100}}\) measures the average end-point error (EPE) among the 2D flow errors that are less than 100 pixels. \(\mathrm{AEPE_{3D}}\) is the average Euclidean distance (EPE for 3D) between the ground-truth 3D scene flow and the predicted results. \(\mathrm{AEPE_{3D}^{\mathrm{epe}<1}}\) measures the average end-point error (EPE) among the 3D flow errors that are less than 1 meter. \(\mathrm{ACC_{1px}}\)[1] measures the portion of errors that are within a threshold of one pixel. \(\mathrm{ACC_{0.05m}}\)[1] measures the portion of errors that are within a threshold of 0.05 meters. \(\mathrm{ACC_{0.10m}}\)[1] measures the portion of errors that are within a threshold of 0.10 meters. \(\mathrm{MEAN_{AEPE}}\) and \(\mathrm{MEAN_{ACC}}\) are the average values of \(\mathrm{AEPE_{2D}^{\mathrm{all}}}\) and \(\mathrm{ACC_{1px}}\), respectively, calculated over FlyingThings3D-clean and FlyingThings3D-final. \(\mathrm{F_{2D}^{11}}\)[11] is the percentage of outlier pixels whose end-point error is \(>3\) pixels or \(5\%\) of the ground-truth flow magnitude. \(\mathrm{F_{3D}^{11}}\)[30] is the percentage of outlier pixels whose 3D Euclidean distance between the ground-truth 3D scene flow and the predicted one is \(>0.3\) m or \(5\%\) of the ground-truth flow magnitude.
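For reference, a minimal implementation of the 2D metrics (following the definitions above; the outlier test mirrors the stated "\(>3\) pixels or \(5\%\)" rule):

```python
import torch

def flow_metrics(pred, gt):                  # (B, 2, H, W) each
    epe = torch.norm(pred - gt, dim=1)       # per-pixel end-point error
    mag = torch.norm(gt, dim=1)              # ground-truth flow magnitude
    return {
        'AEPE_2D': epe.mean().item(),
        'ACC_1px': (epe < 1.0).float().mean().item(),
        'Fl_2D': ((epe > 3.0) | (epe > 0.05 * mag)).float().mean().item(),
    }
```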
**Evaluation settings.** Environments with poor light conditions lead to weak texture information that can compromise the stability of feature representation. Additive Gaussian noise can also affect optical and scene flow estimation. To assess the robustness, we design three experimental settings on the public FlyingThings3D and KITTI datasets: _Standard_: we use the original version of the dataset; _AGN_: we apply Additive Gaussian Noise on RGB images; _Dark_: we darken RGB images. In AGN we randomly sample noise values (\(\alpha\)) from a normal distribution centered at zero with a standard deviation equal to 35. In Dark we divide pixel values by a random factor \(\beta\sim\mathrm{U}(\{1,2,\cdots,9\})\).
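The two corruptions are simple to reproduce; a sketch of our AGN and Dark settings on 8-bit images:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_agn(rgb):                          # rgb: uint8 array of shape (H, W, 3)
    noisy = rgb.astype(np.float32) + rng.normal(0.0, 35.0, rgb.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def apply_dark(rgb):
    beta = rng.integers(1, 10)               # beta ~ U({1, ..., 9})
    return (rgb.astype(np.float32) / beta).astype(np.uint8)
```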
**Implementation details.** We implemented FusionRAFT in PyTorch with all modules initialized with random weights. We train our network for 100K iterations with the batch size of 6 on 3 Nvidia 3090 GPUs. During training, we set the initial learning rate at \(1.25\cdot 10^{-4}\) and use linear decay. We apply
MMTM sequentially \(N=3\) times as suggested in the original paper [26]. We set \(\gamma{=}0.8\) in Eq. (5) as in RAFT [10].
### _Comparisons_
We compare FusionRAFT against RGB methods for 2D optical flow estimation, i.e. RAFT [10], GMA [20], CRAFT [11], and Separable flow [31], and against methods for 3D scene flow estimation, i.e. RAFT-3D [1] and CamLiRAFT [25]. See Sec. II for the description of these methods.
#### Iv-B1 Quantitative results
Tab. I (top) reports optical flow results in the Standard setting. FusionRAFT-2D outperforms GMA by \(+1.56\%\) and \(+1.55\%\) in terms of \(\mathrm{ACC}_{\mathrm{1px}}\), and \(+0.91\) and \(+0.78\) in terms of \(\mathrm{AEPE}_{\mathrm{2D}}^{\mathrm{all}}\) in FlyingThings3D-clean and FlyingThings3D-final, respectively. FusionRAFT-3D outperforms RAFT-3D by \(+1.44\%\) and \(+1.42\%\) in terms of \(\mathrm{ACC}_{\mathrm{1px}}\), and \(+1.00\) and \(+0.88\) in terms of \(\mathrm{AEPE}_{\mathrm{2D}}^{\mathrm{all}}\). While RAFT-3D extracts features only from RGB images, our MFF encoder extracts features from both RGB and depth, producing more informative internal representations. FusionRAFT-3D outperforms CamLiRAFT by \(+3.86\%\) and \(+4.13\%\) in terms of \(\mathrm{ACC}_{\mathrm{1px}}\), and \(+0.45\) and \(+0.15\) in terms of \(\mathrm{AEPE}_{\mathrm{2D}}^{\mathrm{all}}\).
Tab. I (middle) reports optical flow results in the AGN setting. FusionRAFT-2D outperforms CRAFT by \(+3.94\%\) and \(+3.96\%\) in terms of \(\mathrm{ACC}_{\mathrm{1px}}\), and \(+1.15\) and \(+1.29\) in terms of \(\mathrm{AEPE}_{\mathrm{2D}}^{\mathrm{all}}\) in FlyingThings3D-clean and FlyingThings3D-final, respectively. FusionRAFT-3D outperforms RAFT-3D by \(+2.16\%\) and \(+2.45\%\) in terms of \(\mathrm{ACC}_{\mathrm{1px}}\), and \(+0.51\) and \(+0.61\) in terms of \(\mathrm{AEPE}_{\mathrm{2D}}^{\mathrm{all}}\).
and FusionRAFT-3D consistently produce smaller \(\mathrm{AEPE_{2D}^{all}}\) values than the other methods, which can also be verified visually by the fewer magenta areas produced by our models. Fig. 4 shows the flow estimation on our acquired indoor dataset with RAFT, GMA, RAFT-3D, CamLiRAFT, and FusionRAFT. In the Bright setting (top), all compared methods produce good-quality results. In the Dimmed setting (middle), RAFT, GMA, and CamLiRAFT show low-quality results, which we can observe from the poor edges produced by the moving objects. In the Dark setting, FusionRAFT is the only method that produces results where the moving objects are distinguishable.
### _Ablation study_
Tab. III reports the ablation study on self-attention (SA), cross-attention (CA), and Multimodal Transfer Module (MMTM) on the FlyingThings3D dataset in both Standard and Dark settings. Overall, we can observe that all the components we added provide incremental contributions that improve the quality of the output optical flow compared to the RGB baseline. SA and CA consistently improve performance (see Exp 3 vs 6 vs 8, 4 vs 7 vs 9, and similarly for the Dark setting). Applying SA to both depth and RGB is better than applying it to the RGB branch only (see Exp 5 vs 6 for the Standard setting, and 14 vs 15 for the Dark setting). MMTM fusion consistently outperforms the simple concatenation of RGB and depth branches in the Dark setting (see Exp 12 vs 13, 15 vs 16, 17 vs 18). There is one case in the Standard setting where this does not hold (see Exp 6 vs 7). In general, SA focuses on intra-modality relationships while CA focuses on inter-modality relationships. MMTM further exchanges information across modalities at a deeper level. The best performance is achieved when all the modules are activated.
Fig. 3: Examples of optical flow estimation error in the KITTI dataset. The more vivid the magenta, the higher the error. FusionRAFT-2D handles optical flow estimation better than all RGB-based methods. FusionRAFT-3D outperforms RAFT-3D with a smaller AEPE. Best viewed in color.
Fig. 2: Examples of optical flow estimation error in the FlyingThings3D-clean dataset. The more vivid the magenta, the higher the error. FusionRAFT-2D method handles optical flow estimation better than RGB-based methods, while FusionRAFT-3D method outperforms RAFT-3D with a smaller AEPE. Best viewed in color.
### _Computation analysis_
We measure the number of parameters, Floating-Point Operations (FLOPs), and inference time of all compared methods using FlyingThings3D. We conducted the experiments with a Nvidia 3090 GPU (24G) and I9-10900 CPUs, and report the results in Tab. IV. Although FusionRAFT-2D has the second-largest number of parameters, its FLOP count and inference time lie in between those of the other optical flow methods. The inference time of FusionRAFT-3D is slightly higher than that of CamLiRAFT, although our number of parameters is one order of magnitude larger than that of CamLiRAFT. From the per-component analysis of FusionRAFT-2D in Tab. V, we can observe that Self-attention and Cross-attention have a higher computational cost than MMTM and the two-branch encoder. The most time-consuming component is _Others_, which includes all the other modules needed to compute the optical flow.
## V Conclusions
We presented FusionRAFT, a novel approach for optical and scene flow estimation. FusionRAFT improves feature extraction with an early-fusion Multimodal Feature Fusion (MFF) Encoder. MFF attends to informative features and enables information exchange within and across modalities by using self-attention, cross-attention, and the Multimodal Transfer Module. Through experimental validation, we showed that FusionRAFT generates more stable and informative feature descriptions by exploiting the different modalities. FusionRAFT achieves state-of-the-art results in the Standard setting, but also in our newly introduced AGN and Dark settings where RGB information is corrupted. Future research directions may include the integration of FusionRAFT in robotic systems for autonomous navigation.
Fig. 4: Examples of optical flow estimation error in our real-world dataset. (top) Bright setting, (middle) Dimmed setting, (bottom) Dark setting. FusionRAFT method can handle also the Dark setting (see sharper flow boundaries). Best viewed in color. |
2306.03892 | Conformal anomaly and gravitational pair production | We argue that the rate density of particle pair production $\Gamma$ in
background fields in conformal field theories is determined by the conformal
anomaly and related to anomalous trace of the energy-momentum tensor as $\Gamma
= (\pi/2) \langle T^\mu_{\ \mu}\rangle$ if the trace is positive (and $\Gamma =
0$ otherwise). This formula perfectly reproduces (presumably, non-Hawking)
radiation generated by static gravitational fields in the absence of an event
horizon via a new evaporation mechanism suggested recently. Our relation also
correctly describes the one-loop Schwinger pair creation in massless (scalar
and spinor) quantum electrodynamics. It also accurately points to the Savvidi
instability of the gluonic vacuum towards the formation of the chromomagnetic
condensate. Photon and neutrino pair production are also discussed. | M. N. Chernodub | 2023-06-06T17:53:31Z | http://arxiv.org/abs/2306.03892v1 | # Conformal anomaly and gravitational pair production
###### Abstract
We argue that the rate density of particle pair production \(\Gamma\) in background fields in conformal field theories is determined by the conformal anomaly and related to anomalous trace of the energy-momentum tensor as \(\Gamma=(\pi/2)\langle T^{\mu}_{\mu}\rangle\) if the trace is positive (and \(\Gamma=0\) otherwise). This formula perfectly reproduces (presumably, non-Hawking) radiation generated by static gravitational fields in the absence of an event horizon via a new evaporation mechanism suggested recently. Our relation also correctly describes the one-loop Schwinger pair creation in massless (scalar and spinor) quantum electrodynamics. It also accurately points to the Savvidi instability of the gluonic vacuum towards the formation of the chromomagnetic condensate. Photon and neutrino pair production are also discussed.
**Introduction.** Signatures of vacuum instability in a strong electric field were first found in work by Sauter on the Klein paradox [1]. This effect has been recognized and developed further by Heisenberg and Euler [2] and later formalized in terms of a pair production process in QED by Schwinger [3; 4].
The physical interpretation of this instability, often called the Schwinger effect, is linked to the quantum vacuum fluctuations in which virtual pairs of electrons \(e^{-}\) and positrons \(e^{+}\) are constantly created to be annihilated shortly later. In a sufficiently strong background electric field, the created \(e^{+}e^{-}\) particles are taken away in opposite directions by the field. As they are spatially separated, they cannot annihilate and become real particles. Thus, a sufficiently strong electric field creates matter (\(e^{+}e^{-}\) pairs) from the vacuum.
A similar phenomenon exists in gravitational fields near black holes. A black hole emits the Hawking radiation [5; 6], which can be associated with the particle tunneling process [7] in which one particle from the pair, created in the vicinity of the event horizon, gets swallowed by the hole while the other particle has sufficient energy to escape to infinity. The escaping particles form the outgoing energy flux, which diminishes the mass of the black hole and, therefore, leads to the black hole evaporation. Due to a nonlocality of the tunneling process, this effect operates in an extended vicinity above the black hole event horizon, thus creating the notion of the quantum atmosphere [8] (see also [9]). It was recently suggested that such quantum atmospheres could possess nontrivial thermodynamic features that can be probed in condensed matter experiments [10].
In 1+1 spacetime dimensions, the Hawking radiation can be related to a gravitational (Einstein) anomaly which implies a non-conservation of energy-momentum of a chiral particle in a curved spacetime [11]. The anomaly appears due to quantum fluctuations when classical symmetries are inconsistent with the quantization procedure [12]. In 1+1 and 3+1 spacetime dimensions, the Hawking effect can also be interpreted [13] in terms of conformal (or trace) anomalies [14; 15; 16]1.
Footnote 1: It is worth mentioning about terminology used in the paper. The terms “scale” and “conformal” in relation to the symmetries of the system and the quantum anomalies are often used interchangeably in the literature. Mathematically, these concepts correspond to different symmetries as the requirement of local conformal invariance is much stronger than the condition of global scale invariance. Physically, the distinction between the scale and conformal concepts is frequently ignored because all physically relevant scale-invariant field theories in four spacetime dimensions also exhibit conformal invariance [17]. Moreover, since these anomalies are seen as a non-zero trace of the energy-momentum tensor, they are also called “trace” anomalies.
In addition, one can argue that the particle creation in a static gravitational field can also produce particles even without the event horizon [18]. In this scenario, the virtual pairs of particles are separated by local tidal forces and become real particles, similar to what happens in the Schwinger effect. Some of these real particles will fall to the gravitating body and will later be recaptured, while other particles will escape to infinity and create, similarly to the Hawking effect, an outgoing flux of matter [18].
In our article, we argue that in the off-event-horizon mechanism of Ref. [18] of particle pair production, the creation rate in the background gravitational field can be directly related to the conformal anomaly. We will show that our approach works for the Schwinger pair-production mechanism in QED and is consistent with the Savvidi vacuum instability in non-Abelian gauge theories [19].
We set \(\hbar=c=1\) everywhere in the article and work in the mostly-plus metric convention.
**Particle production and effective action.** The rate density of particle production events \(dN/dt=\Gamma\) is determined by the imaginary part, \(\Gamma=2\operatorname{Im}\mathcal{L}_{\text{eff}}\), of the Lagrangian \(\mathcal{L}_{\text{eff}}\) associated with the effective action [3],
\[W=\int d^{4}x\sqrt{-g}\mathcal{L}_{\text{eff}}\,, \tag{1}\]
which takes into account quantum corrections. The rate \(\Gamma\) has a sense of non-persistence of vacuum due to pair
creation [3; 4].
In our paper, we argue that in _conformal_ field theories, the pair-creation rate \(\Gamma\) in a background gravitational and gauge (electromagnetic or gluon) fields can be related to the conformal (trace) anomaly:
\[\Gamma=\frac{\pi}{2}\big{\langle}T^{\mu}_{\ \mu}\big{\rangle}\,, \tag{2}\]
where \(\big{\langle}T^{\mu}_{\ \mu}\big{\rangle}\equiv\big{\langle}T^{\mu}_{\ \mu}\big{\rangle}_{\rm an}\) is the anomalous trace of the energy-momentum tensor \(T^{\mu\nu}\). Relation (2) is quantum because, in classical conformal theories in an even number of spacetime dimensions, the trace of the stress-energy tensor vanishes identically, \((T^{\mu}_{\ \mu})_{\rm cl}\equiv 0\). Quantum fluctuations can violate this identity, \(\big{\langle}T^{\mu}_{\ \mu}\big{\rangle}\neq 0\), hence the term "trace anomaly" or "conformal anomaly".
In order to keep Eq. (2) as simple as possible, we used the convention that this equation has a relation to the pair production if and only if \(\big{\langle}T^{\mu}_{\ \mu}\big{\rangle}\geqslant 0\). Otherwise, \(\Gamma<0\) is equivalent to \(\Gamma\equiv 0\) because a negative pair production rate does not lead to the production of pairs.
The particle production rate of \(N\) massless scalar degrees of freedom in the curved \(d=3+1\) dimensional spacetime (described by the metric \(g_{\mu\nu}\)) in the presence of the classical electromagnetic field (characterized by the field strength \(F_{\mu\nu}\)) has been found in the recent work [18]2:
Footnote 2: We have slightly re-arranged and combined the original expressions of Ref. [18] for further convenience.
\[\Gamma_{N\rm sc}=\frac{N}{32\pi} \bigg{[}\frac{1}{180}\big{(}R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha \beta}-R_{\mu\nu}R^{\mu\nu}\big{)} \tag{3}\] \[\qquad+\frac{1}{2}\left(\frac{1}{6}-\xi\right)^{2}R^{2}-\frac{q^ {2}}{12}F_{\mu\nu}F^{\mu\nu}\bigg{]}\,,\]
where the curved background is expressed via the Riemann tensor \(R_{\mu\nu\alpha\beta}\), the Ricci tensor \(R_{\mu\nu}=R^{\alpha}_{\ \mu\alpha\nu}\), and the scalar curvature \(R\equiv R^{\mu}_{\ \mu}\). The subscript "\(N\rm sc\)" in Eq. (3) stands for \(N\) scalar degrees of freedom (for example, \(N=1\) for a neutral scalar field and \(N=2\) for a complex scalar field).
The quantity \(q\) in Eq. (3) is the electric charge of the scalar particle minimally coupled to electromagnetism. A neutral (\(q=0\)) scalar field carrying one degree of freedom (\(N=1\)) is described by the following Lagrangian:
\[\mathcal{L}=-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\frac{1}{2}\xi R \phi^{2}-\frac{1}{2}m^{2}\phi^{2}\,, \tag{4}\]
where the parameter \(\xi\) controls the local coupling of the Ricci curvature scalar \(R\) to the scalar field. The conformally invariant massless theory corresponds to \(\xi=1/6\). For consistency with previous studies, we also add to Eq. (4) the mass term, which will be set to zero at the end of our considerations, \(m=0\). The charged (complex) scalar field carrying the elementary electric charge \(q=e\) has \(N=2\) degrees of freedom with corresponding modifications of Eq. (4).
The remarkable feature of the pair-production effect (3) is that it can take place in static gravitational fields, which immediately suggests that this effect is a Hawking-type of radiation associated with the presence of an event horizon [5; 6]. However, the pair production (3) takes place even in the absence of an event horizon (that is, not only for a black hole), thus indicating that this phenomenon is either an addition or a generalization of Hawking radiation [18].
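As a worked illustration of Eq. (3) (our own example, assuming the standard Schwarzschild invariants), consider the exterior of a Schwarzschild mass \(M\): there \(R_{\mu\nu}=0\) and \(R=0\), while \(R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}=48M^{2}/r^{6}\) in units \(G=c=1\), so for a neutral field only the first term survives and \(\Gamma=NM^{2}/(120\pi r^{6})\).

```python
import numpy as np

# Gamma = N/(32 pi) * (1/180) * 48 M^2 / r^6 = N M^2 / (120 pi r^6)
def gamma_schwarzschild(N, M, r):
    kretschmann = 48.0 * M ** 2 / r ** 6     # Ricci terms vanish in vacuum
    return N / (32 * np.pi) * kretschmann / 180.0

M = 1.0
for r in (2.0, 3.0, 10.0):                   # the horizon sits at r = 2M
    print(r, gamma_schwarzschild(1, M, r))   # production falls off as r^-6
```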
**Effective action, trace anomaly, and pair production.** It is instructive first to start from the simplest case of the scalar field for which the trace anomaly has been elaborated in great detail in Ref. [20]. Our relation (2) between the conformal (trace) anomaly and the off-event-horizon particle production can be deduced by matching the anomalous term in the one-loop effective action \(W\) represented as an integral over the proper time \(s\) of Ref. [20] with the representation of the same action in terms of the spectral parameter \(s\) in the heat-kernel approach of Ref. [18] based on the Barvinsky-Vilkovisky expansion [21].
The one-loop action functional \(W\) is given by a formal divergent expression, \(W=(i/2)\ln\det G^{-1}\), where \(G(x,x^{\prime})\equiv\langle iT\big{(}\phi(x)\phi(x^{\prime})\big{)}\rangle\) represents the Green function associated with the quadratic Lagrangian (4):
\[\Big{(}-\frac{1}{\sqrt{-g}}\partial_{\mu}\sqrt{-g}g^{\mu\nu} \partial_{\nu}+\xi R +m^{2}\Big{)}G(x,x^{\prime}) \tag{5}\] \[=\delta(x-x^{\prime})/\sqrt{-g}\,.\]
The functional \(W\) has a close relation to the expectation value of the energy-momentum tensor, \(\langle T^{\mu\nu}\rangle\), in its response, \(W\to W+\delta W\), to the metric variation, \(g_{\mu\nu}\to g_{\mu\nu}+\delta g_{\mu\nu}\), in (even) \(D\) spacetime dimensions:
\[\delta W=\frac{i}{2}{\rm Tr}\,G\delta G^{-1}=\int d^{D}x\,\sqrt{-g}\langle T^{ \mu\nu}\rangle\frac{1}{2}\delta g_{\mu\nu}\,, \tag{6}\]
thus giving access to the evaluation of the trace \(\big{\langle}T^{\mu}_{\ \mu}\big{\rangle}\), allowing us to uncover an eventual conformal (trace) anomaly.
The variation of the effective action (6) can be expressed in the proper-time representation of Schwinger and DeWitt [3; 22] (in notations of [20]):
\[\delta W=-\frac{i}{2}\delta{\rm Tr}\,\int_{0}^{\infty}\frac{ids}{is}e^{-isH}\,, \tag{7}\]
via a relativistic Hamiltonian-like operator \(H=\Delta+\xi R+m^{2}\), where a second-order differential operator \(\Delta\) represents the kinetic term, the coupling to the curvature \(R\) plays a role of an external potential, and \(m^{2}\) gives the mass term. The correct analytical properties
of Eq. (7) and similar subsequent relations are maintained by an appropriate complex continuation of the mass term, \(m^{2}\to m^{2}(1-i0^{+})\), silently assumed here.
The effective Lagrangian (1) takes the following form:
\[L_{\rm eff}=\frac{1}{2}\frac{1}{(4\pi)^{\frac{D}{2}}}\int_{0}^{\infty}\frac{ids} {(is)^{1+\frac{D}{2}}}e^{-im^{2}s}F(x,x;is;D)\,, \tag{8}\]
where \(F(x,x^{\prime};is;D)\) is the weight bi-scalar in the proper-time Green's function \(\langle x,s|x^{\prime},0\rangle=\langle x|e^{-isH}|x^{\prime}\rangle\) defined in a manner similar to Eq. (8). The Green's function satisfies the Schrödinger-like equation: \(-\frac{\partial}{\partial is}\langle x,s|x^{\prime},0\rangle=H\langle x,s|x^{\prime},0\rangle\), which gives a quantum-mechanical flavor to the whole proper-time formalism.
We will not dwell on the precise definition of the bi-scalar \(F\), which can be found in detail in Refs. [20; 22]. The key mathematical point of our arguments is that the function \(F\) allows for the power-series expansion in terms of the proper time \(s\) (omitting other arguments):
\[F=1+is\,\mathsf{f}_{1}+(is)^{2}\,\mathsf{f}_{2}+\ldots\,, \tag{9}\]
where, in four space-time dimensions, the \(O(s^{2})\) term captures the trace anomaly [20]. On the other hand, the \(O(s^{2})\) term in an identical3 expansion of the same 1-loop effective action over the proper time \(s\) has been shown in Ref. [18] to be associated with the (off-event-horizon) pair-production rate \(\Gamma\). The mentioned equivalence of the \(O(s^{2})\) terms allows us to identify the trace-anomalous origin of the pair production and eventually leads us to Eq. (2) as we discuss below.
Footnote 3: Taking into account the signs and \(i\)-th prefactors arising from the difference between Minkowski/Euclidean spacetimes employed in Refs. [18; 20] one finds that \(\mathsf{f}_{1}=(\frac{1}{2}-\xi)R\) term in Eq. (20) of the proper-time approach of Ref. [20] coincides precisely with the second, \(O(s)\) term under the integral in Eq. (S.17) of the heat-kernel expansion of Ref. [18]. Analogously, \(\mathsf{f}_{2}\) in Eq. (24) of Ref. [20] coincides precisely with purely gravitational contribution to the third, \(O(s^{2})\) term under the integral in Eq. (S.17) of Ref. [18]. The \(\mathsf{f}_{2}\) term in series (9) is also reproduced, up to irrelevant contact term \(\square R\), by the \(m=0\) expression in the square brackets of our Eq. (13) below. Notice that our functions \(\mathsf{f}_{2}\) in Eq. (9) correspond to \(f_{2}\) of Ref. [20] and not to \(f_{2}\) of Ref. [18].
The renormalized energy-momentum tensor,
\[\langle T^{\mu\nu}\rangle_{\rm ren}=\frac{1}{4}\mathcal{A}_{4}\,g^{\mu\nu}+ \text{non-anomalous part}, \tag{10}\]
contains the anomalous part given by an \(\mathcal{A}_{4}\) function and a non-anomalous part (not shown explicitly). In \(D=4\) spacetime dimensions, the \(\mathcal{A}_{4}\) function in stress-energy tensor (10) is related to the \(O(s^{2})\) prefactor in the power series expansion (9) of the bi-scalar \(F\)[20]:
\[\mathcal{A}_{4}=\frac{1}{2}\frac{1}{(4\pi)^{2}}\left(\frac{\partial}{\partial is }\right)^{2}\left[e^{-im^{2}s}F(x,x;is,4)\right]\bigg{|}_{s=0}. \tag{11}\]
In the massless theory (\(m=0\)), the last term in Eq. (10) reduces to a traceless tensor, and the trace of the energy-momentum tensor (10) is fully determined by the trace (scale) anomaly (11):
\[\left\langle T^{\mu}_{\ \mu}\right\rangle\equiv g_{\mu\nu}\langle T^{\mu\nu} \rangle_{\rm ren}=\mathcal{A}_{4},\qquad\text{[for $m=0$]}\,, \tag{12}\]
where the short-hand notation \(\left\langle T^{\mu}_{\ \mu}\right\rangle\) is used for representational convenience (see also the discussion of Ref. [14] on non-commutativity of the regularization operation and the trace operation).
Finally, combining Eqs. (9), (11), (12) and matching them with the \(O(s^{2})\) term in the effective action of Ref. [18] leads us to our main result (2). In the rest of the paper, we ensure that Eq. (2) is valid for physical environments where both sides of this equation are known. We also discuss photon and neutrino pair production.
**A neutral scalar field in curved spacetime.** As the first check of our result (2), we consider a single-component neutral scalar field of mass \(m\) and generic non-conformal coupling \(\xi\) to gravity described by Lagrangian (4). It is well known that quantum fluctuations in this theory produce the following trace anomaly [20; 23]:
\[\left\langle T^{\mu}_{\ \mu}\right\rangle_{\rm 1sc} =\frac{1}{(4\pi)^{2}}\bigg{[}\frac{1}{180}R_{\mu\nu\alpha\beta}R^{ \mu\nu\alpha\beta}-\frac{1}{180}R_{\mu\nu}R^{\mu\nu} \tag{13}\] \[+\frac{1}{6}\left(\frac{1}{5}-\xi\right)\square R+\frac{1}{2} \left(\frac{1}{6}-\xi\right)^{2}R^{2}+\frac{1}{2}m^{4}\bigg{]}\,.\]
We substitute Eq. (13) to our formula (2) and recover exactly the result of Ref. [18] given in Eq. (3) for the off-horizon pair creation rate for a single (\(N=1\)) neutral (\(q=0\)) massless (\(m=0\)) scalar field4. The subscript "1sc" in Eq. (13) stresses that this expression is derived for a one-component scalar field.
Footnote 4: These equations correspond to the same physical result since the \(\square R\) term, present in Eq. (13) and absent in Eq. (3), can be removed by a finite local counterterm during the renormalization procedure and is, therefore, physically irrelevant.
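Since Eqs. (3) and (13) share the same curvature invariants, the check reduces to comparing overall prefactors, \((\pi/2)\cdot(4\pi)^{-2}=1/(32\pi)\); a one-line numerical confirmation:

```python
from math import pi

lhs = (pi / 2) / (4 * pi) ** 2  # (pi/2) times the prefactor of Eq. (13)
rhs = 1 / (32 * pi)             # prefactor of Eq. (3)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```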
General case.A quantum field theory of \(N_{S}\) scalar degrees of freedom, \(N_{F}\) Dirac fermions (a single Majorana or Weyl fermion contributes half of a Dirac fermion, \(N_{F}=1/2\)) and \(N_{V}\) species of massless vector fields, the trace anomaly gets the following form [24; 25; 14]:
\[\left\langle T^{\mu}_{\ \mu}\right\rangle=bC^{2}+b^{\prime}E_{4}+cF_{\mu\nu}F^{ \mu\nu}\,, \tag{14}\]
where
\[C^{2}=R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}-2R_{\mu\nu}R^{\mu\nu}+\frac{R^{ 2}}{3}, \tag{15}\]
is the Weyl tensor squared and
\[E_{4}=R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}-4R_{\mu\nu}R^{\mu\nu}+R^{2}, \tag{16}\]
is the Euler density (the integrand of the topological, Gauss-Bonnet term) in \(D=4\) dimensions. In Eq. (14), the physically irrelevant \(\Box R\) term is omitted5 and the conformal coupling (\(\xi=1/6\) for scalars) is assumed. The parameters are as follows [13; 14; 15; 20; 26]:
Footnote 5: See also a relevant discussion on \(\Box R\) in Ref. [14].
\[b= \frac{1}{120}\frac{1}{(4\pi)^{2}}\left(N_{S}+6N_{F}+12N_{V} \right)\,, \tag{17}\] \[b^{\prime}= -\frac{1}{360}\frac{1}{(4\pi)^{2}}\left(N_{S}+11N_{F}+62N_{V} \right)\,. \tag{18}\]
As an immediate check, one finds that in a pure gravitational background (\(F_{\mu\nu}=0\)) with \(N_{S}=N\) species of neutral scalar fields (and \(N_{F}=N_{V}=0\)), Eq. (14) reduces, as expected, to Eq. (13) with a factor of \(N\) and leads us, via Eq. (2), to the recent result (3).
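Since Eqs. (15)-(18) fix the gravitational part of the anomaly completely, this reduction can be checked mechanically. Below is a minimal sketch in Python (exact rational arithmetic; the function names are ours, and the common \(1/(4\pi)^{2}\) prefactor is factored out) that collects the coefficients of the three curvature invariants and confirms that \(N_{S}=1\), \(N_{F}=N_{V}=0\) reproduces the conformally coupled (\(\xi=1/6\), \(m=0\)) limit of Eq. (13):

```python
from fractions import Fraction as Fr

def b_coeffs(NS, NF, NV):
    # Eqs. (17)-(18), with the overall 1/(4*pi)^2 prefactor stripped off.
    b  = Fr(1, 120) * (NS + 6 * NF + 12 * NV)
    bp = Fr(-1, 360) * (NS + 11 * NF + 62 * NV)
    return b, bp

def curvature_coeffs(NS, NF, NV):
    # <T^mu_mu> = b*C^2 + b'*E4 with C^2 = Riem^2 - 2 Ric^2 + R^2/3, Eq. (15),
    # and E4 = Riem^2 - 4 Ric^2 + R^2, Eq. (16); collect each invariant:
    b, bp = b_coeffs(NS, NF, NV)
    return {"Riem^2": b + bp, "Ric^2": -2 * b - 4 * bp, "R^2": b / 3 + bp}

# One conformally coupled neutral scalar: expect (1/180, -1/180, 0),
# i.e. the xi = 1/6, m = 0 limit of Eq. (13).
print(curvature_coeffs(1, 0, 0))
```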
The last term in Eq. (14) represents a non-universal ("matter") part which accounts for renormalization effects related to the scale dependence of the couplings of the theory. While Eq. (14) gives the matter term for a vector-field background, a nontrivial scalar background in an interacting scalar field theory can also generate a matter-type contribution to the conformal anomaly, which can be found in Ref. [27].
For gauge vector fields coupled minimally with matter fields via the electric coupling \(e\), the prefactor
\[c=-\frac{\beta(e)}{2e}\,, \tag{19}\]
of the last term in Eq. (14) depends on the beta function \(\beta(e)=\mu\,\mathrm{d}e/\mathrm{d}\mu\) associated with the running of the coupling \(e\). A nonvanishing beta function expresses the fact that radiative corrections make the electric charge \(e=e(\mu)\) dependent on the renormalization energy scale \(\mu\). This effect appears as a result of vacuum polarization which implies, for example, that the electric charge of a particle probed at a large distance (by a low-energy photon) does not match the charge of the same particle at a short distance (probed by a high-energy photon). Therefore, radiative corrections can break the scale invariance of the system and naturally contribute to the scale anomaly (14).
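At one loop this running is elementary to quantify. The following hedged sketch (Python; electron loop only, with approximate reference values) integrates \(\mathrm{d}(1/\alpha)/\mathrm{d}\ln\mu=-2/(3\pi)\), which follows from the QED beta function quoted below in Eq. (22):

```python
import numpy as np

def alpha_qed(mu_gev, mu0_gev=0.000511, alpha0=1 / 137.036):
    # One-loop QED running with a single electron loop:
    # 1/alpha(mu) = 1/alpha(mu0) - (2/(3*pi)) * ln(mu/mu0).
    inv_alpha = 1.0 / alpha0 - (2.0 / (3.0 * np.pi)) * np.log(mu_gev / mu0_gev)
    return 1.0 / inv_alpha

print(alpha_qed(91.19))  # ~1/134.5 at the Z mass (electron loop only;
                         # the full Standard Model value is ~1/128)
```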
Notice that the gravitational part of the anomaly represented by the first two terms in Eq. (14) is exact at one loop, implying that higher-order corrections to this expression vanish. This statement does not hold for the third, matter term, since radiative corrections to it exist, generally, at all loop orders [28].
**Scalar QED in flat spacetime.** A similarity between the gravitational particle production and the Schwinger pair production in flat spacetime in the background electric field has been noticed in Ref. [18] on the basis of equation (3). Here we show that the conformal anomaly plays an essential role in this relation.
Consider a theory of \(N_{S}\) species of massless complex scalars coupled to electromagnetism with the same electric coupling \(e\) (a "scalar Quantum Electrodynamics" or sQED). Since we have already established the relation with the gravitational part of the trace anomaly in this theory, we consider below a flat spacetime where the first two (gravitational) terms in the trace (14) vanish. However, this theory still possesses the trace anomaly because its one-loop beta function is non-zero [29; 30],
\[\beta_{\mathrm{sQED}}^{\mathrm{1loop}}=\frac{N_{S}e^{3}}{48\pi^{2}}\,. \tag{20}\]
Equations (19) and (20) imply that the coefficient in the last term of the trace anomaly (14) is \(c=-N_{S}e^{2}/(96\pi^{2})\). Then Eq. (2) gives us \(\Gamma_{\mathrm{sQED}}=-N_{S}e^{2}F_{\mu\nu}F^{\mu\nu}/(192\pi)\), which exactly coincides with the pair production rate (3) of Ref. [18] if one takes into account that each complex field carries two degrees of freedom: \(N=2N_{S}\).
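The short algebra behind this statement can be verified symbolically. A minimal sketch (Python/SymPy), assuming only the one-loop beta function (20) and the flat-spacetime relation (28) derived below:

```python
import sympy as sp

e, E, B, NS = sp.symbols('e E B N_S', positive=True)
beta_sqed = NS * e**3 / (48 * sp.pi**2)        # one-loop sQED beta function, Eq. (20)
F2 = 2 * (B**2 - E**2)                         # F_{mu nu} F^{mu nu}
Gamma = -sp.pi * beta_sqed / (4 * e) * F2      # production rate, Eq. (28)
print(sp.simplify(Gamma.subs(B, 0)))           # N_S*E**2*e**2/(96*pi), i.e. Eq. (21)
```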
According to our convention in Eq. (2), there is no particle production for a negative production rate. Since \(F_{\mu\nu}F^{\mu\nu}=2(\mathbf{B}^{2}-\mathbf{E}^{2})\), this implies the absence of particle creation in a pure magnetic field, because there \(\Gamma_{\mathrm{sQED}}<0\). However, in the electric-field background, one gets the following well-known result for the complex scalar field (reproduced also in Ref. [18] for \(N_{S}=1\)):
\[\Gamma_{\mathrm{sQED}}=N_{S}\frac{e^{2}\mathbf{E}^{2}}{96\pi}\,. \tag{21}\]
This equivalence further supports the validity of Eq. (2) in one loop.
**Spinor QED.** The pair creation rate in the (spinor) QED with a single flavor of massless Dirac fermion and a single gauge (electromagnetic) field can be derived with the use of correspondence (2) together with the anomaly relations (14)-(19) by setting \(N_{S}=0\), \(N_{F}=1\), and \(N_{V}=1\). Taking into account that the one-loop beta function of the spinor QED is four times bigger (per particle) than its scalar analogue (20) [30]:
\[\beta_{\mathrm{QED}}^{\mathrm{1loop}}=\frac{e^{3}}{12\pi^{2}}\,, \tag{22}\]
one gets the following prediction for the particle production rate:
\[\Gamma_{\mathrm{QED}}^{(m=0)}=\frac{1}{11\,520\pi}\biggl{(}-19R_{ \mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}+184R_{\mu\nu}R^{\mu\nu} \tag{23}\] \[\qquad\qquad\qquad-55R^{2}\biggr{)}-\frac{e^{2}}{48\pi}F_{\mu\nu}F^ {\mu\nu}\,.\]
Notice that in a flat background, the particle production rate in the massless QED (23) reduces exactly to the well-known QED result [3; 30] in the limit \(m\to 0\):
\[\Gamma_{\mathrm{QED}}^{(m=0)}=\frac{e^{2}\mathbf{E}^{2}}{24\pi}\qquad\text{[ electromagnetic]}\,. \tag{24}\]
To estimate the contribution of the gravitational part to the production rate, consider now a purely gravitational background given by the static Schwarzschild spacetime of a body with the mass \(M\):
\[ds^{2} =-\left(1-\frac{2MG}{r}\right)dt^{2}+\left(1-\frac{2MG}{r}\right)^{ -1}dr^{2}\] \[+r^{2}(d\theta^{2}+\sin^{2}\theta d\varphi^{2}) \tag{25}\]
Given Ricci flatness (\(R_{\mu\nu}=0\)) of this metric, the gravitational contribution to the pair-creation rate (23) is provided only by the Kretschmann scalar:
\[K=R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}=48\frac{G^{2}M^{2}}{r^{6}}\,. \tag{26}\]
Thus, it appears that the purely gravitational contribution to the pair production rate (23) is always negative
\[\delta\Gamma^{(m=0)}_{\text{QED}}=-\frac{19}{240\pi}\frac{G^{2}M^{2}}{r^{6}} \qquad\text{[gravitational]}\,, \tag{27}\]
implying that our conformal anomaly mechanism alone cannot create particles outside of the event horizon even in the presence of a strong gravitational field and even for massless QED. Moreover, our results imply that the ordinary Schwinger pair production due to a background electric field (24) will be inhibited by the gravitational contribution (27) in curved spacetime.
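This coefficient follows directly from substituting the Kretschmann scalar (26) into the Riemann-squared term of Eq. (23), which is the only surviving term for a Ricci-flat metric; a one-line symbolic check (Python/SymPy):

```python
import sympy as sp

G, M, r = sp.symbols('G M r', positive=True)
K = 48 * G**2 * M**2 / r**6                 # Kretschmann scalar, Eq. (26)
# Schwarzschild is Ricci-flat, so only the Riemann^2 piece of Eq. (23) survives:
Gamma_grav = sp.Rational(-19, 11520) / sp.pi * K
print(sp.simplify(Gamma_grav))              # -19*G**2*M**2/(240*pi*r**6), Eq. (27)
```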
It is worth here mentioning that the considerations of Ref. [14] on the sense of the conformal anomaly in the context of renormalization of quantum field theories suggest that in conformally-non-invariant theories (for example, for massive fields), the right-hand-side of Eq. (2) should be modified: \(\left\langle T^{\mu}_{\;\mu}\right\rangle\to g_{\mu\nu}\langle T^{\mu\nu} \rangle_{\text{ren}}-\left\langle g_{\mu\nu}T^{\mu\nu}\right\rangle_{\text{ ren}}\). This conjecture implies, in particular, that the explicitly non-conformal mass term \(m^{4}\) will not enter Eq. (2).
Coming back to the massless case in flat spacetime, the proportionality of the pair-creation rates for scalar (21) and spinor (24) QED to their beta functions, Eqs. (20) and (22), respectively, is not surprising given an intimate relation between the effective Euler-Heisenberg Lagrangian and the beta function (for an excellent review, see Ref. [30]). Since the beta function also contributes to the trace anomaly, the relation of the trace anomaly to the pair-creation rate closes the logical triangle, thus qualitatively supporting Eq. (2) on physical grounds.
Our result (2) suggests that in flat spacetime, the creation rate of pairs of massless particles in a classical (electromagnetic) background is related to the respective beta function:
\[\Gamma_{\text{flat}}=-\frac{\pi\beta(e)}{4e}F_{\mu\nu}F^{\mu\nu}\,. \tag{28}\]
This result should be valid at least in one-loop order with the already mentioned reservation that a negative production rate implies no production.
**No photon production.** In realistic QED in weak background electromagnetic fields with the strength below the Schwinger limit, the four-photon scattering can be neglected [30], so that photon propagation can be described by free Maxwell theory with the simple Lagrangian \(\mathcal{L}_{\text{ph}}=-(1/4)F_{\mu\nu}F^{\mu\nu}\). The particle creation rate then corresponds to \(N_{F}=0\), \(N_{S}=0\) and \(N_{V}=1\) and gives us the following discouraging result, \(\Gamma_{\text{ph}}=-13G^{2}M^{2}/(120\pi r^{6})<0\), implying that no photons can be created in the gravitational field due to this conformal anomaly mechanism.
**Neutrino-anti-neutrino pairs.** Similar considerations can also be applied to neutrino-anti-neutrino pair creation with the appropriate replacement of the spinor degrees of freedom by the sum of Dirac and Majorana/Weyl neutrino species, \(N_{F}\to N_{\nu}=N_{D}+(1/2)N_{M}\), and taking \(N_{V}=N_{S}=0\) in the above expressions. One gets \(\Gamma_{\nu}=7G^{2}M^{2}/(240\pi r^{6})>0\), so that pairs of light neutrinos can potentially be created by a sufficiently strong background gravitational field. For a Dirac neutrino, the pair production rate \(\Gamma_{\nu}\) is 7/2 times bigger than the rate of pair production for scalar particles, which is about two times bigger than the one for the Hawking radiation. The relevant estimations for scalar particles in physically interesting gravitational fields can be found in Ref. [18].
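Both of these purely gravitational rates, as well as the scalar reference value, follow from the same bookkeeping. A hedged sketch (Python/SymPy); the overall factor \(\pi/2\) between the rate and the anomaly is not restated here explicitly but is inferred by matching the coefficients of Eq. (23) against Eqs. (14), (17) and (18):

```python
import sympy as sp

G, M, r = sp.symbols('G M r', positive=True)
K = 48 * G**2 * M**2 / r**6                       # Kretschmann scalar, Eq. (26)

def riem2_coeff(NS, NF, NV):
    # b + b' from Eqs. (17)-(18): the Riem^2 coefficient of the anomaly (14).
    b  = sp.Rational(1, 120) * (NS + 6 * NF + 12 * NV) / (4 * sp.pi)**2
    bp = -sp.Rational(1, 360) * (NS + 11 * NF + 62 * NV) / (4 * sp.pi)**2
    return b + bp

species = {"photon": (0, 0, 1), "Dirac neutrino": (0, 1, 0), "scalar": (1, 0, 0)}
for name, (NS, NF, NV) in species.items():
    print(name, sp.simplify(sp.pi / 2 * riem2_coeff(NS, NF, NV) * K))
# photon         -> -13*G**2*M**2/(120*pi*r**6)
# Dirac neutrino ->   7*G**2*M**2/(240*pi*r**6)   (7/2 times the scalar rate)
# scalar         ->      G**2*M**2/(120*pi*r**6)
```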
Finally, one could ask whether these results, derived for massless spinors, are applicable to particles with mass \(m_{\nu}\). For the pure electromagnetic contribution to the pair creation rate (23), the condition is well known [30]: the electric field strength should substantially exceed the critical Schwinger field, \(E\gg E_{c}^{\text{Sch}}=m_{e}^{2}/e\simeq 1.3\times 10^{18}\,\text{V/m}\). Likewise, the same condition can be obtained by demanding that the gravitational contribution should exceed the anticipated6\(\propto m_{e}^{4}\) term generated by the explicit breaking of the conformal symmetry. For neutrinos in the field of a gravitating body with mass \(M\), the applicability condition then reads as \(r\ll r_{c}\) with the critical radius \(r_{c}=\sqrt{GM}/m_{\nu}\) up to a \(O(1)\) factor.
Footnote 6: _Cf._ the last term in Eq. (13).
**Savvidi magnetic instability in QCD.** In a pure magnetic field, both in scalar QED and spinor QED, the right-hand side of Eq. (2) is a negative quantity and, therefore, no instability associated with particle production can occur. Of course, this natural conclusion is supported by the fact that their beta functions, Eqs. (20) and (22), are positive. But what happens if the beta function is negative?
Consider, for example, Yang-Mills (YM) theory which determines non-perturbative properties of Quantum Chromodynamics (QCD). The beta function of \(N_{c}\)-color YM theory, \(\beta_{\text{YM}}(g)=-11N_{c}g^{3}/(48\pi^{2})\), is a negative function of the strong coupling constant \(g\). Adapting Eq. (28) to non-Abelian fields possessing the field
strengths \(F^{a}_{\mu\nu}\), one gets the following _formal perturbative expression_ for the gluon production rate:
\[\Gamma^{\rm pert}_{\rm YM} = \frac{11N_{c}g^{2}}{192\pi}F^{a}_{\mu\nu}F^{a,\mu\nu} \tag{29}\] \[\equiv \frac{11N_{c}}{96\pi}\left[(g\mathbf{B}^{a})^{2}-(g\mathbf{E}^{a})^{2} \right]\,,\]
where the sum over the gluon species, \(a=1,\ldots,N_{c}^{2}-1\), is implicitly assumed.
Equation (29) represents a formal expression which is not applicable to the ground state of YM theory because Eq. (29) corresponds to the anomalous breaking of scale symmetry associated with the perturbative renormalization of couplings - hence the superscript "pert" in Eq. (29) - while in YM theory, the conformal symmetry is broken dynamically and non-perturbatively7. Despite this fact, Eq. (29) still allows us to draw another interesting connection with an already known effect: the instability of the perturbative gluonic vacuum. Indeed, since \(\Gamma_{\rm YM}=11N_{c}(g\mathbf{B}^{a})^{2}/(96\pi)>0\), even the weakest background gluomagnetic field leads to the creation of gluon pairs and makes the gluonic vacuum unstable. This observation matches well with the instability of the perturbative gluon vacuum [31] which drives the creation of the magnetic condensate (the Savvidi vacuum [19]) and the formation of the suggested magnetic-spaghetti vacuum state [32; 33] precisely due to the negativeness of the YM beta function, \(\beta_{\rm YM}(g)<0\) (see also a related critical discussion in Ref. [34]).
Footnote 7: In QCD, the magnitude of the dynamical breaking of conformal symmetry, \(\langle T^{\mu}_{\mu}\rangle\simeq\Lambda_{\rm QCD}^{4}\), is determined by an intrinsic mass scale \(\Lambda_{\rm QCD}\) of the order of a few hundred MeV.
**Conclusions.** We suggested the simple formula (2) for the off-horizon particle production rate in curved spacetime, a phenomenon recently proposed in Ref. [18], and argued that its underlying mechanism is based on the anomalous breaking of the conformal symmetry. The anomalous particle production can occur in static gravitational fields and can operate, in particular, above the event horizons of black holes. These two properties distinguish the anomalous production from the dynamical Casimir effect in time-dependent backgrounds and the Hawking mechanism of particle production, which occurs near the black-hole horizons.
Our formula (2) agrees with known results for the pair production rate in the gravitational background for a scalar field presented recently in Ref. [18] where the off-horizon pair production has been suggested first. We also reproduce the known expressions for the Schwinger pair production in QED with scalar and spinor particles. Our approach also supports instability in the perturbative gluonic vacuum in the chromomagnetic field, thus suggesting the formation of the magnetic condensate in accordance with widely accepted ideas about the nature of the QCD vacuum.
Our mechanism suggests that the photon pairs cannot be produced in a static Schwarzschild spacetime. However, our estimations show that a sufficiently strong gravitational field can create pairs of neutrinos and anti-neutrinos (as well as other light spinors), thus providing us with another channel for the evaporation of black holes and other gravitating objects.
|
2303.10956 | Linking High-Harmonic Generation and Strong-Field Ionization in Bulk
Crystals | The generation of high-order harmonics in bulk solids subjected to intense
ultrashort laser pulses has opened up new avenues for research in extreme
nonlinear optics and light-matter interaction on sub-cycle timescales. Despite
significant advancement over the past decade, a complete understanding of the
involved phenomena is still lacking. High-harmonic generation in solids is
currently understood as arising from nonlinear intraband currents, interband
recollision and ionization-related phenomena. As all of these mechanisms
involve or rely upon laser-driven excitation we combine measurements of the
angular dependence of nonlinear absorption and high-order harmonic generation
in bulk crystals to demonstrate the relation between high-harmonic emission and
nonlinear, laser-induced ionization in solids.
An unambiguous correlation between the emission of harmonics and
laser-induced ionization is found experimentally, that is supported by
numerical solutions of the semiconductor Bloch equations and calculations of
orientation-dependent ionization rates using maximally localized
Wannier-functions. | Peter Jürgens, Sylvianne D. C. Roscam Abbing, Mark Mero, Graham G. Brown, Marc J. J. Vrakking, Alexandre Mermillod-Blondin, Peter M. Kraus, Anton Husakou | 2023-03-20T09:30:37Z | http://arxiv.org/abs/2303.10956v1 | # Linking High-Harmonic Generation and Strong-Field Ionization in Bulk Crystals
###### Abstract
The generation of high-order harmonics in bulk solids subjected to intense ultrashort laser pulses has opened up new avenues for research in extreme nonlinear optics and light-matter interaction on sub-cycle timescales. Despite significant advancement over the past decade, a complete understanding of the involved phenomena is still lacking. High-harmonic generation in solids is currently understood as arising from nonlinear intraband currents, interband recollision and ionization-related phenomena. As all of these mechanisms involve or rely upon laser-driven excitation we combine measurements of the angular dependence of nonlinear absorption and high-order harmonic generation in bulk crystals to demonstrate the relation between high-harmonic emission and nonlinear, laser-induced ionization in solids. An unambiguous correlation between the emission of harmonics and laser-induced ionization is found experimentally, that is supported by numerical solutions of the semiconductor Bloch equations and calculations of orientation-dependent ionization rates using maximally localized Wannier-functions.
## I Introduction
High-order harmonic generation (HHG) in gases marked the birth of attosecond science allowing to track carrier dynamics on sub-cycle timescales [1; 2; 3; 4]. Transferring the concept of HHG to solid-state systems [5] has opened up several research avenues ranging from fundamental investigations of strong-field-driven carrier dynamics [6] towards the development of compact extreme-ultraviolet (XUV) sources [7] and petahertz electronics [8; 9]. In solids, four main sources of nonlinearity have been identified and investigated with regards to harmonic generation. First, traditional perturbative nonlinearities [10] have been invoked to explain conventional second harmonic generation (SHG) ever since Franken's seminal work [11] that marked the advent of nonlinear optics. It is based on the anharmonic motion of bound electrons in the valence band and is held responsible for low-order harmonic generation with multiple applications in frequency conversion as well as in optical-parametric chirped pulse amplification (OPCPA) [12]. Second, high-order harmonics extending to frequencies in the XUV [13] have been associated with interband recollisions (in strong analogy with the three-step model in the atomic case [1; 2; 14]). Third, high harmonics have also been explained by intraband currents where the nonlinearity enters through the non-parabolic shape of the conduction (and valence) bands [15]. Fourth, the nonlinearity that is inherent to the process of photoionization was proposed as a possible origin for harmonic generation [16; 17]. Three possible ionization-related mechanisms have been discussed in the context of HHG [18; 19]. Brunel harmonics, arising from the acceleration of quasi-free carriers excited by photoionization have been analyzed in gases, clusters, bulk solids and thin films [20; 21; 22; 23; 24]. Moreover, after non-resonant photoionization an excited electron has nonzero velocity which provides sub-cycle contributions to the polarization and gives rise to the generation of harmonics. However, the strength of these velocity harmonics is expected to be insignificant under typical experimental conditions due to the modest photon energies that result in small excess velocities. Finally, the previously overlooked injection current, originating from the spatial displacement of an electron wavepacket during laser-driven tunneling ionization, was identified as the dominant source of low-order harmonic generation in fused silica [19; 25].
The highly anisotropic angular dependence of the HHG process in crystalline solids has been correlated to the electronic band structure [13], petahertz photocurrents [26], real-space trajectories of electronic wavepackets [27] and van-Hove singularities [28; 29]. Low-order (below-bandgap) harmonic emission has been associated with intraband dynamics while above-bandgap HHG is generally attributed to interband recollision [26; 30].
All of the aforementioned HHG mechanisms - except for the Kerr-type nonlinearity - rely either on the presence or on the excitation of quasi-free conduction band electrons. In the most common experimental scheme the photon energy of the driving near-infrared or mid-infrared laser field \(\hbar\omega\) is smaller than the bandgap \(E_{g}\) of the irradiated crystal. Thus, excited carriers are generated by nonlinear photoionization or electron-impact ionization [31]. Despite intense research in the past decade, the connection between carrier excitation and the anisotropic angular dependence of the harmonic emission has not yet been fully understood. While photoionization is often a key factor in harmonic generation, to date, there has been no investigation of its angular dependence under HHG conditions.
In this article we experimentally demonstrate a correlation between orientation-dependent ionization yields and the efficiency of HHG in various bulk crystals. We combine measurements of the angular dependence of nonlinear absorption (also referred to as multiphoton crystallography [32; 33]) and angle-dependent HHG efficiencies to establish a direct link between the excitation of quasi-free carriers and the emission of high-order harmonics. We identify an unambiguous connection between intense HHG emission and strong ionization for distinct orientations in various oxide and fluoride crystals. Our experimental results are supported by numerical simulations based on the semiconductor Bloch equations (SBEs) and calculations of the photoionization rate based on the Wannier-formalism. Our numerical results reproduce the experimentally observed angular dependence of the harmonic emission and provide insights into the complex interplay of interband and intraband dynamics as well as the importance of ionization-related mechanisms for low-order harmonic generation.
## II Experimental setup
In the experiments we used the signal beam of a home-built, high-repetition-rate, dual-beam OPCPA (100 kHz) with a central wavelength of 1500 nm and a full-width half-maximum (FWHM) pulse duration of \(\sim\)50 fs (architecture based on the one presented in Ref. [34]) to generate high-order harmonics from bulk crystals. The linearly polarized beam was focused to a \(1/e^{2}\) beam diameter of \(\sim\)50 μm (measured by the knife-edge technique in ambient air), providing a maximum peak intensity in ambient air of \(\sim\)30 TW cm\({}^{-2}\) in the focal plane. High harmonics emitted from bulk crystals were analyzed in transmission geometry with the help of a commercial visible-ultraviolet (VIS-UV) spectrometer (Avantes AvaSpec-HS1024x58/122TEC). The energy remaining in the short-wave infrared (SWIR) signal beam was directed onto a photodiode [cf. Fig. 1(a)] with the help of a highly reflective mirror at 1500 nm. A possible contamination of the transmitted signal beam by harmonic radiation was excluded due to the low reflectivity of the dielectric mirror for the harmonic wavelengths and by the low conversion efficiencies (typically around \(10^{-6}\) [35]). The crystalline samples were mounted on a computer-controlled rotation stage enabling a precise and reproducible variation of the angle \(\theta\) between the main axis of the crystal and the laser polarization axis.
## III Experimental results
Figure 1(b) and (c) show exemplary high-harmonic spectra obtained in a 200 μm thick MgO crystal, cut along the (100) plane, at two different orientation angles (\(\theta\) = 25\({}^{\circ}\) and 45\({}^{\circ}\), for minimum and maximum signal strength) and the absorption of the SWIR laser pulse as a function of \(\theta\), for two different peak intensities (\(I_{1}\) = 11 TW cm\({}^{-2}\), \(I_{2}\) = 18 TW cm\({}^{-2}\)). The harmonic yield of all observed orders is higher for \(\theta\) = 45\({}^{\circ}\) when compared to \(\theta\) = 25\({}^{\circ}\) (the crystal angles were calibrated by perturbative third harmonic generation at a moderate excitation intensity of \(\sim\)2 TW cm\({}^{-2}\)). Based on the inset shown in Fig. 1(b) illustrating the cubic crystal structure of MgO, such a yield anisotropy can be attributed to high-symmetry directions in the crystal lattice. At an angle of 45\({}^{\circ}\) the real-space trajectories of the excited electron wavepackets point towards the nearest neighbour of the same species (Mg-Mg or O-O) inside the crystal structure (monoatomic nearest-neighbour direction).
Figure 1: Experimental setup and results obtained in a bulk MgO crystal. (a). A short-wavelength infrared (SWIR) laser pulse (\(\lambda_{0}\) = 1500 nm, \(\tau\) = 50 fs) is focused into the bulk of a crystal mounted on a rotation stage. The harmonic radiation is analyzed using a visible-ultraviolet (VIS/UV) spectrometer. The transmitted fundamental radiation can be analyzed by inserting a moveable dielectric mirror directing the SWIR beam onto a photodiode. (b). Illustrative harmonic spectra containing the third, the fifth and the seventh harmonic of the SWIR driving field obtained in a bulk MgO crystal at two different orientations (\(\theta\) = 25\({}^{\circ}\) and 45\({}^{\circ}\)) and a peak intensity of 18 TW cm\({}^{-2}\). The large spectral width of the seventh harmonic is attributed to spectral broadening in the detection system. (c). Absorption of the fundamental laser field for different peak intensities below the damage threshold as a function of the orientation angle \(\theta\).
The SWIR absorption exhibits no pronounced \(\theta\)-dependence at low peak intensities below \(\sim\) 8 TW cm\({}^{-2}\). Instead, the absorption stays close to 0 after accounting for the transmission losses due to Fresnel reflection at the interfaces (\(\sim\) 16 % for MgO). At an intermediate SWIR excitation intensity of 11 TW cm\({}^{-2}\) the average absorption increases to \(\sim\)4 % [yellow circles in Fig. 1(c)]. At the same time, distinct extrema begin to emerge. When the excitation intensity approaches the laser-induced damage threshold of the used samples (determined to be \(\sim\)20 TW cm\({}^{-2}\)), the modulation depth of these oscillations increases and a clear eight-fold symmetry becomes apparent in the \(\theta\)-dependent absorption of the SWIR pump laser pulse, with maxima appearing in the diatomic (\(\theta=\) 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\)) and monoatomic (\(\theta=\) 45\({}^{\circ}\), 135\({}^{\circ}\), 225\({}^{\circ}\) and 315\({}^{\circ}\)) nearest-neighbour directions.
With the goal of linking the nonlinear absorption and the emission of high-order harmonics, the orientation-dependent harmonic yields are compared to the nonlinear absorption in Fig. 2. Similar to the angular distribution of the absorption, the third harmonic signal exhibits an eight-fold symmetry at a SWIR peak intensity of 18 TW cm\({}^{-2}\) [see dark green circles in Fig. 2(a)]. As reported in Ref. [27], the maxima of the third harmonic emission along the monoatomic nearest-neighbour directions (\(\theta=\) 45\({}^{\circ}\), 135\({}^{\circ}\), 225\({}^{\circ}\) and 315\({}^{\circ}\)) are suppressed at low intensity where perturbative mechanisms dominate the nonlinear response [orange circles in Fig. 2(a)]. Hence, a transition from a four-fold symmetry to an eight-fold symmetry can be observed for the third harmonic as the excitation intensity increases. At an excitation intensity of 18 TW cm\({}^{-2}\) the harmonic yield of all observed orders is maximized along the diatomic nearest-neighbour directions (Mg-O bonding directions at \(\theta=\) 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\)) as shown in Fig. 2(b). Further maxima are formed along the monoatomic nearest-neighbour directions (at \(\theta=\) 45\({}^{\circ}\), 135\({}^{\circ}\), 225\({}^{\circ}\) and 315\({}^{\circ}\)) resulting in an eight-fold symmetry of all observed harmonic orders whose extrema align exactly with those of the nonlinear absorption [pink circles in Fig. 2(b)], suggesting that the harmonic emission is directly correlated with nonlinear photoionization. Due to the lower signal strength, the fifth and seventh harmonic could only be observed at intensities where all orders exhibited an eight-fold symmetry.
For further analysis, the angle-dependent harmonic yields \(Y(\theta)\) as well as the \(\theta\)-dependent absorption are approximated by a periodic fit function according to
\[Y(\theta)=A_{0}+A_{\rm dia}\cos^{2n}(2\theta)+A_{\rm mono}\sin^{2n}(2\theta). \tag{1}\]
Here \(A_{0}\) is an offset amplitude, and \(A_{\rm dia}\) and \(A_{\rm mono}\) denote the amplitudes associated with the oscillations along the di- and monoatomic nearest-neighbour directions, respectively. The parameter \(n\) determines the width of the maxima and is well approximated by 3 throughout this work.
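In practice this is a three-parameter least-squares fit; a minimal, self-contained sketch (Python/SciPy) with \(n\) fixed to 3 is given below. The function and variable names are ours, and the mock data and amplitude values are placeholders rather than measured yields:

```python
import numpy as np
from scipy.optimize import curve_fit

def yield_model(theta, A0, A_dia, A_mono, n=3):
    # Eq. (1): eight-fold symmetric model of the angle-dependent yield.
    return A0 + A_dia * np.cos(2 * theta)**(2 * n) + A_mono * np.sin(2 * theta)**(2 * n)

theta = np.deg2rad(np.arange(0, 360, 5))            # orientation angles
rng = np.random.default_rng(0)                      # mock noisy "measurement"
y = yield_model(theta, 0.1, 0.4, 0.3) + 0.02 * rng.standard_normal(theta.size)

# with p0 of length 3, curve_fit leaves n at its default value of 3
popt, _ = curve_fit(yield_model, theta, y, p0=[0.1, 0.5, 0.5])
print(dict(zip(["A0", "A_dia", "A_mono"], popt.round(3))))
```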
In Fig. 2(c) the amplitudes that were introduced in Eq. 1 are analyzed as a function of the SWIR intensity. The eight-fold angular dependence of the fifth and the seventh harmonic contains equal contributions of mono- and diatomic amplitudes. However, for the third harmonic \(A_{\rm mono}\) and \(A_{\rm dia}\) evolve differently. While \(A_{\rm dia}\) consistently exhibits values between 0.3 and 0.5 over the full range of intensities, \(A_{\rm mono}\) sharply increases up to an SWIR intensity of \(\sim\) 4.5 TW cm\({}^{-2}\) before following the same trend as \(A_{\rm dia}\) at a slightly (\(\sim\) 0.15) lower level. This intensity-dependent behavior is consistent with the aforementioned transition from a four-fold to an eight-fold symmetry as shown in Fig. 2(a).
Figure 2(d) demonstrates how the transmission of the SWIR pulse decreases as the excitation intensity increases towards the laser-induced damage threshold of the MgO crystal. At the same time, the modulation amplitude (for the absorption \(A_{\rm mono}\approx A_{\rm dia}\)) of the eight-fold symmetry increases with intensity in a manner that is notably different from the intensity dependence of the amplitudes that were discussed in Fig. 2(c). This is consistent with the fact that, although the transmission is related to the frequency conversion process, these processes only account for a negligible part of the transmission loss.
Figure 2: Experimentally determined HHG and absorption in MgO. (a) Angular dependence of the third harmonic in the perturbative (at \(I_{0}=\) 6 TW cm\({}^{-2}\), orange circles) and in the non-perturbative regime (at \(I_{0}=\) 18 TW cm\({}^{-2}\), grey circles). (b) \(\theta\)-dependence of the observed odd harmonics and the SWIR absorption in a 200 μm thick MgO crystal obtained at a peak intensity of 18 TW cm\({}^{-2}\). The cubic crystal structure of MgO is sketched in the background to indicate the nearest-neighbour directions. The harmonic signals as well as the absorption are normalized and offset vertically for clarity. (c) Amplitudes extracted from the fit function (Eq. 1) for the observed harmonic orders as a function of the SWIR excitation intensity. (d) Average transmission and modulation amplitude of the fundamental pump laser pulse as a function of intensity.
We repeated the same experiments in bulk sapphire samples (150 μm thick) cut along the M-plane (10\(\overline{1}\)0) for which the projection of the hexagonal unit cell onto the M-plane exhibits a structure similar to that of MgO [see Fig. 3(d)]. For an excitation intensity of 18 TW cm\({}^{-2}\) (damage threshold at \(\sim\) 21 TW cm\({}^{-2}\)) we observe an eight-fold symmetry for all detected harmonic orders as well as for the nonlinear absorption of the fundamental SWIR laser pulse [see Fig. 3(a)]. Again, we observe maxima in the angular distribution of the SWIR absorption when the high-harmonic emission is maximized. Even though the orientation angles of the HHG and absorption maxima do not correspond to mono- and diatomic nearest-neighbour directions, we keep the terminology that was introduced in the MgO case. Figure 3(b) shows the modulation amplitudes extracted from Eq. 1 as a function of the SWIR peak intensity. Even at the lowest intensity of \(\sim\) 4 TW cm\({}^{-2}\) the third harmonic signal exhibits an eight-fold symmetry resulting in comparable magnitudes of \(A_{\mathrm{mono}}\) and \(A_{\mathrm{dia}}\) [see blue circles and blue triangles in Fig. 3(b)]. For all observed harmonic orders \(A_{\mathrm{mono}}\) and \(A_{\mathrm{dia}}\) display a similar intensity dependence. We find a transition from a stronger diatomic amplitude for intensities below 12 TW cm\({}^{-2}\) to a stronger monoatomic amplitude, before, at the highest measured intensities, the two amplitudes become equal again. This observation is confirmed by plotting the ratio \(A_{\mathrm{dia}}/A_{\mathrm{mono}}\) in Fig. 3(e). The ratio changes from values \(\geq\) 1 for excitation intensities up to 12 TW cm\({}^{-2}\) to ratios \(\leq\) 1 for intensities between 12 TW cm\({}^{-2}\) and 18 TW cm\({}^{-2}\). This indicates that the emission from real-space trajectories towards monoatomic nearest-neighbours at \(\theta\) = 45\({}^{\circ}\), 135\({}^{\circ}\), 225\({}^{\circ}\) and 315\({}^{\circ}\) becomes more important than the emission from shorter diatomic nearest-neighbour trajectories at \(\theta\) = 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\) for \(I_{0}\geq 12\) TW cm\({}^{-2}\). The transmission results obtained from the sapphire sample display the same characteristics as in the MgO case with the exception that \(A_{\mathrm{mono}}\) is consistently slightly larger than \(A_{\mathrm{dia}}\) [see Fig. 3(c)].
In order to verify the generality of our experimental approach, we furthermore analyzed the angular dependence of the HHG process and the nonlinear absorption in wide-bandgap fluoride crystals. As a prominent example we used LiF crystals (200 μm thickness) with a cubic crystal structure similar to the one of MgO [see inset in Fig. 4(a)]. Apart from the fact that no transition from a four-fold to an eight-fold symmetry was observed for the third harmonic, the results shown in Fig. 4 exhibit characteristic features comparable to those in MgO. Both the observed odd harmonic orders as well as the nonlinear absorption display an eight-fold symmetry in which the maxima of the HHG signal coincide with those of the SWIR absorption [Fig. 4(a)]. The angular contrast of the odd harmonics and of the absorption reaches values comparable to the MgO case [see Fig. 4(b)] and the average transmission shows the characteristic drop from the Fresnel-related losses at low intensity to lower values at higher intensity towards the damage threshold [see Fig. 4(c)].
The experimental results presented in this section provide consistent evidence for a correlation between the emission of high-order harmonics and nonlinear absorption of the SWIR pump laser pulse. In all investigated materials the harmonic emission was strongest at orientation angles where the nonlinear absorption also peaked. We interpret these findings as an indication that ionization, which results from the nonlinear absorption, drives the maximization of the HHG process. In MgO a transition from a four-fold to an eight-fold symmetry was observed for the angular dependence of the third harmonic as the SWIR intensity increased. In Al\({}_{2}\)O\({}_{3}\) all detected harmonic orders exhibited an eight-fold symmetry irrespective of the pump laser intensity. However, a change of the strongest emission angles from \(\theta\) = 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\) (called diatomic nearest-neighbour directions in the MgO case) to \(\theta\) = 45\({}^{\circ}\), 135\({}^{\circ}\), 225\({}^{\circ}\) and 315\({}^{\circ}\) (monoatomic nearest-neighbour directions) was identified.
Figure 3: Experimentally determined HHG and absorption in Al\({}_{2}\)O\({}_{3}\). (a) \(\theta\)-dependence of HHG and SWIR absorption in Al\({}_{2}\)O\({}_{3}\) at an excitation intensity of 18 TW cm\({}^{-2}\). (b) Fitted amplitudes \(A_{\mathrm{mono}}\) and \(A_{\mathrm{dia}}\) as a function of the SWIR intensity. (c) Average transmission and modulation amplitude of the 8-fold symmetric transmission loss. (d) Sketch of the projection of the hexagonal unit cell of an M-cut Al\({}_{2}\)O\({}_{3}\) sample. (e) Ratio of the mono- and diatomic amplitudes extracted from the fit function (Eq. 1) showing the transition from a stronger \(A_{\mathrm{dia}}\) to a stronger \(A_{\mathrm{mono}}\) with increasing intensity, finally settling at \(A_{\mathrm{dia}}\sim A_{\mathrm{mono}}\).
Figure 4: Experimentally determined HHG and absorption in LiF. (a) \(\theta\)-dependence of HHG and SWIR absorption in LiF. (b) Fitted amplitudes \(A_{\mathrm{mono}}\) and \(A_{\mathrm{dia}}\) as a function of the SWIR intensity. (c) Average transmission and modulation amplitude of the 8-fold symmetric transmission loss.
Based on our experimental results we identified an unambiguous link between HHG and laser-induced nonlinear ionization. However, all widely discussed mechanisms for HHG in solids (interband recollision, intraband dynamics and ionization-related mechanisms) require the presence of quasi-free, excited electrons in the conduction bands. Hence, an unambiguous identification of the dominating HHG mechanism solely based on the presented experimental results is out of reach.
## IV Numerical results and discussion
To improve our understanding of the dominant HHG mechanism and of the observed \(\theta\)-dependent features we performed numerical simulations based on the SBEs [36; 37] for the case of MgO. In our simulations we implement the band structure of MgO (taken from Ref. [27], using one valence and one conduction band) to estimate the \(\theta\)-dependence of HHG by projecting the electric field of the SWIR pump laser pulse onto the \(\Gamma-K\) and \(\Gamma-X\) directions. We then calculate the resulting high-harmonic emission due to the interband polarization \(P(\omega)\) and the intraband current \(J(\omega)\) (details of the numerical model can be found in Ref. [38]). The relative contribution of the two competing mechanisms is displayed in Fig. 5(a). Strikingly, the intraband current (orange triangles) dominates the emission of H5, H7 and H9 for SWIR peak intensities \(\geq\) 1 TW cm\({}^{-2}\) while for the third harmonic the emission due to the interband polarization (dark circles) remains stronger. Thus, contrary to previous publications [6; 15; 26; 30; 39] we do not observe a dominant intraband mechanism for all observed below-bandgap harmonics. Instead our simulations indicate a transition from a prevailing intraband mechanism at 0.5 TW cm\({}^{-2}\) to a dominating interband mechanism at a peak intensity of 10 TW cm\({}^{-2}\), while both contributions lead to comparable harmonic yields in between.
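To make the structure of such a calculation concrete, a heavily simplified, self-contained two-band SBE sketch (Python) is shown below. It is not the model used for Fig. 5: the one-dimensional k-grid, tight-binding-like bands, constant dipole, dephasing time and field parameters are all illustrative placeholders, and the valence-band (hole) contribution to the intraband current is omitted.

```python
import numpy as np

# Schematic 1D two-band SBE; all parameters are illustrative placeholders.
a, Eg, T2, d0 = 8.0, 0.29, 220.0, 3.5          # a.u.: lattice const., gap, dephasing, dipole
k = np.linspace(-np.pi / a, np.pi / a, 240, endpoint=False)

gap = lambda kap: Eg + 0.08 * (1.0 - np.cos(kap * a))   # transition energy eps_c - eps_v
v_c = lambda kap: 0.08 * a * np.sin(kap * a)            # conduction-band group velocity

t = np.arange(-1200.0, 1200.0, 0.25)                    # time grid (a.u.)
dt = t[1] - t[0]
E0, w0 = 0.0035, 0.0304                                 # ~1500 nm driver, sub-TW/cm^2
E = E0 * np.cos(w0 * t) * np.cos(np.pi * t / (2 * t.max()))**2
A = -np.cumsum(E) * dt                                  # vector potential

p = np.zeros(k.size, dtype=complex)                     # interband coherence p_k
n = np.zeros(k.size)                                    # conduction-band population n_k
P_inter, J_intra = [], []
for Ei, Ai in zip(E, A):
    kap = ((k + Ai + np.pi / a) % (2 * np.pi / a)) - np.pi / a   # kinematic momentum
    Om = d0 * Ei                                                 # Rabi coupling
    # split step: exact phase/dephasing rotation, then Euler source term
    p = p * np.exp(-(1j * gap(kap) + 1.0 / T2) * dt) + 1j * Om * (1.0 - 2.0 * n) * dt
    n = np.clip(n + 2.0 * Om * np.imag(p) * dt, 0.0, 1.0)
    P_inter.append(2.0 * d0 * np.real(p).sum())         # interband polarization P(t)
    J_intra.append(-(v_c(kap) * n).sum())               # electron intraband current J(t)

# emitted spectra (in harmonic orders) from the two source terms
w = 2 * np.pi * np.fft.rfftfreq(t.size, dt) / w0
S_inter = np.abs(np.fft.rfft(np.gradient(np.gradient(P_inter, dt), dt)))**2
S_intra = np.abs(np.fft.rfft(np.gradient(J_intra, dt)))**2
```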
Figures 5(b)-(e) show the orientation-dependence of H3-H9 for four different SWIR peak intensities (0.5 TW cm\({}^{-2}\), 1 TW cm\({}^{-2}\), 5 TW cm\({}^{-2}\) and 10 TW cm\({}^{-2}\)). The angular distribution of H3 changes from a four-fold symmetry with maxima at \(\theta=\) 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\) (diatomic nearest-neighbour directions) at \(I_{0}=\) 0.5 TW cm\({}^{-2}\) to an eight-fold symmetry at \(I_{0}=\) 1 TW cm\({}^{-2}\). At even higher SWIR peak intensities a four-fold symmetry with maxima at \(\theta=\) 45\({}^{\circ}\), 135\({}^{\circ}\), 225\({}^{\circ}\) and 315\({}^{\circ}\) (monoatomic nearest-neighbour directions) develops. Comparing these observations to Fig. 5(a) indicates that the \(\theta-\)dependence associated with a dominating intraband mechanism (at \(I_{0}=\) 0.5 TW cm\({}^{-2}\), four-fold symmetry with maxima along diatomic nearest-neighbour directions) differs from the angular dependence due to the interband polarization (at \(I_{0}=\) 10 TW cm\({}^{-2}\), four-fold symmetry with maxima along monoatomic nearest-neighbour directions). Thus, the transition from a four-fold to an eight-fold symmetry corresponds to a transition from a purely intraband case to an intermediate situation where the combination of the interband polarization and the intraband current generates an eight-fold symmetry of the total H3 signal. The experimentally observed transition from a four-fold to an eight-fold symmetry of H3 (compare Fig. 2) could hence be interpreted as a result of such a combined emission.
The \(\theta\)-dependence of H5 exhibits an eight-fold symmetry over the full range of SWIR intensities with one exception at \(I_{0}=\) 5 TW cm\({}^{-2}\) where the maxima along the diatomic nearest-neighbour directions break up into two peaks leading to a twelve-fold symmetry that was experimentally not observed. However, as the excitation intensity further increases up to 10 TW cm\({}^{-2}\), an eight-fold symmetry in good agreement with our measurements is observed again. In the intensity range where we could experimentally access the seventh harmonic, the numerical simulations based on the SBEs reproduce the constant eight-fold symmetry of H7 while H9 tends to display a four-fold symmetric angular dependence whose preferred orientation for efficient HHG varies between the diatomic and the monoatomic nearest-neighbour directions [see Fig. 5(e)].
Figure 5: Numerical simulations of HHG in MgO using SBEs. (a) Relative importance of the interband polarization and the intraband current for the total HHG signal obtained by integrating the harmonic yields for \(\theta\in[0,2\pi]\). (b) - (e) Orientation-dependence of the odd harmonics (H3-H9) for various SWIR peak intensities.
Generally, several microscopic physical mechanisms contribute to the two observables [\(P(\omega)\) and \(J(\omega)\)] in the SBE-calculations. While interband recollisions are expected to dominate the above-bandgap response of the interband polarization \(P(\omega)\), high-harmonic generation due to the intraband current is usually associated with quasi-free carriers being accelerated to non-parabolic regions of the energy bands. However, other HHG mechanisms that cannot easily be isolated are naturally included in these two observables. In particular, ionization-related HHG mechanisms are expected to severely contaminate the low-order harmonics [16; 31]. We expect the Brunel mechanism that is traditionally associated with harmonic generation due to photoionization in gases, clusters and solids to contribute to the intraband current as it relies upon carrier motion within the excited state. The injection current, in contrast, is directly connected to the spatial displacement of an electron during the interband excitation itself and is thus expected to contribute to the interband polarization. To investigate a potential contribution of these ionization-related HHG mechanisms that cannot directly be extracted from the SBE calculations we performed further numerical simulations of the angular dependence of the photoionization rate. We have conceived a simple model for the photoionization rate that can support physical insights and provide realistic qualitative estimates of the angular dependence of solid-state HHG. Conventional computations of the \(\theta\)-dependence of the photoionization rate only consider carrier dynamics within distinct energy bands, i.e. transitions between Bloch-states that are based on single-atom wavefunctions localized at the same atom. Here we work in the Wannier-basis [40; 41; 42] which allows us to explicitly resolve transitions from one atomic site to another. Since such nearest-neighbour transitions are not included in the dipole momenta used for the SBE calculations we treat them separately and qualitatively [see Fig. 6(a)]. The photoionization rate (i.e. the rate of transitions from the valence to the conduction band) is approximated by the multiphoton formalism [43] as
\[\Gamma\approx\sum_{s}|\vec{E}\cdot\vec{s}|^{2N}d_{s}^{2N} \tag{2}\]
where the sum is taken over the neighbouring atomic sites, \(\vec{E}\cdot\vec{s}\) is the projection of the electric field on the direction towards the atomic site, and \(d_{s}\) is the transition dipole moment between a state in the valence zone and a state in the conduction zone at atomic site \(s\). The electric field modifies the energy difference between these states and correspondingly the number of photons \(N\) that is needed for a multiphoton transition.
For MgO we calculated the transition dipole moment based on the known energy structure of the isolated atoms, the bandgap value and the interatomic distances. For the MgO crystal the valence-zone electrons are localized at oxygen atoms, while the localized electrons in the conduction zone are predominantly positioned at oxygen atoms and partially at magnesium atoms [44]. For the calculations of the angle-dependent photoionization rate we have taken nearest-neighbour O-O transitions (corresponding to \(\theta=\) 45\({}^{\circ}\), 135\({}^{\circ}\), 225\({}^{\circ}\) and 315\({}^{\circ}\)), second-nearest-neighbour O-O transitions (corresponding to \(\theta=\) 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\)) and nearest-neighbour O-Mg transitions (corresponding to \(\theta=\) 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\)) into account [see Fig. 6(b)-(d)]. We only analyze the \(\theta\)-dependence of the photoionization rate itself since all ionization-related harmonic orders are proportional to this rate and will inherit the same angular distribution. Numerical results for the \(\theta\)-dependence of the ionization rate are presented in Fig. 6(b). At a low SWIR intensity of 2 TW cm\({}^{-2}\) a four-fold symmetry with maxima along the monoatomic nearest-neighbour directions is observed. This corresponds to the same directions for which the interband polarization dominates H3 [see brown line for \(I_{0}=\) 10 TW cm\({}^{-2}\) in Fig. 5(b)]. As the excitation intensity increases, a transition from a four-fold to an eight-fold symmetry is found in the \(\theta\)-dependence of the photoionization rate [see orange line for a peak intensity of 13 TW cm\({}^{-2}\) in Fig. 6(b)]. The additional maxima appearing at \(\theta=\) 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\) and 270\({}^{\circ}\) can be associated with ionization along the diatomic nearest-neighbour directions (excitation from an O ion to a Mg ion or vice versa). Since this is a direct signature of Wannier-transitions from states localized at one ion to states localized at another ion, the SBE results do not include these maxima in the interband polarization [see Fig. 6(a)]. The combination of both numerical approaches indicates that the interband polarization becomes dominant for H3 at sufficiently high intensities (\(\geq\) 10 TW cm\({}^{-2}\)), while the experimentally observed eight-fold symmetry is linked to Wannier-transitions between adjacent atoms. H3 (photon energy: 2.48 eV) is situated well below the bandgap of bulk MgO (\(\sim\)7.8 eV), hence interband recollision can not be responsible for the detected emission. As the injection current is the only mechanism that generates below-bandgap harmonics in a non-perturbative manner and contributes to the interband polarization, we attribute the emission of H3 at SWIR intensities \(\geq\) 10 TW cm\({}^{-2}\) to the injection current.
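The structure of this model is simple enough to make concrete in a few lines. The sketch below (Python) evaluates Eq. (2) for a (100)-cut MgO-like geometry; the channel list (bond angles, dipoles \(d_{s}\) and multiphoton orders \(N\)) contains purely illustrative placeholder numbers, not the values used for Fig. 6:

```python
import numpy as np

# Each channel: (bond angle phi in deg, dipole d_s, multiphoton order N).
# Placeholder values only: the 45 deg family mimics nearest-neighbour O-O,
# the 0 deg families mimic O-Mg and second-nearest O-O transitions.
channels = [(45.0, 1.0, 4), (0.0, 0.8, 4), (0.0, 0.6, 5)]

def ionization_rate(theta_deg, E0=1.0):
    # Eq. (2): Gamma ~ sum_s |E . s|^(2N) * d_s^(2N), summed over the
    # four-fold equivalent directions of each channel in the cubic lattice.
    gamma = 0.0
    for phi, d, N in channels:
        for m in range(4):
            proj = E0 * np.cos(np.deg2rad(theta_deg - phi - 90.0 * m))
            gamma += (abs(proj) * d) ** (2 * N)
    return gamma

theta = np.arange(0.0, 360.0, 1.0)
rate = np.array([ionization_rate(t) for t in theta])
print(theta[rate.argmax()])   # angle of strongest ionization for these inputs
```

With these placeholders, raising the field amplitude \(E0\) boosts the higher-order (\(N=5\)) diatomic channel fastest, so the pattern evolves from a four-fold maximum along the 45\({}^{\circ}\) family toward an eight-fold symmetry, qualitatively mirroring the trend of Fig. 6(b).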
Figure 6: Numerical simulations of HHG in MgO using maximally-localized Wannier-functions. (a) Depiction of electronic excitation at a single atom (as captured by the SBEs), to interatomic space and to neighbouring atoms (Wannier-Jumping). (b) Orientation-dependence of the photoionization rate calculated according to Eq. 2 for different SWIR laser intensities. (c), (d) Angular dependence of the photoionization rate of the various nearest-neighbour and next-nearest-neighbour transitions at a SWIR peak intensity of 0.1 TW cm\({}^{-2}\) (c) and 20 TW cm\({}^{-2}\) (d).
## V Conclusion
In summary, we have presented experimental and numerical results on the angular dependence of HHG in periodic crystals. To correlate the nonlinear frequency conversion process with laser-induced ionization we performed simultaneous transmission measurements that provide information on the ionization yield at different crystallographic orientations. Our investigations focused on bulk MgO crystals with a cubic crystal structure and were complemented by measurements in Al\({}_{2}\)O\({}_{3}\) and LiF. In all materials a distinct correlation between HHG and nonlinear ionization was observed, which manifested itself in a well-defined orientation dependence of both signals. In detail, we observed maxima of the harmonic emission coinciding with maxima of the nonlinear absorption. We interpreted this as an enhanced HHG conversion efficiency at angles where laser-induced ionization is also maximized. A symmetry analysis of the angular dependence of the experimentally observed signals connected the angles of maximum HHG emission and SWIR absorption with monoatomic and diatomic nearest-neighbour directions in the crystal lattice.
To further substantiate our interpretation and investigate the dominant mechanisms responsible for HHG we performed two sets of numerical simulations using MgO as a model system. First, we numerically solved the SBEs using one valence and one conduction band. Our results reproduced the experimentally observed transition from a four-fold to an eight-fold symmetry of H3 at intermediate intensity while at high intensity the interband polarization was found to dominate, resulting in a four-fold symmetry with maxima along the monoatomic nearest-neighbour directions. For H5 and H7 an eight-fold symmetry was predicted for the experimentally used SWIR intensities in agreement with our experimental findings. Within the SBE calculations the transition in the angular distribution was caused by a change from a dominating intraband current at low intensities to a prevailing interband polarization at higher intensities. The observed shift in dominant amplitude from \(A_{\mathrm{dia}}\) to \(A_{\mathrm{mono}}\) in Al\({}_{2}\)O\({}_{3}\) may also be attributed to the same phenomenon.
Second, we numerically investigated the possible influence of ionization-related HHG mechanisms on the experimentally detected harmonic spectra. Our results unveiled a transition from a four-fold to an eight-fold symmetry of the photoionization rate for increasing intensity, where the strongest signal is predicted along the monoatomic nearest-neighbour directions, equivalent to the dominant interband polarization contribution predicted by the SBEs. As Wannier-jumping (i.e. the transition from one atom to a neighbouring atom) is not included in the dipole momenta used in the SBE calculations, the additionally emerging peaks leading to the eight-fold symmetry cannot be seen in the SBE results. Since the injection current is directly associated with interband excitations and thus contributes to \(P(\omega)\), and since the recollision mechanism can be excluded due to the below-bandgap photon energy, we attribute - as in our previous work [19; 25] - the emission of H3 in the strong-field regime to the injection mechanism, i.e. the emission of harmonics due to the spatial displacement of electrons during the excitation process.
## Funding.
Funding by the German Research Foundation - SFB1477 "Light-Matter Interaction at Interfaces," Project No. 441234705, is gratefully acknowledged. The work of S.D.C.R.A. and P.M.K. has been carried out at the Advanced Research Center for Nanolithography (ARCNL), a public-private partnership of the University of Amsterdam (UvA), the Vrije Universiteit Amsterdam (VU), the Dutch Research Council (NWO), and the semiconductor equipment manufacturer ASML, and was partly financed by Toeslag voor Topconsortia voor Kennis en Innovatie (TKI) from the Dutch Ministry of Economic Affairs and Climate Policy. P.M.K. acknowledges support from ERC Starting Grant ANACONDA (grant no. 101041819).
## Disclosures.
The authors declare no conflicts of interest.
## Acknowledgements
It is our pleasant duty to thank M. Ivanov and T. Fennel for discussion of our experimental results and the numerical methods.
## Data Availability.
Data underlying the results presented in this paper are not publicly available at this time, but may be obtained from the authors upon reasonable request.
|
2304.05442 | Performance Study of Partitioned Caches in Asymmetric Multi-Core
Processors | The current workloads and applications are highly diversified, facing
critical challenges such as the Power Wall and the Memory Wall Problem.
Different strategies over the multiple levels of Caches have evolved to
mitigate these problems. Also, to work with such diversified applications, the
Asymmetric Multi-Core Processor (AMP) presents itself as a viable solution. In
this paper, we study the performance of L2 and Last Level Cache for different
cache partitions against various AMP configurations. In addition, this study
investigates the optimal cache partitioning for a collection of Multi-threaded
benchmarks from PARSEC and SPLASH2 benchmark suites under medium-sized inputs.
We have studied the effect of block replacement strategies and their impact on
the key metrics such as total on-chip power consumption and L2 \& LLC Miss
rates. Our study presents an intermediate cache design for AMPs between the two
extremities of fully shared and fully private L2 \& LLC level Cache, which
helps achieve the desired power values and optimal cache miss penalties. | Murali Dadi, Shubhang Pandey, Aparna Behera, T G Venkatesh | 2023-04-11T18:30:06Z | http://arxiv.org/abs/2304.05442v1 | # Performance study of Partitioned Caches in Asymmetric Multi-Core processors
###### Abstract
The current workloads and applications are highly diversified, facing critical challenges such as the Power Wall and the Memory Wall Problem. Different strategies over the multiple levels of Caches have evolved to mitigate these problems. Also, to work with such diversified applications, the Asymmetric Multi-Core Processor (AMP) presents itself as a viable solution. In this paper, we study the performance of L2 and Last Level Cache for different cache partitions against various AMP configurations. In addition, this study investigates the optimal cache partitioning for a collection of Multi-threaded benchmarks from PARSEC and SPLASH2 benchmark suites under medium-sized inputs. We have studied the effect of block replacement strategies and their impact on the key metrics such as total on-chip power consumption and L2 & LLC Miss rates. Our study presents an intermediate cache design for AMPs between the two extremities of fully shared and fully private L2 & LLC level Cache, which helps achieve the desired power values and optimal cache miss penalties.
Asymmetric Multi-Core Processors, L2 cache, Last Level Cache, Cache replacement policy, CPU power
*Corresponding author(s). E-mail(s): [email protected]; [email protected]; Contributing authors: [email protected]; [email protected];
## 1 Introduction
There has been immense progress in recent times in designing high-performance and energy-efficient Asymmetric Multi-Core processors. However, the trade-off between performance and power plays a crucial role in processor design. With the aggressive scaling of IC technology, power density (\(W/cm^{2}\)) has increased due to the growing number of transistors per unit area [1]. In addition, the objective function (better performance or lower power consumption) may change depending on the requirements and the operating conditions of a device. For example, in mobile devices, energy needs to be optimized during idle periods to extend battery life, while performance needs to be prioritized during active periods. LLCs are one of the processor resources that significantly impact system performance and energy usage, so handling LLCs efficiently becomes a significant challenge [1],[2]. Motivated by this, we set out to carry out an extensive performance evaluation of the LLC of AMPs, as detailed below. Note that in the context of this paper, we have three cache levels, and the terms L3 and LLC are used interchangeably.
In this paper, we study the performance aspects of the LLC in Asymmetric Multi-Core architectures. In a Multi-Core architecture, managing the shared LLC is a critical task. We therefore explore the effect of different configurations for the L2 and LLC on critical system metrics such as the L2 miss rate, the L3 miss rate, and the total power consumption of a Multi-Core architecture. Further, we investigate the impact on the power consumption of an Asymmetric Multi-Core processor of using different cores (with different operating frequencies) and different orders of execution (in-order or out-of-order).
The remainder of this paper is organized as follows. Section 2 gives a brief literature survey on the performance evaluation of the LLC. Section 3 presents the study related to the LLC and its corresponding simulation results.
Inferences drawn from these simulation results and the concluding remarks are presented in Section 4.
## 2 Literature Survey
This section reviews the existing literature related to the study of the performance and energy efficiency of the LLC. Cache memories are often employed in microprocessors to increase system performance, and thus these caches have been the subject of numerous studies [2, 3, 4, 5, 6]. The replacement policies of the LLC significantly affect the off-chip miss traffic and power consumption. Peneau et al. have studied how different LLC replacement strategies affect system performance and energy consumption [7]. Asymmetric Multi-Core architectures are in high demand, and the existing replacement policies face significant challenges when implemented in Asymmetric Multi-Core systems. Ramtake et al. have studied the effect of associativity on L1 and L2 caches in a Multi-Core system with respect to the cache hit ratio and IPC (Instructions per Cycle) [8]. Modern VLSI chips integrate larger caches onto the processor, and managing such large cache sizes entails considerable overhead. Wu et al. have proposed a machine learning based management scheme for the shared LLC in chip multiprocessors [9]. Jang et al. [10] have suggested a cache design for larger LLCs, so that good performance is attained even with high granularity. Anandkumar et al. have proposed a new hybrid cache replacement strategy for heterogeneous Multi-Cores that combines the LRU and LFU replacement policies [11]. Heterogeneity in a Multi-Core system can be achieved by changing individual core frequencies, cache sizes, and other cache parameters. Silva et al. have investigated the benefits of having various cache sizes in HMPs (Heterogeneous Multi-Core Processors) and how a scheduling technique can exploit such benefits to reduce the overall miss rate [12]. The LLC in a modern chip-multiprocessor (CMP) is typically shared by all the cores. Processors use the shared caches more frequently; therefore, eviction of shared data causes more cache misses [13]. Thus, to efficiently utilize the shared LLC on a CMP, Sato et al. have proposed cache partitioning to protect shared data by reducing unnecessary evictions [14]. This approach separates shared and private data and uses cache partitioning to give each type of data its own cache space. Several research works have addressed the partitioning of the shared LLC to improve system performance, but they all miss the heterogeneity in the spatial locality of different applications. Gupta et al. [15] showed how leveraging spatial locality allows significantly more effective cache sharing. They highlighted that when a large block size is used, the cache capacity requirements of many memory-intensive applications can be dramatically lowered, allowing more capacity to be given to other workloads. In CMPs (Chip Multi-Processors), a private LLC provides better access latency than shared caches, but more private caches result in replication of shared data, leading to under-utilization of the total net cache capacity and thus decreasing the overall hit ratio. A part of the work performed by Chen et al. on private and shared caches highlights the above-mentioned issues both in terms of performance and energy [16]. To handle this problem, Yuan et al. have proposed a new cache management technique that improves the performance of a CMP using the private LLC [17]. Sibai et al. have discussed issues related to sharing and privatizing second- and third-level cache memories in homogeneous Multi-Core architectures [18].
Although caches can significantly boost system performance, they consume a considerable amount of overall system power. Chakraborty et al. [19] have analyzed the effect of the LLC on chip temperature and proposed a new policy that resizes the on-chip LLC at run time to reduce leakage power consumption.
From the above literature, we observe that the performance of the LLC in Multi-Core processors has been investigated in depth. However, the combined effect of sharing/partitioning both the L2 and the LLC, along with the effect of the replacement policy, needs further study. Moreover, the performance of the LLC in the context of Asymmetric Multi-Core processors has not received much attention. To fill this gap, we have carried out an extensive performance study of the LLC, primarily concentrating on the heterogeneity of the processors.
The unique features of our paper are as follows.
1. While [18] evaluates the performance of shared/private LLC for homogeneous Multi-Core processors, we concentrate on the same problem but for the case of heterogeneous Multi-Core architectures.
2. In most of the papers such as [12], the heterogeneity of the Multi-Core processors is introduced either by varying cache size or by varying block size. A unique feature of our paper is that we have introduced the heterogeneity by varying the core frequency as well as by varying the order of execution (either in-order or out-of-order execution).
3. Another novel feature of our work is that we have extensively studied the performance of different configurations of AMPs that differ in the way the L2 and LLC are partitioned, and arrived at the optimal configuration.
## 3 Performance trade-off in Asymmetric Multi-Core Architectures (AMPs)
In this section, we study the performance trade-off in Asymmetric Multi-Core architectures. Considering both performance and power as primary concerns, we have implemented nine different configurations. In configurations 1 to 5, all cores are powerful cores with out-of-order execution and an operating frequency of \(2.66GHz\). In configurations 6 to 9, we introduce asymmetry by changing the order of execution of the cores (in-order or out-of-order) and the operating frequency of each core (\(1GHz\) or \(2.66GHz\)): cores \(0-7\) are in-order cores running at \(1GHz\), whereas cores \(8-15\) are out-of-order cores running at \(2.66GHz\). When out-of-order and in-order cores share a cache memory, there is a chance that the out-of-order cores may occupy a large portion of the cache, which further degrades the performance of the in-order cores running at a lower frequency. So we gradually partitioned the LLC among the cores from fully shared to fully private. The remaining details of all nine configurations are given in Table 1. The architecture used for our study, along with the corresponding simulation setup, is given below.
### Architecture Studied
We have referred to the Nehalem architecture [20], one of the most successful processor architectures introduced by Intel. Nehalem offers scalable performance from 1 to 16 (or more) threads and from 1 to 8 (or more) cores. It contains scalable and configurable system interconnects and an integrated memory controller. The three-level cache hierarchy of this microarchitecture, shown in Fig. 1, consists of 64KB of L1 cache per core, split into 32KB of data cache and 32KB of instruction cache. Further, it has 256KB of private L2 cache per core for handling data and instructions. Finally, it has a fully inclusive, fully shared LLC of size 8MB that all applications can use in its entirety. Nehalem has a larger out-of-order window and scheduler size, which helps it identify more independent operations that can run in parallel. It also has larger buffers in the core to ensure that they do not limit performance.
| Configuration | L2 Cache Details | L3 Cache Details | Core Frequencies |
| --- | --- | --- | --- |
| Configuration 1 | All 16 cores sharing 2048KB of L2 cache | All 16 cores sharing 8192KB of L3 cache | All 16 cores out-of-order, running at 2.66GHz |
| Configuration 2 | Two sets of 8 cores, each set sharing 1024KB of L2 cache | All cores sharing 8192KB of L3 cache | All cores out-of-order, running at 2.66GHz |
| Configuration 3 | Four sets of 4 cores, each set sharing 512KB of L2 cache | All cores sharing 8192KB of L3 cache | All cores out-of-order, running at 2.66GHz |
| Configuration 4 | Eight sets of 2 cores, each set sharing 256KB of L2 cache | All cores sharing 8192KB of L3 cache | All cores out-of-order, running at 2.66GHz |
| Configuration 5 | Each core with private 128KB L2 cache | All cores sharing 8192KB of L3 cache | All cores out-of-order, running at 2.66GHz |
| Configuration 6 | Each core with private 128KB L2 cache | Two sets of 8 cores, each set sharing 4096KB of L3 cache | Cores 0-7 in-order at 1GHz; cores 8-15 out-of-order at 2.66GHz |
| Configuration 7 | Each core with private 128KB L2 cache | Four sets of 4 cores, each set sharing 2048KB of L3 cache | Cores 0-7 in-order at 1GHz; cores 8-15 out-of-order at 2.66GHz |
| Configuration 8 | Each core with private 128KB L2 cache | Eight sets of 2 cores, each set sharing 1024KB of L3 cache | Cores 0-7 in-order at 1GHz; cores 8-15 out-of-order at 2.66GHz |
| Configuration 9 | Each core with private 128KB L2 cache | Each core with private 512KB L3 cache | Cores 0-7 in-order at 1GHz; cores 8-15 out-of-order at 2.66GHz |

Table 1: L2 and L3 cache details for different configurations
### Simulator and Workloads used
We have used the Sniper v7.3 simulator for our study [21]. It is an accurate, high-speed x86 simulator suitable for exploring different heterogeneous Multi-Core architectures, and it provides fast timing simulations for multi-threaded, multi-program workloads and shared-memory applications. We have used five different workloads in this study from the PARSEC benchmark suite [22] and the SPLASH2 benchmark suite [23]: PARSEC-bodytrack, PARSEC-freqmine, SPLASH2-barnes, PARSEC-fluidanimate, and SPLASH2-radiosity. Further, the McPAT (Multicore Power, Area, and Timing) framework is integrated with Sniper for modeling the power and area aspects of many-core architectures. We have used McPAT v1.0 [24] in this study to obtain the power consumption values of the processor.
### Simulation Results
We have simulated all nine configurations mentioned in Table 1 in the initial stage. We compared these configurations using the L2 miss rate, L3 miss rate, and total power consumption as metrics. The consolidated results are given in Tables 3 and 4, and presented separately in the sub-figures of Fig. 2 (a), (b) and (c) for better understanding. Except for the L2 and L3 cache levels, all remaining parameters are common to all configurations, as mentioned in Table 2.
From Fig. 2(a), we can observe that the L2 miss rate increases steadily as we go from configuration 1 to configuration 5, i.e., moving from fully shared to fully private caches. From configuration 5 to configuration 9, the L2 cache is private to each core, and the miss rate remains almost the same.
From Fig. 2(b), we can observe that as we go from configuration 1 to 5, the L3 miss rate decreases even though the L3 cache is shared in all these cases. If we observe the total number of misses to the L3 cache in configurations 1 to 5 in Table 4, they are almost the same. Due to the increasing miss rate of L2, the number of accesses to the L3 cache increases, reducing the overall L3 miss rate. The L3 cache changes from a fully shared cache to a fully private cache in configurations 5 to 9, increasing the miss rate due to increased data inconsistency (coherence misses).
| Configuration Index | Total L3 Misses / Total L3 Accesses | L3 Miss Rate (%) | Total Power (W) |
| --- | --- | --- | --- |
| Configuration 1 | 83,872 / 1,792,716 | 4.67 | 275.238 |
| Configuration 2 | 87,401 / 2,430,864 | 3.59 | 275.449 |
| Configuration 3 | 87,561 / 3,481,529 | 2.51 | 274.293 |
| Configuration 4 | 82,067 / 5,435,954 | 1.51 | 273.325 |
| Configuration 5 | 85,948 / 17,871,357 | 0.48 | 273.718 |
| Configuration 6 | 169,125 / 17,984,595 | 0.94 | 263.471 |
| Configuration 7 | 309,358 / 18,176,373 | 1.70 | 265.63 |
| Configuration 8 | 527,606 / 18,178,885 | 2.90 | 268.418 |
| Configuration 9 | 1,033,091 / 18,161,730 | 5.688 | 270.978 |

Table 4: L3 cache miss rate and total power for all configurations
| Parameter | Value |
| --- | --- |
| Number of Cores | 16 |
| Block Size | 64 Bytes |
| Cache Coherence Protocol | MESI |
| Replacement Policy | LRU |
| L1 Size | 32KB (I), 32KB (D) |
| L1 Shared Cores | 1 (private to each core) |
| L1 Associativity | 8 (D), 4 (I) |
| L2 Size | 2048KB (Total) |
| L2 Associativity | 8 |
| L3 Size | 8192KB (Total) |
| L3 Associativity | 16 |

Table 2: Configuration details
| Configuration Index | Total L2 Misses / Total L2 Accesses | L2 Miss Rate (%) |
| --- | --- | --- |
| Configuration 1 | 1,775,452 / 32,658,514 | 5.43 |
| Configuration 2 | 2,414,712 / 23,034,264 | 10.48 |
| Configuration 3 | 3,464,822 / 24,173,825 | 14.33 |
| Configuration 4 | 5,413,832 / 20,838,462 | 25.98 |
| Configuration 5 | 17,851,930 / 20,985,015 | 85.06 |
| Configuration 6 | 17,945,030 / 20,985,015 | 85.36 |
| Configuration 7 | 18,105,468 / 21,045,636 | 85.99 |
| Configuration 8 | 18,096,468 / 21,045,663 | 85.99 |
| Configuration 9 | 18,013,786 / 20,914,704 | 86.13 |

Table 3: L2 cache miss rate for all configurations
In addition, when the number of private caches is increased, there is a chance of replicating the same data, which also increases the total miss rate due to inefficient utilization of the net cache capacity.
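To make the metric definitions concrete, the miss rates in Tables 3 and 4 are simply total misses divided by total accesses. A minimal Python check using the Configuration 1 rows (the small differences to the tabulated values are rounding):

```python
# Miss rate (%) = 100 * total misses / total accesses.
# Values taken from the Configuration 1 rows of Tables 3 and 4.
l2_misses, l2_accesses = 1_775_452, 32_658_514
l3_misses, l3_accesses = 83_872, 1_792_716

print(f"L2 miss rate: {100 * l2_misses / l2_accesses:.2f}%")  # ~5.44 (Table 3: 5.43)
print(f"L3 miss rate: {100 * l3_misses / l3_accesses:.2f}%")  # ~4.68 (Table 4: 4.67)
```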
From Fig. 2(c), we observe that in configurations 1 to 5, all cores run at 2.66GHz (out-of-order), so the total power consumed is almost the same in all cases. Configurations 6 to 9, however, have eight cores running at 1GHz (in-order) and the other eight running at 2.66GHz (out-of-order), which yields lower power consumption. So the power consumption is reduced when we move to asymmetric Multi-Cores. However, as we go from configuration 5 to 9, the power consumption increases due to the rising L3 miss rate, which results in power-intensive off-chip main memory accesses.
#### 3.3.1 Configurations 1, 6, and 9 with different replacement policies and workloads
In Section 3.3, we encountered a trade-off between the L3 miss rate and the total power consumption of the Asymmetric Multi-Core architecture. Using configurations 1, 6, and 9, we investigate this trade-off further with three different replacement policies (LRU, MRU, Round Robin) and five different workloads (PARSEC-bodytrack, PARSEC-freqmine, SPLASH2-barnes, PARSEC-fluidanimate, SPLASH2-radiosity).
Figure 2: Performance trade-off in Heterogeneous Multi-Core Architecture
We have examined the L2 miss rate, L3 miss rate, and total power consumption in configurations 1, 6, and 9 with the five workloads mentioned above. The corresponding results for the three replacement policies are shown in Fig. 3 (LRU case), Fig. 4 (MRU case), and Fig. 5 (Round Robin case).
From these figures, we observe that a trade-off exists between the L3 miss rate and the total power consumption, and that configuration 6 is better both in terms of L3 miss rate and power. Independent of the replacement policy and the workload used, as we go from configuration 1 to configuration 9, the total power consumption decreases significantly in all cases, but the L3 miss rate increases. The LRU and Round Robin replacement policies give a better L2 miss rate than the MRU policy. However, we need to weigh L3 misses over L2 misses, because misses in the L3 cache go to main memory, increasing the overall latency and energy consumption.
For the LRU replacement policy, when the configuration is changed from 1 to 6, the L3 miss rate and power consumption are reduced by 11.6% and 9.058%, respectively. Further, when the configuration changes from 6 to 9, the L3 miss rate and power consumption increase by 27.25% and 3.06%, respectively.
Figure 3: Configurations 1, 6, 9 with LRU replacement policy
When the configuration is changed from 1 to 9, the L3 miss rate increases by 13.25%, and power consumption reduces by 6.28%. For the MRU replacement policy, from configuration 1 to configuration 6, the L3 miss rate reduces by 14.2% and power consumption by 11.752%. From configuration 6 to configuration 9, the L3 miss rate and power consumption increase by 32% and 2.75%, respectively. From configuration 1 to configuration 9, the L3 miss rate increases by 20.25% while power consumption is reduced by 9.3%. For the Round Robin replacement policy, from configuration 1 to configuration 6, the L3 miss rate is reduced by 7.6% and power consumption by 10.81%. From configuration 6 to configuration 9, the L3 miss rate increases by 28% and power consumption by 3.668%. From configuration 1 to configuration 9, the L3 miss rate increases by 20.4%, and power consumption is reduced by 7.704%.
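The per-policy figures above can be condensed into the policy-averaged numbers reported later in Table 5. A short sketch, assuming the consolidated values are simple arithmetic means over the three replacement policies:

```python
# Per-policy changes quoted above, ordered as (LRU, MRU, Round Robin).
# Negative values denote a decrease.
l3_1_to_6 = (-11.6, -14.2, -7.6)        # L3 miss rate change (%)
pw_1_to_6 = (-9.058, -11.752, -10.81)   # power change (%)
l3_6_to_9 = (27.25, 32.0, 28.0)
pw_6_to_9 = (3.06, 2.75, 3.668)

mean = lambda xs: sum(xs) / len(xs)
print(f"1 -> 6: L3 {mean(l3_1_to_6):+.2f}%, power {mean(pw_1_to_6):+.2f}%")
print(f"6 -> 9: L3 {mean(l3_6_to_9):+.2f}%, power {mean(pw_6_to_9):+.2f}%")
# Output is roughly -11.1%/-10.54% and +29.08%/+3.15%, in line with Table 5.
```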
## 4 Inference and Conclusions
This paper has presented a performance study of the shared LLC, exploiting heterogeneity in Asymmetric Multi-Core architectures. We started our investigation by understanding the metrics that significantly affect the L2 and LL caches, varying parameters such as cache size and associativity against different replacement strategies.
Figure 4: Configurations 1, 6, 9 with MRU replacement policy
We realized that the replacement policy has a significant impact on overall cache performance. From the first part of our study, we observe that an increase in cache size and degree of associativity improves the hit rate of a cache. However, the improvement in hit rate is not the same in all cases. Hence, the cache hit rate depends not only on parameters such as cache size, associativity, and block size, but also on the replacement policy used and the memory access patterns of the given workload.
In the second part of our work, we studied the cache hierarchy and simulated the AMP architecture. We introduced heterogeneity into our 16-core AMP by varying the individual core operating frequency and the mode of execution, that is, whether a core is in-order or out-of-order.
| Configuration | L3 Miss Rate | Power Consumption |
| --- | --- | --- |
| Configuration 1 to 6 | 11.3% (Decrease) | 10.54% (Decrease) |
| Configuration 6 to 9 | 29.08% (Increase) | 3.154% (Increase) |
| Configuration 1 to 9 | 17.9% (Increase) | 7.761% (Decrease) |

Table 5: Comparison results for all configurations
Figure 5: Configurations 1, 6, 9 with Round Robin replacement policy
It is crucial to mention that throughout our investigation of AMPs and their cache performance, DVFS (Dynamic Voltage and Frequency Scaling) is kept enabled. Consequently, the power variations in our results must not be considered in absolute terms; instead, they should be regarded as relative to each architecture. We begin our simulations with all cores operating at the same frequency, sharing both the L2 and LL caches, and executing out-of-order. Then we gradually partition the caches until they are private to each core and make some cores simpler, lowering their frequency and making them in-order to save power. Our investigation showed that configuration 6 is the optimal choice, as it saves power compared to configuration 1 and has the best LL miss rate, which matters because off-chip memory accesses usually incur higher latency than on-chip memory accesses.
We tested our architectures on multi-threaded PARSEC and SPLASH2 benchmarks to gain a real-world understanding. From Section 3.3.1, we observed that on average, from configuration 1 to configuration 9, there is a 17.9% increase in L3 miss rate and a 7.761% decrease in power consumption. From configuration 1 to configuration 6, the L3 miss rate and power consumption reduce by 11.14% and 10.54%, respectively. In contrast, from configuration 6 to configuration 9, the L3 miss rate increases by 29.08% and power consumption by 3.154%. Thus, configuration 6 gives the best balance of performance and power. The above-mentioned comparison results for all configurations are listed in Table 5.
An immediate next step to our work could be to increase the number of cores, to introduce more levels of heterogeneity in the system, such as flexible cache designs and varying reorder buffer sizes, and to propose further optimizations.
**Funding:** This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
**Informed Consent:** Not Applicable to this article.
**Conflict of Interest:** On behalf of all authors, the corresponding author states that there is no conflict of interest.
**Data Availability Statement:** All data generated or analysed during the study are included in this article.
**Author Contribution:**
Murali Dadi: Conception and design of the study, Analysis and/or interpretation of data, Writing - original draft, Writing - review & editing.
Shubhang Pandey: Conception and design of study, Analysis and/or interpretation of data, Writing - original draft, Writing - review & editing.
Aparna Behera: Conception and design of study, Analysis and/or interpretation of data, Writing - original draft, Writing - review & editing.
TG Venkatesh: Conception and design of study, Analysis and/or interpretation of data, Writing - original draft, Writing - review & editing.
|
2306.14336 | Real-time Seismic Intensity Prediction using Self-supervised Contrastive
GNN for Earthquake Early Warning | Seismic intensity prediction from early or initial seismic waves received by
a few seismic stations can enhance Earthquake Early Warning (EEW) systems,
particularly in ground motion-based approaches like PLUM. While many
operational EEW systems currently utilize point-source-based models that
estimate the warning area based on magnitude and distance measures, direct
intensity prediction offers a potential improvement in accuracy and
reliability. In this paper, we propose a novel deep learning approach, Seismic
Contrastive Graph Neural Network (SC-GNN), for highly accurate seismic
intensity prediction using a small portion of initial seismic waveforms from a
few seismic stations. The SC-GNN consists of two key components: (i) a graph
neural network (GNN) to propagate spatiotemporal information through a
graph-like structure representing seismic station distribution and wave
propagation, and (ii) a self-supervised contrastive learning component to train
the network with larger time windows and enable predictions using shorter
initial waveforms. The efficacy of our approach is demonstrated through
experiments on three real-world seismic datasets, where it shows superior
performance over existing techniques, including a significant reduction in mean
squared error (MSE) and the lowest standard deviation of error, indicating its
robustness, reliability, and strong positive relationship between predicted and
actual values. Notably, the SC-GNN model maintains superior performance even
with 5s input waveforms, making it especially suitable for enhancing EEW
applications. | Rafid Umayer Murshed, Kazi Noshin, Md. Anu Zakaria, Md. Forkan Uddin, A. F. M. Saiful Amin, Mohammed Eunus Ali | 2023-06-25T20:42:11Z | http://arxiv.org/abs/2306.14336v4 | Real-time Seismic Intensity Prediction using Self-supervised Contrastive GNN for Earthquake Early Warning
###### Abstract
Seismic intensity prediction in a geographical area from _early or initial_ seismic waves received by a few seismic stations is a critical component of an effective Earthquake Early Warning (EEW) system. State-of-the-art deep learning-based techniques for this task suffer from limited accuracy in the prediction and, more importantly, require input waveforms of a large time window from a handful number of seismic stations, which is not practical for EEW systems. To overcome the above limitations, in this paper, we propose a novel deep learning approach, Seismic Contrastive Graph Neural Network (SC-GNN) for highly accurate seismic intensity prediction using a small portion of initial seismic waveforms received by a few seismic stations. The SC-GNN comprises two key components: (i) a graph neural network (GNN) to propagate spatiotemporal information through the nodes of a graph-like structure of seismic station distribution and wave propagation, and (ii) a self-supervised contrastive learning component to train the model with larger time windows and make predictions using shorter initial waveforms. The efficacy of our proposed model is thoroughly evaluated through experiments on three real-world seismic datasets, showing superior performance over existing state-of-the-art techniques. In particular, the SC-GNN model demonstrates a substantial reduction in mean squared error (MSE) and the lowest standard deviation of the error, indicating its robustness, reliability, and a strong positive relationship between predicted and actual values. More importantly, the model maintains superior performance even with 5s input waveforms, making it particularly efficient for EEW systems.
Earthquake Early Warning (EEW), Seismic Intensity Prediction, Contrastive Learning, Graph Neural Network.
## I Introduction
Earthquakes pose a significant threat to life and property, a reality underscored by the recent M7.8 Turkey-Syria earthquake that struck on February 6, 2023. This seismic event devastated the region, causing a loss of over fifty thousand lives and inflicting damage valued in tens of billions of US dollars to the economy, social infrastructure, and precious historical sites [1]. Such catastrophic consequences highlight the critical necessity for an effective Earthquake Early Warning (EEW) system [2]. EEW is a network of seismic sensors set up to detect the initial, less harmful seismic waves (P-waves) from an earthquake, thereby providing seconds to minutes of advance warning before the arrival of the more destructive waves (S and surface waves). An integral component of an EEW system is seismic intensity prediction, which aims to forecast an earthquake's potential strength and destructive potential at different geographical points [3]. This key predictive element plays a crucial role in providing actionable information, thereby enabling immediate safety measures to mitigate the impacts of seismic events on human life and infrastructure.
The history of EEW systems and seismic intensity prediction dates back several decades. Early systems were primarily based on detecting initial seismic waves, such as P-waves, to provide a short lead time for warning recipients [4]. Over time, advancements in seismic monitoring networks and computational techniques have led to the development of more sophisticated methods for predicting seismic intensity, including the use of physics-based models, empirical relationships, and machine learning algorithms (e.g., [5, 6, 7]).
Traditional methods for seismic prediction tasks primarily rely on physics-based models and classical machine-learning techniques. Ground motion prediction equations (GMPEs) are widely used to estimate ground motion parameters, including peak ground acceleration (PGA) and spectral acceleration (SA) [8]. These equations take into account the source, path, and site parameters and are typically derived from empirical analyses of strong motion data. Other classical machine learning techniques, such as support vector machines (SVMs) [9] and artificial neural networks (ANNs) [6, 10], have been employed to estimate ground motion parameters or predict the occurrence of seismic events.
Deep learning has gained popularity in various domains in recent years due to its ability to learn complex representations from large datasets [11, 12, 13]. Convolutional neural networks (CNNs) have been applied to earthquake detection [14], earthquake magnitude estimation [7], and location and origin time estimation [15, 16]. Recurrent neural networks (RNNs), particularly long short-term memory (LSTM) networks, have been used for seismic event detection [17] and earthquake aftershock prediction [18]. Additionally, convolutional recurrent neural networks have been proposed to detect earthquakes [19] and to predict P-wave arrival times [20].
Transfer learning has also been used to predict the ground shaking of an area due to an earthquake [21]. Generative Adversarial Networks (GANs) have been used to reduce false triggers from local noise [22]. Moreover, deep learning techniques have been employed in EEW systems and seismic intensity prediction tasks, demonstrating promising results in terms of accuracy and speed. In [23], a CNN-based technique has been used to predict ground motion intensity due to earthquakes using a 10 s window from the earthquake origin time, without requiring prior knowledge of earthquake sources. A graph convolutional network (GCN) based approach for multivariate time-series regression is proposed in [24]. This approach, named TISER-GCN, is tested on two seismic datasets for intensity prediction and demonstrates promising results, with an average MSE reduction of 16.3% compared to [23].
Existing ground motion estimation and seismic intensity prediction algorithms possess several limitations and shortcomings that can impact their effectiveness. These systems often face challenges in accurately estimating earthquake location, depth, and magnitude in real-time, leading to imprecise intensity predictions [4]. For instance, traditional methods, such as physics-based modelling and classical machine learning techniques, may suffer from high computational complexity and limited accuracy due to the intricate nature of seismic data [3]. Although deep-learning-based algorithms have advanced the state-of-the-art, they have yet to achieve the desired level of accuracy and often require long time windows for prediction [21, 24, 23], which makes them impractical for an EEW system. This is primarily because existing deep-learning (DL) models may not fully leverage the potential of advanced architectures and contemporary learning algorithms, often leading to sub-optimal performance in handling complex seismic data marked by temporal dependencies and high dimensionality. It is crucial to understand that even a seemingly small inaccuracy in the range of 0.5-1.0 on the intensity scale can have significant consequences. As the damage caused by an earthquake increases by a factor of 10 for each 1.0-point rise in intensity scale, precise predictions are vital for effective earthquake preparedness and response efforts. Moreover, every second saved in early warning generation can increase the alerted area by a few kilometres, protecting many more lives and properties [25].
Early prediction involves leveraging the initial seismic waveforms received by proximal seismic stations, extracting critical earthquake information from these, and using it to forecast the seismic intensities at an array of stations spanning the affected region. Building upon this concept and the limitations of existing EEW systems and seismic intensity prediction algorithms, we develop an innovative approach that leverages a relatively small portion of initial seismic waveforms at various seismic stations across a sparse geographical region. We aim to accurately predict seismic intensity at these stations and others in the surrounding area where the seismic waves have not yet arrived [26]. Recognizing the graph-like structure of seismic station distribution and wave propagation, we employ a graph neural network (GNN) as the foundation of our approach [27, 28]. The unique power of GNNs lies in their ability to propagate information through the nodes of a graph, allowing us to predict seismic intensity at distant stations by exploiting a fraction of the information gathered at the early-receiving stations. In essence, GNNs enable us to make globally informed predictions with locally available data.
Fig. 1: Seismic intensity prediction with SC-GNN
Most importantly, to address the need for shorter time-window predictions, we incorporate self-supervised contrastive learning, enabling the model to be trained using larger time windows while making predictions using shorter ones [29, 30]. This integration of contrastive learning and specialized GNN layers results in an accurate and efficient approach that requires significantly shorter time windows for prediction. Moreover, the inherent self-supervised nature of this contrastive learning approach eliminates the necessity for exhaustive labelling of the input data. We aptly name our proposed model the Seismic Contrastive GNN (SC-GNN).
The efficacy of our proposed method is thoroughly demonstrated through a comprehensive series of experiments conducted using three real-world seismic datasets. Experimental results substantiate that our approach consistently surpasses the performance of state-of-the-art techniques across a broad spectrum of evaluation metrics. In particular, on our principal dataset, our SC-GNN model demonstrates substantial improvement with a mean squared error (MSE) of 0.4172, reflecting an approximately 234% enhancement over the best-performing state-of-the-art GCN model. Additionally, our model maintains the lowest standard deviation of error of 0.61 and attains the highest correlation coefficient, indicating robustness, reliability, and a strong positive relationship between predicted and actual values. As the input time window diminishes, our model's performance remains consistently superior to the baseline models, underlining its capability to handle variable input sizes efficiently. The main contributions of this paper are threefold:
1. We propose a contrastive learning-based deep learning framework for near real-time seismic intensity prediction that facilitates highly accurate seismic intensity prediction from much shorter input waveforms than competing methods. Numerical results demonstrate that our model achieves superior performance (143% improvement) with an input window length of 5s compared to the 10s window input of the other baseline models. Also, the self-supervised nature of the proposed framework eliminates the need for extensive labelled data during the contrastive training phase.
2. We adopt a cutting-edge GNN architecture comprising highly sophisticated graph convolutional and attention layers that capture the complex spatial relationships between seismic stations. This allows us to effectively model the propagation of seismic waves received from a geographically sparse set of seismic stations.
3. We present the SC-GNN model's exceptional performance compared to baseline models across all standard metrics. The superior performance is reflected in a substantially reduced Mean Squared Error (MSE) of 0.41 on the primary dataset, a lower standard deviation of the error indicating higher reliability, and a high correlation coefficient of around 84%, suggesting a robust match between predicted and actual values. Furthermore, the SC-GNN model promises remarkable utility in earthquake early warning (EEW) systems, where approximately 70% of the locations potentially receive a warning time of more than 10 seconds, sufficient for taking various precautionary measures.
The remainder of this paper is organized as follows. Section II formulates the problem. Section III provides a brief background on graph neural networks and contrastive learning. Section IV describes the proposed method, including the GNN architecture and the self-supervised contrastive learning framework. Section V presents the experimental setup, results, and comparisons with existing approaches. Finally, Section VI concludes the paper and discusses potential future directions.
## II Problem Formulation
One of the key challenges of earthquake early warning (EEW) systems is to predict the seismic intensity in surrounding geographical locations at the earliest possible time. As seismic stations are placed in geographically sparse locations in a region of interest, initial seismic waves may only be detected by the few seismic stations located nearby to the earthquake's origin. Thus, given the few seismic stations have detected the initial seismic waves, our goal in this paper is to accurately predict the seismic intensity for all the points of interest, including the distant seismic stations within a geographical region of our interest.
Let \(\mathcal{N}=\{n_{1},n_{2},\ldots,n_{N}\}\) be the set of \(N=|\mathcal{N}|\) stations (seismic stations and points of interest). Also, let \(\mathcal{N}^{\prime}\subset\mathcal{N}\), with \(|\mathcal{N}^{\prime}|\ll|\mathcal{N}|\), be the set of stations where initial seismic waves have been recorded, and let \(w_{i}(t)\) be the seismic wave of station \(n_{i}\in\mathcal{N}^{\prime}\) at time \(t\). Our goal is to learn a function \(F\) that can predict the seismic intensities \(I_{1},I_{2},\ldots,I_{N}\) of stations \(n_{1},n_{2},\ldots,n_{N}\), respectively, given the initial seismic waves of the stations in \(\mathcal{N}^{\prime}\). Mathematically, this can be formulated as \(F:(w_{1}(t),w_{2}(t),\ldots,w_{|\mathcal{N}^{\prime}|}(t))\mapsto(I_{1},I_{2},\ldots,I_{N})\).
Figure 2 shows a prototypical example of our problem setting. In the left figure, the epicentre of the earthquake is shown using a starlike symbol, and seven stations, \(n_{1},n_{2},...n_{7}\) are shown using circles, where two stations, \(n_{1}\) and \(n_{2}\) marked as blue, have received initial seismic waves at time \(t\), and the remaining five stations are yet to receive any signal. Now, we are to learn a function, \(F\), that can predict the seismic intensity of all seven stations, as depicted in the right figure.
## III Background on Contrastive Learning and GNN
This section provides a brief background on the key concepts of contrastive learning and graph neural networks (GNNs), which form the basis of our proposed method for predicting seismic intensity.
### _Contrastive Learning_
Contrastive learning is a self-supervised learning approach that aims to learn useful representations from unlabeled data by solving a pretext task [31]. The core idea behind contrastive learning is to encourage similarity between representations of similar or related instances while maximizing the dissimilarity between representations of dissimilar or unrelated instances. This is achieved by designing a contrastive loss function that minimizes the distance between positive pairs (related instances) and maximizes the distance between negative pairs (unrelated instances) in a latent representation space [32].
Contrastive learning begins with augmentation to create two views of a batch of input samples. Two views originating from the same sample are regarded as a positive pair, while views from distinct samples form negative pairs. An encoder network encodes the augmented samples, and a projection network maps them to a feature space, where a well-designed contrastive loss is applied. The contrastive loss pulls positive samples together and pushes negative samples apart, thereby clustering related instances and dispersing unrelated ones [33].
### _Graph Neural Networks (GNNs)_
Graph neural networks (GNNs) are a class of deep learning models designed specifically for learning from graph-structured data [34, 35]. GNNs can effectively capture the complex relationships between nodes in a graph by iteratively aggregating and updating the node features through message-passing mechanisms among the layers [36]. GNNs can handle irregular and non-Euclidean data structures, making them well-suited for a wide range of applications, such as social network analysis [37], molecular modelling [38], and geospatial analysis [39].
The adjacency matrix is an essential element of GNNs, representing the graph's structure. The adjacency matrix, typically denoted as \(\mathbf{A}\), is a square matrix where each element \(A_{ij}\) indicates the presence (often with a 1) or absence (with a 0) of an edge between nodes \(i\) and \(j\). This matrix is crucial for GNNs as it provides the necessary information about the interconnections between nodes in the graph, allowing the network to understand and learn from the graph's topological features. In some GNN variants, the adjacency matrix might be further enriched with edge weights or additional features that represent specific characteristics of connections between nodes.
Over time, GNNs have evolved into two primary categories: Spectral methods and Spatial methods. Spectral methods primarily concentrate on the eigenvalues and eigenvectors of a graph, whereas Spatial methods prioritize the graph's connectivity [40].
A GNN is driven by a transfer function that generates the state vector of each node from its own features and its neighbourhood information. The state \(h_{u}\) of node \(u\) is expressed as follows:
\[h_{u}=f\left(x_{u},\,x^{e}_{u,\,ne[u]},\,h_{ne[u]},\,x_{ne[u]}\right) \tag{1}\]
where \(x_{u}\) denotes the features of node \(u\), \(x^{e}_{u,\,ne[u]}\) denotes the features of the edges connecting to \(u\), \(h_{ne[u]}\) denotes the states of the neighbouring nodes of \(u\), and \(x_{ne[u]}\) denotes the features of the neighbouring nodes of \(u\). The output function is expressed as follows:
\[o_{u}=g\left(h_{u},x_{u}\right) \tag{2}\]
The local transfer functions and the local output functions are applied to every node in the graph. By repeatedly iterating this process, the GNN gradually converges to a stable state [41]. By capturing the spatial relationships between seismic stations and the seismic wave propagation patterns, GNNs have demonstrated their efficacy in the field of seismic intensity prediction [24]. This utilization of GNNs results in enhanced accuracy and the ability to generalize well to unseen data.
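As a concrete illustration of Eqs. (1) and (2), the following toy numpy sketch runs the synchronous fixed-point iteration on a three-node graph; the functions \(f\) and \(g\) and all weights here are illustrative stand-ins, not learned components:

```python
import numpy as np

# Toy synchronous fixed-point iteration of Eqs. (1)-(2) on a 3-node graph.
# Edge features are omitted for brevity.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])                 # adjacency of the toy graph
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                  # node features x_u
W_x = rng.normal(size=(4, 8))
W_h = 0.4 * np.eye(8)                        # small weights aid convergence

h = np.zeros((3, 8))                         # initial node states h_u
for _ in range(50):
    # Eq. (1): update each state from its own features and the aggregated
    # states of its neighbours.
    h = np.tanh(X @ W_x + (A @ h) @ W_h)

o = np.concatenate([h, X], axis=1)           # Eq. (2): o_u = g(h_u, x_u)
print(o.shape)                               # -> (3, 12)
```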
## IV Proposed SC-GNN Methodology
In this section, we propose a novel architecture, namely the Seismic Contrastive Graph Neural Network (SC-GNN) model, that seamlessly integrates contrastive learning [31] with graph neural networks (GNNs) [35] for _fast_ and _accurate_ seismic intensity prediction. We use a GNN to effectively capture spatiotemporal features from seismic waveform data recorded at different stations spread across a geographical region. However, as we observe, GNNs alone are not sufficient for early prediction of seismic intensity due to their reliance on extended time-window waveforms for feature extraction. Therefore, this paper introduces a contrastive learning-based approach to learning unique seismic embeddings for each earthquake event, even from shortened time-window waveform inputs, which ultimately helps us achieve fast and accurate seismic intensity prediction.
Next, we discuss our key ideas of using a contrastive learning-based approach for early seismic prediction (Section IV-A).
Fig. 2: Seismic intensity prediction at seismic stations and points-of-interest.
Then we give an overview of our proposed deep learning architecture, SC-GNN, followed by the details of each component (Section IV-B). Following the SC-GNN architecture, we explain the training process and optimization in Section IV-C. Finally, we discuss how the trained model can be utilized to generate reliable seismic intensity predictions in real-time (Section IV-C2).
### _The Key Ideas of Contrastive Learning in Early Earthquake Prediction_
In this discourse, we present the key ideas of integrating contrastive learning with graph neural networks (GNNs) to address the challenge of early warning generation for seismic intensity prediction at distant locations.
Our contrastive approach primarily aims to learn unique seismic embeddings for each earthquake event, even from shortened time-window inputs. This approach is driven by the fundamental intuition of ensuring similarity in embeddings generated from differing input window lengths of the same earthquake event while maintaining distinctness in embeddings for separate events. Generating analogous embeddings from full and their corresponding truncated time windows implies that the model is capable of discerning the same features from reduced time windows as it does from extended ones. This proficiency is paramount for early warning systems, where precise intensity prediction from the shortest possible time window is the core of the task. Consequently, the sooner the model can make an accurate prediction, the earlier a warning can be issued, enhancing the efficacy and utility of the system.
Figure 3 visually illustrates the above concept. An original seismic waveform from earthquake \(E_{1}\) is represented alongside a truncated, zero-padded augmentation, reflecting a shortened variant of the waveform. A distinct waveform is also depicted for event \(E_{2}\). Our contrastive learning framework operates by promoting similarities between the original and augmented \(E_{1}\) samples' embeddings while distinguishing the \(E_{1}\) and \(E_{2}\) samples' embeddings. This approach facilitates learning unique seismic embeddings by drawing parallels with the concept of positive and negative samples in a typical contrastive learning framework. Furthermore, as earthquake events do not require any pre-labelling for this task, our approach can be considered a self-supervised learning technique. This training structure enables our model to predict seismic intensities from significantly shorter time windows, empowering the model to deliver distinct intensity predictions for each individual earthquake.
### _Architectural Overview of SC-GNN_
The Seismic Contrastive Graph Neural Network (SC-GNN) architecture introduces an innovative approach to seismic intensity prediction. In the following, we elucidate the distinct functional blocks within the architecture and the purpose and operation of each layer within these blocks. A schematic representation of SC-GNN in terms of its functional blocks is shown in Fig. 4.
#### IV-B1 Input Block
The first point of interaction with the model begins with the input block. The SC-GNN model accepts 3-component seismic waveforms and an adjacency matrix representing the relationships between different seismic stations. The adjacency matrix is crucial in conveying spatial correlations between stations to the subsequent GNN block. The inputs are first normalized using a TimeDistributed Batch Normalization [42] layer to ensure optimal input scale for the network. We denote the 3-component waveform input of the i-th station by \(w_{i}(t)\).
**Adjacency Matrix Preparation**: To adequately represent the graph structure of the seismic network, we prepare the adjacency matrix in a way that considers the mutual distances between stations, carefully considering the distance reciprocity and selective thresholding. Here, we outline the steps involved in preparing the adjacency matrix.
In the first step, we begin with an initial adjacency matrix which contains the pairwise distances between the seismic stations. However, considering our objective of inferring the intensity at distant stations, it is more intuitive to model the connections between the stations as a function of proximity rather than the actual distance.
Fig. 3: Integration of contrastive learning in SC-GNN.
Hence, the entries of the adjacency matrix are inverted, thereby encoding the notion of reciprocal distance into the adjacency matrix. This results in closer stations having higher edge weights than distant ones.
To ensure that the diagonal entries (representing self-loops) do not introduce bias into the network, they are initially filled with a large number, essentially representing an 'infinite' distance. After taking the reciprocal, these diagonal entries become infinitesimally small. To rectify this, the diagonal elements are replaced with the maximum value in the transformed adjacency matrix.
In the next step, the entire adjacency matrix is normalized by the maximum value, thereby ensuring that all edge weights fall within the range [0,1].
The final stage of preparing the adjacency matrix involves thresholding. We calculate the adjacency matrix's 75th percentile (the threshold for the top 25% of values). All edge weights falling below this threshold are set to zero. This is done to sparsify the adjacency matrix, effectively retaining only the strongest connections in the graph and making the GNN computation more efficient. The threshold of the top 25% has been determined through extensive empirical study. Algorithm 1 succinctly presents this entire adjacency matrix preparation process.
This methodical process of adjacency matrix preparation enables the efficient representation of the seismic network, thereby ensuring that our GNN model accurately captures the crucial characteristics necessary for successful seismic intensity prediction.
```
1:Load matrix containing mutual inter-station distances in Km
2:Fill the diagonal of the matrix with a large value
3:Compute reciprocal of matrix elements, scale by minimum value in the matrix
4:Replace diagonal elements with the maximum value in the matrix
5:Normalize the matrix by its maximum value
6:Compute the threshold as the 75th percentile of matrix elements
7:Set all elements below the threshold to zero
```
**Algorithm 1** Preparation of the adjacency matrix
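For concreteness, a direct numpy transcription of Algorithm 1 might look as follows. This is a sketch: the exact scaling in step 3 is one plausible reading, and the example distances are made up.

```python
import numpy as np

def prepare_adjacency(dist_km: np.ndarray, percentile: float = 75.0) -> np.ndarray:
    """Builds the thresholded reciprocal-distance adjacency of Algorithm 1."""
    A = dist_km.astype(float).copy()
    np.fill_diagonal(A, 1e9)                  # 'infinite' self-distance
    A = A.min() / A                           # reciprocal, scaled by the minimum
    np.fill_diagonal(A, A.max())              # strongest weight on self-loops
    A = A / A.max()                           # normalise to [0, 1]
    threshold = np.percentile(A, percentile)  # keep only the top 25% of weights
    A[A < threshold] = 0.0
    return A

# Example with four stations and symmetric pairwise distances in km.
d = np.array([[0, 10, 40, 80],
              [10, 0, 35, 70],
              [40, 35, 0, 25],
              [80, 70, 25, 0]], dtype=float)
print(prepare_adjacency(d))
```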
#### Iii-B2 1D CNN Feature Extractor
The normalized seismic waveforms are then processed by a series of TimeDistributed 1D Convolutional (Conv1D) and MaxPooling layers. These layers, equipped with _swish_ activation functions [43], capture and highlight the essential characteristics in the waveform data. Dropout layers are added to promote model robustness and mitigate overfitting, providing a form of regularization. Following this, the extracted features undergo further refinement via multiple Conv1D layers with varying kernel sizes and filters. Additional Dropout and Batch Normalization layers are interspersed for enhanced regularization and normalization of the feature vectors. Finally, the extracted features are flattened and passed through multiple dense layers to reduce the length of the feature vectors. Mathematically, we can represent the operation of the 1D CNN feature extractor by
\[F_{i}=f_{CNN}(w_{i}(t));\quad\forall i\in\{1,2,\ldots,N\}, \tag{3}\]
where \(F_{i}\) denotes the extracted features for the seismic waveforms from the \(i\)-th station and \(f_{CNN}(.)\) indicates the feature extraction operation performed by the 1D CNN block of SC-GNN.
In terms of explicit construction, this 1D CNN feature extractor consists of three Conv1D blocks, one after another. Each block consists of two Conv1D layers, followed by a Batchnorm, a max-pooling and a drop-out layer. As we traverse deeper into the architecture, the number of filters in the Conv1D layers progressively increases while kernel sizes decrease. The initial Conv1D layer applies 16 filters with a kernel size of 50, whereas the deepest layer uses 96 filters with a kernel size of 4. Moreover, Max Pooling layers have pool sizes that progressively increase (4, 6, and 12).
Fig. 4: Functional Blocks of SC-GNN
The first convolutional block applies zero padding to prevent excessive feature dimension reduction, while the last two blocks avoid it to limit noise insertion. After the Conv1D blocks, the output is flattened and processed through two dense layers of 256 neurons each, reducing the dimensionality for the Graph Neural Network (GNN) block, where they serve as node features. A Dropout layer is included post-dense layers to counteract overfitting [44].
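A Keras sketch of this extractor is given below. The first/last filter counts (16, 96), kernel sizes (50, 4), pool sizes (4, 6, 12), padding scheme, swish activations, and the two 256-unit dense layers follow the description above; the intermediate filter/kernel values, dropout rates, and the station count and window length are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, kernels, pool, padding, drop=0.2):
    # Two Conv1D layers with swish activations, then BN, pooling and dropout,
    # applied per station via TimeDistributed wrappers.
    for f, k in zip(filters, kernels):
        x = layers.TimeDistributed(
            layers.Conv1D(f, k, padding=padding, activation="swish"))(x)
    x = layers.TimeDistributed(layers.BatchNormalization())(x)
    x = layers.TimeDistributed(layers.MaxPooling1D(pool))(x)
    x = layers.TimeDistributed(layers.Dropout(drop))(x)
    return x

# Input: N stations, T samples, 3 components per waveform (assumed sizes).
N, T = 39, 1000
inp = layers.Input(shape=(N, T, 3))
x = layers.TimeDistributed(layers.BatchNormalization())(inp)

# Filter/kernel progression from 16 filters @ kernel 50 up to 96 @ kernel 4;
# the intermediate values below are assumptions.
x = conv_block(x, (16, 32), (50, 25), pool=4, padding="same")
x = conv_block(x, (48, 64), (12, 8), pool=6, padding="valid")
x = conv_block(x, (80, 96), (6, 4), pool=12, padding="valid")

x = layers.TimeDistributed(layers.Flatten())(x)
x = layers.TimeDistributed(layers.Dense(256, activation="swish"))(x)
x = layers.TimeDistributed(layers.Dense(256, activation="swish"))(x)
x = layers.TimeDistributed(layers.Dropout(0.3))(x)   # node features F_i
extractor = tf.keras.Model(inp, x)
extractor.summary()
```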
#### IV-B3 Graph Neural Network Block
After the feature extraction, the data advances to the Graph Neural Network (GNN) block. In this segment, the temporal information from the seismic waveforms is paired with the spatial relationships embedded in the adjacency matrix. A couple of ChebConv layers having 256 channels and Dropout layers are applied to the waveform data, followed by a GCSConv layer with 256 channels. This approach enables the model to encode the complex spatiotemporal relationships present in the seismic data, an advantage over conventional Convolutional Neural Networks that might miss such intricate correlations. We express the operation of the GNN block mathematically by
\[G_{i}=f_{GNN}(F_{i},A);\quad\forall i\in\{1,2,\ldots,N\}, \tag{4}\]
where \(G_{i}\) denotes the output of the GNN block corresponding to the input CNN feature vector \(F_{i}\), the adjacency matrix \(A\), and the graph convolutional and pooling operations, \(f_{GNN}(.)\), performed by the GNN block of SC-GNN.
To gain a deeper understanding of the Graph Neural Network (GNN) block in our proposed SC-GNN model, we need to elaborate on the distinct GNN layers utilized - ChebConv (Chebyshev Convolution) and GCSConv (Graph Skip Convolution).
* ChebConv: The ChebConv layer employs Chebyshev polynomials to approximate the spectral graph convolution filter [45]. This facilitates the execution of convolution operations over the graph structure, thereby encapsulating both local and non-local information.
* GCSConv: The GCSConv layer, an evolved variant of the GCN layer with trainable skip connections, enhances the local structure and feature diffusion [46]. It accomplishes this by amalgamating spectral diffusion, which focuses on the graph's global structure, akin to ChebConv, and spatial diffusion, which emphasizes each node's local neighbourhood structure.
The intuition behind using two ChebConv layers followed by GCSConv is to initially capture global graph features via the ChebConv layers, subsequently refining these features by considering local neighbourhood information using the GCSConv layer. This strategy is designed to be especially beneficial in regression tasks such as intensity prediction, where precision is paramount. It potentially harnesses both the global and local graph structures to generate more accurate predictions. The dual usage of ChebConv and GCSConv layers fosters a more comprehensive representation of the graph as they may capture disparate features.
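One way to realize this block is with the Spektral library, which provides both layer types. The sketch below follows the layer ordering and channel counts described above; the Chebyshev polynomial order K=3 and the dropout rate are assumptions, and the adjacency input is expected to be preprocessed with each layer's `preprocess` method:

```python
import tensorflow as tf
from tensorflow.keras import layers
from spektral.layers import ChebConv, GCSConv

N, F = 39, 256                      # stations and CNN feature length (assumed)
x_in = layers.Input(shape=(N, F))   # node features F_i from the CNN block
a_in = layers.Input(shape=(N, N))   # adjacency, preprocessed per layer type

# Two ChebConv layers capture the global, spectral structure of the network.
g = ChebConv(256, K=3, activation="swish")([x_in, a_in])
g = layers.Dropout(0.2)(g)
g = ChebConv(256, K=3, activation="swish")([g, a_in])
g = layers.Dropout(0.2)(g)
# A GCSConv layer then refines local neighbourhood information through its
# trainable skip connection, producing the node representations G_i.
g = GCSConv(256, activation="swish")([g, a_in])

gnn_block = tf.keras.Model([x_in, a_in], g)
gnn_block.summary()
```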
#### IV-B4 Embedding Layer
The data exits the GNN block and enters the Embedding Layer, where a GlobalAttentionPool layer with five channels is used. This layer aids the model in focusing on the most informative parts of the graph with respect to the prediction task [47]. In parallel with the attention mechanism, the data also passes through a Dense layer of 100 neurons. The model then generates the final embeddings by concatenating the GlobalAttentionPool and Dense layers' outputs, encapsulating each earthquake's distinct seismic characteristics. Mathematically, we can represent this embedding generation as
\[Emb_{i}=f_{emb}(G_{i});\quad\forall i\in\{1,2,\ldots,N\}, \tag{5}\]
where \(Emb_{i}\) denotes the embeddings for the i-th station with length \(D_{E}\) and \(f_{emb}(.)\) indicates the embedding generation operation performed by the embedding layer of SC-GNN.
#### IV-B5 Contrastive Head
In the final stage of the SC-GNN architecture, the Projection Network employs a multi-layer perceptron (MLP) to project the seismic embeddings into a lower dimensional space. Using a series of Dense layers, supplemented with Dropout and Batch Normalization for stability and regularization, the network projects the embeddings into a space conducive to the contrastive learning objective. Similar earthquakes are positioned close together in this space, and dissimilar ones are spread apart. The projection operation can be expressed as
\[z_{i}=P(Emb_{i});\quad\forall i\in\{1,2,\ldots,N\}, \tag{6}\]
where \(z_{i}\) denotes the embedding projection for the i-th station embedding with length \(D_{p}\) and \(P(.)\) represents the projection operation executed by the projection head of SC-GNN. Now, the embedding project vector, \(z\), which is used directly in the contrastive loss function, is given by
\[z=\begin{bmatrix}z_{1}\\ z_{2}\\ \vdots\\ z_{N}\end{bmatrix}. \tag{7}\]
A noteworthy aspect of our model architecture is the transient nature of the contrastive head. This module's primary function is to generate projection embeddings essential for computing the contrastive loss, thereby playing a pivotal role during the hybrid contrastive training phase. However, it is important to underline that this contrastive head is not a permanent feature of the architecture. In fact, it is intentionally transient and is designed to be discarded during subsequent stages of the model's application. The self-supervised contrastive loss function [48], \(\mathcal{L}^{\text{cont}}\), applied at this stage is given by
\[\mathcal{L}^{\text{cont}}=\sum_{m\in\mathcal{M}}\mathcal{L}^{\text{cont}}_{m }=-\sum_{m\in\mathcal{M}}\log\frac{\exp(z_{m}\cdot z_{m^{\prime}}/\tau)}{\sum \limits_{a\in A(m)}\exp(z_{m}\cdot z_{a}/\tau)}. \tag{8}\]
where \(\mathcal{M}=\{1,2,\dots,M\}\) represents a training batch in which the odd samples are original samples and the adjacent even samples are their respective augmented samples, together forming positive pairs. For each sample \(m\in\mathcal{M}\) and its corresponding positive sample \(m^{\prime}\in\mathcal{M}\), \(A(m)=\mathcal{M}\setminus\{m\}\). Here, \(z\) denotes the \(D_{p}\)-dimensional projection of the \(D_{E}\)-dimensional representation, and \(\tau\) is the temperature parameter. The augmentation we perform here is distinctive.
Instead of the typical 1D signal augmentations such as noise injection, scaling, time-shifting, time-scaling, frequency shifting etc., we clip and zero-pad the original signal to generate the augmented signal as shown in Figure 3. We elaborate more on this augmentation technique in the next sub-section (IV-C).
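For concreteness, a minimal TensorFlow sketch of the loss in Eq. (8) is given below, for a batch laid out as adjacent original/augmented pairs. The \(\ell_2\)-normalization of the projections and the temperature value are common conventions we assume here rather than details stated in the text.

```
import tensorflow as tf

def contrastive_loss(z, tau=0.1):
    # z: (M, D_p) projections; rows 2k and 2k+1 hold an original sample
    # and its augmented partner, i.e., a positive pair.
    z = tf.math.l2_normalize(z, axis=1)            # assumed normalization
    sim = tf.matmul(z, z, transpose_b=True) / tau  # z_m . z_a / tau for all pairs
    m = tf.shape(z)[0]
    sim -= 1e9 * tf.eye(m)                         # exclude a = m from the denominator
    idx = tf.range(m)
    pos = idx + 1 - 2 * (idx % 2)                  # partner index: 0<->1, 2<->3, ...
    losses = tf.keras.losses.sparse_categorical_crossentropy(
        pos, sim, from_logits=True)                # -log softmax at the positive index
    return tf.reduce_sum(losses)                   # sum over m, as in Eq. (8)
```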
In parallel with the contrastive loss, we also apply another regression loss to ensure that the generated embeddings are task-specific, i.e., specifically oriented towards accurate intensity prediction. While the contrastive loss is applied to the outputs of the contrastive head, this regression loss [49] is applied to the outputs of the prediction head. This regression loss is given by
\[\begin{split}\mathcal{L}^{\text{reg}}=w_{1}\times(1-r^{2})+w_{2} \times L_{\mathrm{HL}}+w_{3}\times\epsilon^{2}+w_{4}\times|\epsilon|\\ +w_{5}\times L_{\mathrm{HL}}^{\mathrm{g}},\end{split} \tag{9}\]
where \(L_{\mathrm{HL}}\) denotes the well-known Huber loss (HL), \(L_{\mathrm{HL}}^{\mathrm{g}}\) is a modified asymmetric version of the Huber loss, \(r\) represents the Pearson correlation coefficient, \(\epsilon\) is the prediction error, and \(w_{1},w_{2},w_{3},w_{4}\) and \(w_{5}\) are the weights applied to the correlation loss, HL, MSE, MAE and the asymmetric HL, respectively. The values of \(w_{1},w_{2},w_{3},w_{4}\) and \(w_{5}\) are set to \(0.002,1.0,0.0096,0.002\) and \(0.0032\), respectively, determined through extensive trial and error for the task at hand. The \(\mathcal{L}^{\text{reg}}\) function is thus a custom loss that combines elements of the Huber loss and a correlation loss. The HL component is less sensitive to outliers in the data, providing a more stable training process, while the correlation component ensures the predicted seismic intensities are closely aligned with the actual values. Notably, of the well-known regression losses (MSE, MAE, mean absolute percentage error, etc.), only HL performed comparably well on its own in our experiments.
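To make Eq. (9) concrete, here is a minimal TensorFlow sketch. The Huber delta and the exact form of the asymmetric term \(L_{\mathrm{HL}}^{\mathrm{g}}\) are not specified in the text, so the version used here, which penalises under-prediction of intensity by a factor \(k\), is an assumption for illustration only.

```
import tensorflow as tf

def regression_loss(y_true, y_pred,
                    w=(0.002, 1.0, 0.0096, 0.002, 0.0032),
                    delta=1.0, k=2.0):
    eps = y_pred - y_true
    mse = tf.reduce_mean(tf.square(eps))
    mae = tf.reduce_mean(tf.abs(eps))
    huber = tf.keras.losses.Huber(delta=delta)(y_true, y_pred)
    # Pearson correlation coefficient r between predictions and targets.
    yt = y_true - tf.reduce_mean(y_true)
    yp = y_pred - tf.reduce_mean(y_pred)
    r = tf.reduce_sum(yt * yp) / (tf.norm(yt) * tf.norm(yp) + 1e-8)
    # Assumed asymmetric Huber: under-predictions scaled by a factor k.
    abs_e = tf.abs(eps)
    per_elem = tf.where(abs_e <= delta,
                        0.5 * tf.square(eps),
                        delta * (abs_e - 0.5 * delta))
    huber_asym = tf.reduce_mean(tf.where(eps < 0.0, k, 1.0) * per_elem)
    return (w[0] * (1.0 - tf.square(r)) + w[1] * huber
            + w[2] * mse + w[3] * mae + w[4] * huber_asym)
```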
The \(\mathcal{L}^{\text{hyb}}\) function is a hybrid of the \(\mathcal{L}^{\text{cont}}\) and \(\mathcal{L}^{\text{reg}}\) functions, comprehensively evaluating the model's performance during the contrastive training stage. It is a straightforward sum of \(\mathcal{L}^{\text{cont}}\) and \(\mathcal{L}^{\text{reg}}\), i.e., \(\mathcal{L}^{\text{hyb}}=\mathcal{L}^{\text{cont}}+\mathcal{L}^{\text{reg}}\).
Following the conclusion of the hybrid contrastive training phase, the contrastive head is removed as it serves no further purpose. Specifically, the contrastive head is omitted during the regression training phase and subsequent inference tasks. This is due to the different requirements of these subsequent phases, which do not involve calculating contrastive loss and, therefore, do not need the projection embeddings generated by the contrastive head. Therefore, while the contrastive head is crucial for the initial self-supervised learning and embedding generation during the hybrid contrastive training phase, its functionality is deliberately limited to this stage, underscoring the disposable nature of this module within our model's architecture.
#### IV-B6 Output Block (Prediction Head)
After passing through the embedding layer, the processed data reaches the Output Block. In this block, the model generates the final predictions and embeddings. The SC-GNN model generates two forms of output: seismic intensity predictions at various seismic stations and seismic embeddings. The intensity predictions, produced by a sequence of Dense layers and a 'relu' activation function, represent the potential earthquake intensities at different seismic stations. On the other hand, the seismic embeddings are produced through another sequence of Dense layers, capturing the distinct characteristics of each earthquake.
This two-fold output serves multiple purposes. On the one hand, it aids in understanding the specifics of an earthquake's characteristics by inspecting the embeddings. On the other hand, it allows for predicting seismic intensities at various stations, a critical component for effective earthquake early warning systems. Mathematically, the seismic intensity predictions can be expressed as
\[I_{i}=f_{out}(Emb_{i});\quad\forall i\in\{1,2,....,N\}, \tag{10}\]
where \(I_{i}\) denotes the seismic intensity for the i-th station and \(f_{out}(.)\) represents the operation executed by the output block of SC-GNN. In the prediction head, we utilize the \(\mathcal{L}^{\text{reg}}\) loss function to train the generated intensity predictions.
The SC-GNN architecture, with its dedicated functional blocks, successfully extracts, refines, and leverages seismic data to generate precise seismic intensity predictions. The individual blocks, each with its unique functionality, work together seamlessly, creating a robust and efficient model. This architecture provides an advanced solution for early warning generation in the realm of earthquake predictions.
### _Training Process of SC-GNN_
In this subsection, we provide a detailed overview of the training and inference procedures of the SC-GNN model. The SC-GNN model leverages a unique combination of contrastive training and regression to create robust, task-specific embeddings. The sub-section is divided into two segments: model training and inference.
#### IV-C1 Model Training
The training process of the SC-GNN model unfolds in two primary stages. The initial phase employs contrastive training with a distinct data augmentation scheme. Each seismic waveform, denoted as \(w_{i}(t)\), undergoes transformation to produce augmented samples, \(w_{i}^{a}(t)\). The augmentation procedure involves clipping the original waveform to \(t_{c}\) seconds, where \(t_{c}\) is an integer uniformly selected from the set \(\{5,10,15,20,25\}\), followed by zero-padding
Fig. 5: TSNE plot displaying seismic event embeddings: (a) before contrastive training (b) after contrastive training.
to maintain a consistent input length of 30 seconds\({}^{1}\). This process yields five augmented samples per original waveform. Selecting distinct window-length inputs from the same seismic event forms positive pairs, while waveform data from disparate events generate negative pairs.
Footnote 1: 3000 samples at a fixed sampling frequency of 100 Hz
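A minimal NumPy sketch of this clip-and-zero-pad augmentation is given below; the function name and array layout (time samples along the first axis) are our own choices.

```
import numpy as np

def augment(waveform, fs=100, clips=(5, 10, 15, 20, 25)):
    # waveform: array of shape (30 * fs, n_components), e.g., (3000, 3).
    out = []
    for t_c in clips:
        aug = np.zeros_like(waveform)            # zero-padded to the full 30 s
        aug[: t_c * fs] = waveform[: t_c * fs]   # keep only the first t_c seconds
        out.append(aug)
    return out                                   # five augmented samples per waveform
```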
This contrastive learning phase employs a novel hybrid loss function, \(\mathcal{L}^{\text{hyb}}\), combining both contrastive and regression losses. This approach aims to direct the generation of task-specific embeddings for seismic intensity prediction. The model is trained under this hybrid loss function for the first 100 epochs. The seismic embeddings generated during the training phase are exemplified by the TSNE plot depicted in Fig. 5. The plot showcases embeddings for ten distinct seismic events, labelled 1 through 10, alongside their respective augmented samples with varying input window lengths before (Fig. 5 (a)) and after the contrastive training stage (Fig. 5 (b)). Notably, the embeddings of augmented samples belonging to the same seismic event but with different window lengths (5, 10, 15, 20, 25, and 30s) are clustered together after the contrastive training phase, indicating that the model has successfully learned to generate similar embeddings for augmented samples from the same event. Furthermore, distinct seismic events form separate unique groups, illustrating the model's ability to differentiate between individual events. It is important to mention that each distinct earthquake is represented by a unique colour in Fig. 5, while augmented samples belonging to the same event are depicted in the same colour.
The second training phase centres on fine-tuning the model for seismic intensity prediction at various geographical locations using the input earthquake data. This is approached as a regression task, with the majority of the model layers being frozen, preserving the integrity of the embeddings learned during the contrastive training phase. The regression training is guided solely by the regression loss function \(\mathcal{L}^{\mathrm{reg}}\), and it continues for an additional 100 epochs.
The training process employs the Adam optimizer, owing to its efficient handling of large-scale data. An exponential decay learning rate scheduler, as illustrated in Fig. 6, assists in stabilizing the training process over a total of 200 epochs. The batch size is set at 32, balancing adequate learning and manageable computational requirements. The model's best-performing weights during the training process are captured using checkpoints to ensure that the optimal model configuration is preserved. Algorithm 2 concisely presents the SC-GNN training steps.
```
Require: Training data: Waveforms, Inter-station Distances, Shakemap Labels of Intensities
Ensure: Predicted intensities at all stations
1: Convert all PGA values to intensity using the EMS conversion equation (Equation 12)
2: Impute all missing waveforms with zeroes
3: Generate augmented samples by clipping and zero-padding the waveforms
4: Prepare the adjacency matrix using inter-station distances
5: Segment the data into training, validation, and test sets for cross-fold validation
6: Train the SC-GNN model for 100 epochs using \(\mathcal{L}^{\text{hyb}}\)
7: Freeze all layers up to the embedding layer and discard the contrastive head
8: Load the best model weights based on the validation metric, \(\mathcal{M}_{\mathrm{val}}=\mathcal{L}_{\mathrm{val}}^{\mathrm{cont}}+100\times\mathcal{L}_{\mathrm{val}}^{\mathrm{reg}}\)
9: Train the model for another 100 epochs using \(\mathcal{L}^{\text{reg}}\)
10: Load the best model weights based on the validation metric, which is the same as the validation loss \(\mathcal{L}_{\mathrm{val}}^{\mathrm{reg}}\)
11: Pass the test data for inference, i.e., intensity prediction
```
**Algorithm 2** Training Process of the SC-GNN
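To complement Algorithm 2, the following sketch shows the optimizer setup described above. The initial learning rate and decay constants are illustrative guesses: the text reports the scheduler type (Fig. 6), the Adam optimizer, the batch size of 32, and checkpointing, but not these exact values.

```
import tensorflow as tf

# Illustrative constants; only the scheduler type and optimizer are from the text.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.96)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_weights.h5", monitor="val_loss",
    save_best_only=True, save_weights_only=True)
# model.compile(optimizer=optimizer, loss=hybrid_loss)
# model.fit(..., batch_size=32, epochs=200, callbacks=[checkpoint])
```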
#### IV-C2 Model Inference
Upon completing the model training, we progress to the inference stage. Input seismic waveforms of any window length are accepted for inference, provided they are zero-padded to comply with the fixed input size. Additionally, the corresponding adjacency matrix must also be provided. The model outputs seismic intensities at all geographic points contained within the adjacency matrix. This functionality empowers the model to generate real-time, reliable seismic intensity predictions.
Along with the intensity predictions, the SC-GNN also produces the embedding vector, \(Emb\), that represents the entire seismic event or earthquake, and it is given by
\[Emb=\begin{bmatrix}Emb_{1}\\ Emb_{2}\\ \vdots\\ Emb_{N}\end{bmatrix}. \tag{11}\]
Without loss of generality, although the above embedding vector is used here for our intensity prediction task, it can potentially be used to predict various other seismic parameters.
## V Experimental Evaluation
In this section, we assess the performance of our proposed SC-GNN model for real-time seismic intensity prediction using three real-world seismic datasets. Additionally, we compare the effectiveness of our proposed model against
Fig. 6: Learning-rate Scheduler used to train SC-GNN
several state-of-the-art baseline models by examining standard performance metrics.
### _Environment_
The proposed model is implemented using Python, Tensorflow 2.12, and Keras 2.12. The GNN layers are imported from Spektral 1.2.0. Numpy is utilized for calculations, and some figures are generated using MATLAB 2022b. The model is trained on Google Colab Pro+ with 83.5 GB system RAM, 166.8 GB disk space, and an NVIDIA A100-SXM4-40GB GPU to expedite the training process. Following training, the model's size is compact (approximately 2.8 MB), allowing deployment on standard PC hardware and edge computing platforms. Notably, the baseline model simulations are conducted using the same configuration for a fair comparison.
### _Data Description_
We use the following three widely used datasets to demonstrate the prediction performance of our proposed algorithm.
#### V-B1 Central Italy (CI) Dataset
As delineated in [50], this dataset draws upon three-component waveforms from 915 earthquake events captured by an extensive network of 39 stations across Central Italy. It comprises three distinct channels of waveforms: HN, HH, and EH. The EH channels harbour waveforms at a 100 Hz or 125 Hz sampling rate, while the HH and HN channels exclusively accommodate 100 Hz waveforms. However, all the waveforms are resampled to a fixed uniform sampling rate of 100 Hz in the finalized dataset.
To utilize this dataset for early warning analysis, the p-wave must first be identified via Phasenet [17]. This necessitates modifying waveform sampling rates that deviate from 100 Hz, ensuring a uniform rate throughout. Furthermore, Phasenet requires waveforms of equal length. After discerning the longest waveform length to be 10310 samples, shorter waveforms are extended to match this length via zero padding. Based on two criteria, the capture time of p-picks and the phase score, Phasenet makes p-wave pick predictions.
The input dataset is generated by extracting the first 30 seconds of all waveforms, starting from the respective earthquake origin times. For earthquakes with magnitudes (M) below 4, HH and EH channels are employed, while HN channels are utilized for earthquakes with M greater than or equal to 4. The ground truth is generated using the peak ground acceleration (PGA), which is then converted to intensity through the ground motion-to-intensity conversion equation (GMICE) described later. Instances where waveform data for specific earthquakes are absent from some stations are addressed by zero-filling. The ground truth in those cases is imputed by utilizing USGS shakemap version 4.1 [51].
Due to the availability of a higher number of seismic events and waveforms, we choose this CI dataset as our primary dataset. Unless otherwise mentioned, the experimental results presented in the subsequent discussions are generated using this dataset.
#### V-B2 Central Western Italy (CW) Dataset
This dataset consists of 3-component waveforms of 266 earthquakes from central western Italy recorded by 39 stations. It is thoroughly described in [52], and the prepared dataset can be found in [53]. Similar to the CI dataset, intensity labels are generated by converting the PGA values to intensity using the GMICE. It is important to note that all the waveforms available for the CW dataset are of length 10s. Hence, the value of \(t_{c}\) for augmented sample generation during the contrastive training is confined to the set \(\{5,6,7,8,9\}\), and the input length is 1000 samples at a fixed sampling frequency of 100 Hz.
#### V-B3 STEAD (STanford EArthquake Dataset)
The STEAD [54] serves as our third dataset for the purposes of this study. Specifically, we focus on the region of _California_, limited within the geographical coordinates of latitude 32.5\({}^{\circ}\) to 35\({}^{\circ}\) and longitude -115\({}^{\circ}\) to -118\({}^{\circ}\), as this area encompasses the majority of earthquakes documented in the dataset. Within California, we identified 191 earthquakes recorded by a union of 194 seismic stations. Each station's recordings consist of 3-component seismograms, with each component spanning a duration of 1 minute. The instrument responses associated with the stations were obtained from the Incorporated Research Institutions for Seismology (IRIS) [55] and converted into acceleration units. We then extract the initial 30 seconds from each waveform to generate the final input dataset, following the same approach applied to the CI dataset.
To impute the missing PGA values at stations where waveform data are unavailable, USGS shakemap version 4.1 [51] is used to predict PGA, following the instructions in the shakemap manual [56]. Using Shakemap, we generate XML grids of interpolated ground shaking data for each earthquake. Each XML grid consists of thousands of location-specific data fields around the earthquake's epicentre, spaced at intervals of 0.0055\({}^{\circ}\) to 0.0167\({}^{\circ}\) of latitude and longitude. Each field contains the PGA, PGV, MMI and other attributes of a particular location. The process of installation and grid file generation using Shakemap is described in videos given in the online repository. The PGA values of the 194 stations are obtained from the field having the nearest latitude and longitude to each station. Again, we convert the PGA values to intensity using the GMICE.
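For illustration, a minimal sketch of the nearest-field lookup used to assign grid PGA values to stations is shown below; the flat latitude/longitude distance metric and the function name are our own simplifications.

```
import numpy as np

def nearest_field_pga(grid_lat, grid_lon, grid_pga, sta_lat, sta_lon):
    # Pick the PGA of the ShakeMap grid field nearest to a station,
    # using a simple flat latitude/longitude distance.
    d2 = (grid_lat - sta_lat) ** 2 + (grid_lon - sta_lon) ** 2
    return grid_pga[np.argmin(d2)]
```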
The key attributes of the datasets are shown in Table I.
### _Ground Motion to Intensity Conversion_
The implications of an earthquake at a specific site demand the determination of intensity value at that locale. The
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Properties** & **CI** & **CW** & **STEAD** _(California)_ \\ \hline
Earthquakes & 915 & 266 & 191 \\ \hline
Stations & 39 & 39 & 194 \\ \hline
Available Waveforms & & & \\ \hline
Magnitudes & \(2.9\leq M\leq 6.5\) & \(2.9\leq M\leq 5.1\) & \(1.7\leq M\leq 5.42\) \\ \hline
Periods & 1-1-16 to 29-11-16 & 1-11-17 & 19-2-10 to 9-4-18 \\ \hline
\end{tabular}
\end{table} TABLE I: Comparison of the Datasets
previously employed Ground Motion Prediction Equations (GMPEs) yield Peak Ground Acceleration (PGA) values, necessitating a conversion into intensity. For this transformation, we utilize the Ground Motion to Intensity Conversion Equation (GMICE) proposed by Zanini _et al._[57].
This equation generates intensity values following the European Macroseismic Scale (EMS-98), originating from regression analysis of Italian seismic data collected from the Parametric Catalogue of Italian Earthquakes and ITACA. The equation is as follows:
\[I_{\text{EMS-98}}=2.03+2.28\log(PGA(cm/s^{2})). \tag{12}\]
It is important to observe that Equation 12 was specifically designed to operate within the range \(2\leq I_{\text{EMS-98}}\leq 9.5\), so any values resulting from this equation are confined within this predefined range. In particular, if a conversion yields an \(I_{\text{EMS-98}}\) value less than 2, the value is elevated to the minimum threshold of 2, in adherence to the stipulated constraints of the GMICE.
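A one-line implementation of this conversion might look as follows; the base-10 logarithm is assumed, as is conventional for GMICEs, and clipping at the upper bound of 9.5 is our reading of the stated validity range.

```
import numpy as np

def pga_to_intensity(pga_cm_s2):
    # GMICE of Equation (12), clipped to its validity range 2 <= I <= 9.5.
    intensity = 2.03 + 2.28 * np.log10(pga_cm_s2)
    return np.clip(intensity, 2.0, 9.5)
```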
### _Baseline Models_
To demonstrate the effectiveness of our proposed approach, we have compared the performance of our proposed SC-GNN with some state-of-the-art models for seismic intensity prediction. These models, along with their adopted parameter settings, are described below:
#### V-D1 GMPE [58, 59]
The first baseline model is regression-based ground motion prediction equations (GMPE), considering the latest release of the strong motion database. These predict peak ground acceleration (PGA), peak ground velocity (PGV), and 5%-damped spectral acceleration over a magnitude range of 4-6.9 and distances up to 200 km. The total standard deviation confirms the large variability of ground shaking parameters for regional datasets containing small to moderate-magnitude events. This model is an update of the ITA08 GMPE [60], considering improved data, reprocessing, and an extended distance range.
For comparison purposes with our proposed SC-GNN, in the case of the CI and CW datasets, the prediction of PGA values is achieved via the GMPE formulated by Bindi _et al._[58]. This equation, deriving from the Italian strong motion database, "Italian Accelerometric Archive" (ITACA), necessitates magnitude, station distance, and soil site class as inputs. Ideally, the GMPE utilizes the Joyner-Boore distance (RJB); however, the lack of fault geometric details in this instance compels us to substitute it with the epicentral distance, a deviation acceptable within the purview of this equation. Identification of local site classes at the respective station locations is carried out using the standalone software Soil Class-Italy (SSC-Italy). This software applies the Eurocode 8 soil classification developed by Forte _et al._[61].
For STEAD, PGA values at the station locations for each earthquake are predicted using GMPE given by Boore _et al._[59]. This GMPE is developed using NGA-West2 ground motion database provided by Pacific Earthquake Engineering Research Center (PEER). This equation requires magnitude, distance of the site and local site effects in terms of shear wave velocity (Vs30). Similar to the CI and CW datasets, we have used epicentral distance instead of Joyner-Boore distance (RJB). For identifying the Vs30 value at the location of the stations, we have used the Global Vs30 grid file, which is made available by the United States Geological Survey (USGS) based on the works of Heath _et al._[62].
#### V-D2 CNN Based Model [23]
The second baseline model is a deep Convolutional Neural Network (CNN) designed to predict earthquake ground shaking intensity measurements using multistation 3C acceleration waveforms. The input data consists of normalized waveform data from various seismic stations, and the model does not require prior knowledge of the earthquake source. The CNN architecture is adapted from Kriegerowski _et al._[63] and consists of three convolutional layers followed by fully connected layers. The first two convolutional layers learn temporal patterns station-by-station, while the third layer gathers cross-station information. The model has been tested on raw data without data pre-selection and has shown stability and accurate prediction of ground shaking intensity. The technique is not designed for earthquake early warning but provides useful estimates of ground motions within 15-20 seconds after the earthquake origin time.
#### V-D3 GCN Based Model [24]
Recently, a Graph Convolutional Network (GCN) based approach, named TISER-GCN, has been proposed for multi-variate time-series regression that achieves state-of-the-art performance. It predicts ground-shaking intensity at seismic stations using a regression approach. The model utilizes two 1D CNN layers with wide kernel sizes, small strides, increasing filters, and ReLU activation functions to learn the temporal patterns of each station. After feature extraction, the model combines node features (latitude, longitude) with partially flattened feature vectors to create input for the Graph Convolutional Network (GCN) layers. The GCN layers reduce the dimensions of the input and handle cross-station information. In contrast to standard graph pooling techniques, this model uses a flattened output from the final GCN layer to preserve meaningful features. The architecture concludes with a dense layer and five dense layers with linear activation functions that predict target variables such as PGA, PGV, etc. Here, we only utilize the PGA output to calculate the seismic intensity using the GMICE.
### _Performance Metrics_
We have used a range of performance metrics to evaluate and compare the effectiveness of the proposed model with the baseline models.
**Mean Squared Error (MSE):** MSE measures the average squared difference between the predicted and actual values. It is widely used in regression problems to quantify the error in predictions. A lower MSE indicates better model performance, with zero being the ideal value.
**Standard Deviation (SD):** The standard deviation (SD) of the error represents the residuals' dispersion around the mean. It helps in understanding the variability of the model's predictions. A lower standard deviation indicates that the model's predictions are more consistent and reliable.
**Correlation Coefficient:** The correlation coefficient (CC) measures the strength and direction of the relationship between the predicted and actual values. A value close to 1 (\(100\%\)) indicates a strong positive correlation, while a value close to 0 represents no correlation at all. A high correlation coefficient signifies that the model's predictions align with the actual values.
Additionally, conditional scatter plots and Bland-Altman plots are used to assess the models' performance, biases, and generalization capabilities. By utilizing these performance metrics, we thoroughly assess the effectiveness of the proposed model and compare it with the baseline models, considering various aspects such as prediction accuracy, consistency, and reliability.
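For reference, the three scalar metrics can be computed as in the following straightforward sketch; the residual convention \(\epsilon=\hat{y}-y\) is assumed.

```
import numpy as np

def evaluate(y_true, y_pred):
    residuals = y_pred - y_true
    mse = np.mean(residuals ** 2)            # mean squared error
    sd = np.std(residuals)                   # standard deviation of the error
    cc = np.corrcoef(y_true, y_pred)[0, 1]   # Pearson correlation coefficient
    return mse, sd, cc
```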
### _Comparison with Baseline Models_
#### V-F1 Performance Comparison with Baselines
In this section, we compare the performance of the proposed SC-GNN model with the baseline models: TISER-GCN, CNN, and GMPE, on our primary dataset with a 10s input window. The results are presented in Table II.
The proposed model, SC-GNN, outperforms the baseline models across all metrics in the CI dataset. Our SC-GNN model achieves the lowest Mean Squared Error (MSE) of 0.4172, which reflects around _234%_ improvement over the state-of-the-art best-performing TISER-GCN model, indicating more accurate predictions with lower error rates. The standard deviation of the error for the SC-GNN model is the lowest at 0.6111, suggesting more consistent and reliable predictions than the other models. Furthermore, the SC-GNN model has the highest correlation coefficient of 83.94%, signifying a strong positive relationship between the predicted and actual values.
The significant improvement in performance metrics for our SC-GNN model can be attributed to its proficient capacity to capture intricate spatial and temporal patterns inherent in earthquake data by utilising sophisticated GNN layers. Specifically, the ChebConv and GCSConv layers integrated within the SC-GNN effectively capture both local and non-local information encoded within the seismic graph, allowing the model to better understand the underlying structural dynamics of the data. In addition, the seismic embeddings generated during the self-supervised contrastive training phase acquire key traits ingrained in the extended seismic waveforms. This leads to more accurate and reliable predictions. In contrast, the baseline models may struggle to capture these relationships due to their respective limitations in handling the data's spatial and temporal aspects and lack of any contrastive learning phase.
#### V-F2 The Effect of Varying Time Window
In this section, we compare the performance of the proposed SC-GNN model with the baseline TISER-GCN and CNN models on the CI dataset when varying the input time windows from 5 seconds to 10 seconds. The results are presented in Table III.
As the input time window is reduced, the performance of all models degrades, indicated by an increase in the mean squared error (MSE). However, the proposed SC-GNN model consistently outperforms the baseline TISER-GCN and CNN models across all input time windows. The SC-GNN model maintains a significantly lower MSE compared to the baselines, demonstrating its robustness and effectiveness in handling varying input sizes. Furthermore, the deterioration in the performance of the baseline models occurs much faster compared to our proposed SC-GNN when the input time window is shortened. Notably, even when using a 5s window input, the SC-GNN model demonstrates a remarkable 143% improvement in performance compared to the next best-performing model, TISER-GCN, with a 10s input window.
#### V-F3 Conditional Plots
We have generated conditional scatter plots to better understand the performance of our SC-GNN model and the baseline models (TISER-GCN and CNN) concerning earthquakes' magnitude and depth. Conditional scatter plots based on the depth and magnitude of earthquakes help visualize the model's performance concerning earthquake depth and magnitude. By analyzing the plot, we can understand how well the model performs for different ranges of depth and magnitude, identify any potential biases, and assess the model's generalization capabilities. Fig. 7 (a-c) presents the magnitude-based conditional scatter plots for the proposed SC-GNN and baseline models. For the proposed SC-GNN model (Fig. 7a), the regression lines for all magnitude ranges (3-3.5, 3.5-4.5, and greater than 4.5) almost overlap with each other, indicating that the predictions are unbiased with respect to the earthquakes' magnitudes. The regression lines are also quite close to the ideal regression diagonal line, suggesting good prediction accuracy across all magnitude ranges.
In contrast, the baseline TISER-GCN model (Fig. 7b) and CNN model (Figure 7c) exhibit regression lines that do not overlap for all magnitude ranges, revealing biases in the predictions. Both models show much higher error for earthquakes with magnitudes greater than 4.5, with the regression lines being far from the ideal diagonal line. This indicates that the baseline models struggle to predict ground motion intensities for larger earthquakes accurately.
Fig. 7(d-f) illustrates the depth-based conditional scatter plots for the proposed SC-GNN and baseline models. For
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Metric/Model & **SC-GNN** & TISER-GCN & CNN & GMPE \\ \hline MSE & 0.4172 & 0.9645 & 1.4027 & 1.3507 \\ \hline SD & 0.6110 & 0.9005 & 0.9701 & 1.0979 \\ \hline CC & 83.94\% & 61.34\% & 48.42\% & 43.11\% \\ \hline \end{tabular}
\end{table} TABLE II: Comparison of the proposed SC-GNN model with baseline models on the CI dataset for \(2<I_{\text{EMS-98}}<9.5\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Time-Window & **Proposed SC-GNN** & TISER-GCN & CNN \\ \hline
10s & 0.1137 & 0.2467 & 0.3593 \\ \hline
9s & 0.1234 & 0.2619 & 0.3771 \\ \hline
8s & 0.1406 & 0.2796 & 0.3988 \\ \hline
7s & 0.1523 & 0.2968 & 0.4215 \\ \hline
6s & 0.1608 & 0.3217 & 0.4503 \\ \hline
5s & 0.1723 & 0.3604 & 0.4982 \\ \hline \end{tabular}
\end{table} TABLE III: MSE Comparison of the proposed SC-GNN model with baseline models on the CI dataset with varying input time windows.
the proposed SC-GNN model (Fig. 7(d)), the regression lines for all depth ranges (1-8 km, 8-10 km, and greater than 10 km) almost overlap with each other, signifying that the predictions are unbiased with respect to the earthquakes' depths. The regression lines are also quite close to the ideal regression diagonal line, demonstrating accurate predictions across various depths.
However, the baseline TISER-GCN model (Fig. 7(e)) and CNN model (Fig. 7(f)) exhibit regression lines that do not overlap for all depth ranges, highlighting biases in the predictions. Both models show much higher error for earthquakes with lower depths, with the regression lines being far from the ideal diagonal line. This implies that the baseline models have difficulty accurately predicting ground motion intensities for shallow earthquakes.
In summary, the proposed SC-GNN model outperforms the baseline models in terms of unbiased predictions and accuracy across different magnitude and depth ranges. These results further demonstrate the superiority of the SC-GNN model for ground motion intensity prediction.
#### V-F4 Bland-Altman Plots
The Bland-Altman plots provide a useful visualization to assess the agreement between two different measurement techniques. In this case, we are comparing the predictions from our proposed SC-GNN model and the baseline models (TISER-GCN and CNN) against
Fig. 7: Conditional Scatter Plots: Magnitude-based (a-c) and Depth-based (d-f).
the true observed earthquake intensities from the seismic waveforms. The main components of a Bland-Altman plot are the mean difference (bias) and the limits of agreement (LoA), which provide an estimate of the range within which 95% of the differences between the two measurements lie. The plots display the difference between the two methods against their average, allowing the identification of systematic biases, outliers, and trends in the differences. It aids in evaluating the model's consistency with respect to the actual observed intensity.
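The quantities plotted in Fig. 8 can be computed as in the sketch below; the sign convention of the difference and the 1.96-standard-deviation limits of agreement are the usual Bland-Altman conventions, assumed here rather than stated in the text.

```
import numpy as np

def bland_altman_stats(observed, predicted):
    diff = observed - predicted               # assumed sign convention
    mean_pair = (observed + predicted) / 2.0  # x-axis of the Bland-Altman plot
    bias = diff.mean()                        # mean difference
    half = 1.96 * diff.std()                  # 95% limits of agreement
    return mean_pair, diff, bias, (bias - half, bias + half)
```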
For the proposed SC-GNN model (Fig. 8(a)), the Bland-Altman plot shows a mean difference of 0.40 and limits of agreement of [-0.79, 1.60]. This indicates that the SC-GNN model predictions are, on average, in good agreement with the true observed intensities. The narrow range of the limits of agreement suggests that the model's performance is consistent across the range of earthquake intensities.
In contrast, the baseline TISER-GCN model (Fig. 8(b)) presents a mean difference of 0.67 and limits of agreement of [-1.09, 2.43]. The increased mean difference compared to the SC-GNN model suggests that the TISER-GCN model predictions are less accurate. Additionally, the wider limits of agreement indicate a higher level of variability in the model's performance.
For the baseline CNN model (Fig. 8(c)), the mean difference is 0.82, and the limits of agreement are [-1.08, 2.72]. This result shows that the CNN model has the highest bias among the three models, with its predictions deviating significantly from the true observed intensities. The limits of agreement are also wider than those for the SC-GNN and TISER-GCN models, suggesting a much greater level of variability in the performance of the CNN model.
In summary, the Bland-Altman plots demonstrate the superior performance of the proposed SC-GNN model in predicting earthquake intensities compared to the baseline TISER-GCN and CNN models. The SC-GNN model exhibits the smallest mean difference and the narrowest limits of agreement, indicating higher accuracy and consistency in its predictions across the range of earthquake intensities. This analysis further supports the effectiveness of the proposed SC-GNN model for predicting earthquake intensities.
#### V-F5 Performance Comparison on Secondary Datasets
In this discourse, we assess the performance of our proposed SC-GNN model on secondary datasets (CW and STEAD), comparing it to baseline models: TISER-GCN, CNN, and GMPE. Performance is gauged using MSE, SD, and CC.
In the evaluation on the CW dataset (Table IV), the SC-GNN model demonstrates superior performance compared to the TISER-GCN and CNN models, as evidenced by lower MSE, SD, and higher CC. It also slightly outperforms the GMPE. It is worth noting that the CW dataset is smaller in size than CI in terms of the number of events and available waveform data, which leads to a decrease in the performance of all DL-based models.
In the case of the STEAD dataset (Table V), the SC-GNN model exhibits significant improvements over all the baseline models except for the GMPE. This is reflected in lower MSE, lower SD, and higher CC values. However, it is important to mention that the STEAD dataset comprises only 4299 waveform data points, accounting for a mere 11.60% of the possible 37054 waveforms (\(191\times 194\)). Consequently,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Metric/Model & **Proposed SC-GNN** & TISER-GCN & CNN & GMPE \\ \hline MSE & 0.8959 & 1.2196 & 1.5277 & 0.3512 \\ \hline SD & 0.8295 & 0.9379 & 0.8682 & 0.5881 \\ \hline CC & 40.68\% & 14.07\% & 13.66\% & 77.25\% \\ \hline \end{tabular}
\end{table} TABLE V: Comparison of the proposed SC-GNN model with baseline models on the STEAD dataset for \(2<I_{\text{EMS-98}}<9.5\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Metric/Model & **Proposed SC-GNN** & TISER-GCN & CNN & GMPE \\ \hline MSE & 0.5326 & 1.1664 & 1.7914 & 0.5765 \\ \hline SD & 0.6858 & 1.0163 & 0.9305 & 0.7539 \\ \hline CC & 70.07\% & 29.28\% & 16.94\% & 65.91\% \\ \hline \end{tabular}
\end{table} TABLE IV: Comparison of the proposed SC-GNN model with baseline models on the CW dataset for \(2<I_{\text{EMS-98}}<9.5\).
Fig. 8: Bland-Altman plots displaying the mean difference and LOA of observed and predicted values.
the limited data availability poses challenges for these data-driven DL models, including the proposed SC-GNN, whereas the GMPE remains relatively unaffected. Nonetheless, the SC-GNN model showcases comparatively better resilience and predictive capabilities on this challenging STEAD dataset, further affirming its effectiveness in earthquake ground shaking intensity prediction.
#### V-F6 Comparison of Model Parameters
Now, we carry out a detailed analysis of the number of parameters utilized by our proposed SC-GNN and compare this to the parameter count of two other baseline models: the TISER-GCN and the CNN. A concise summary of these comparisons is encapsulated in Table VI.
As evident from the tabulated results, our proposed SC-GNN model utilizes fewer parameters, approximately 0.705 million, and consequently, its overall model size is 2.8 MB. In contrast, both the TISER-GCN and CNN models require almost twice as many parameters, around 1.26 and 1.35 million, respectively, and larger model sizes of 4.8 and 5.1 MB.
The significance of these findings becomes even more pronounced in the context of early warning systems for seismic activities. A lower number of parameters directly implies a more efficient model in terms of computational resources. This efficiency translates to faster computations, which is a critical factor in timely predicting seismic activities and issuing early warnings.
Furthermore, the reduced model size of the SC-GNN makes it a more suitable choice for implementation on resource-constrained devices. This characteristic is critical, as seismic early warning systems often operate on field-deployed devices with limited computational power and storage capacity.
These factors underscore the suitability of SC-GNN over the TISER-GCN and CNN models in the context of seismic activity prediction. They also validate the design choices in developing the SC-GNN, emphasizing its efficient utilization of parameters without sacrificing prediction performance, as evidenced in the earlier discussions of model accuracy and performance on varying input window lengths.
### _Ablation Study_
#### V-G1 Model Variations
In this section, we present the results of an ablation study conducted to evaluate the impact of different combinations of layers in our proposed SC-GNN model. The final proposed model consists of a combination of two Chebyshev Conv (ChC), one Graph-skip Conv (GCSC), and one Graph-Attention Pool (GAP) layer. We experimented with removing some layers and adding extra layers, such as the graph convolutional layer (GCN), Diffusion Conv (DC) and graph attention layer (GAT), to demonstrate that the final proposed model performs better than other layer combinations. The results are shown in Table VII.
The results from the ablation study clearly demonstrate that the final proposed model, which combines ChC, GCSC, and GAP layers, achieves the lowest MSE (0.1137) and the lowest normalized MSE (4.83%). This indicates that the combination of these layers is the most effective in predicting ground motion intensities. Here, the normalized MSE is obtained by dividing the raw MSE values by the observed data mean. This computational step guarantees that the normalized MSE appropriately reflects the relative error, taking into consideration the data set scale.
The superior performance of the final proposed model can be attributed to the combined strengths of the ChC, GCSC, and GAP layers. The ChC layer effectively captures local spatial information in the graph, while the GCSC layer helps learn long-range dependencies and skip uninformative features. The GAP layer, on the other hand, focuses on aggregating the most relevant information from the graph by attending to the most important nodes.
By comparing the final proposed model with other layer combinations, we can infer that removing any of these layers leads to decreased performance, as evidenced by higher MSE and normalized MSE values. This confirms that the synergy between ChC, GCSC, and GAP layers is crucial for achieving the best performance in our GNN model.
#### V-G2 Time Window Variation
We evaluated the performance of the proposed SC-GNN model by varying the input window length from 5s to 30s. The results of this analysis are presented in Table VIII.
As shown in the table, the performance of the SC-GNN model deteriorates as the input window length is reduced, with the mean squared error (MSE) increasing and the correlation coefficient decreasing. The reason for this deterioration is that shorter input windows capture less information about the seismic waveforms, making it more challenging for the model to accurately predict ground motion intensities.
Earthquake early warning systems (EEWS) require shorter window lengths for faster response times, which is crucial
\begin{table}
\begin{tabular}{|c|c|c|} \hline
Model & Parameters (Millions) & Model Size (MB) \\ \hline
**Proposed SC-GNN** & 0.705 & 2.8 \\ \hline
TISER-GCN & 1.26 & 4.8 \\ \hline
CNN & 1.35 & 5.1 \\ \hline
\end{tabular}
\end{table} TABLE VI: Comparison of model parameters and model sizes.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Time-Window & MSE & SD & Correlation Coefficient \\ \hline
30s & 0.0844 & 0.2893 & 92.86\% \\ \hline
25s & 0.0880 & 0.2985 & 92.46\% \\ \hline
20s & 0.0969 & 0.3107 & 91.89\% \\ \hline
15s & 0.1030 & 0.3181 & 91.38\% \\ \hline
10s & 0.1137 & 0.3369 & 90.09\% \\ \hline
5s & 0.1723 & 0.4071 & 83.29\% \\ \hline \end{tabular}
\end{table} TABLE VIII: Performance analysis of the proposed SC-GNN model with varying input window length
\begin{table}
\begin{tabular}{|c|c|c|} \hline
GNN Layers & MSE & Normalized MSE \\ \hline
**ChC + ChC + GCSC + GAP** & 0.1137 & 4.83\% \\ \hline
ChC + ChC + GCSC & 0.1848 & 7.86\% \\ \hline
ChC + ChC + GAT & 0.5431 & 23.11\% \\ \hline
ChC + ChC + GCN & 0.4337 & 18.45\% \\ \hline
ChC + ChC + DC & 0.9372 & 39.88\% \\ \hline
ChC + GCSC + GAP & 0.3697 & 15.73\% \\ \hline
\end{tabular}
\end{table} TABLE VII: Ablation study results for the proposed SC-GNN model on the CI dataset.
for timely alerts and potentially saving lives and property. However, as demonstrated in our analysis, there is a trade-off between window length and prediction accuracy. The shorter the window length, the less accurate the model becomes.
To achieve an optimal balance between response time and prediction accuracy for EEWS applications, it is essential to carefully consider the choice of input window length. Future research could focus on further optimizing the GNN model or exploring other approaches that maintain high accuracy while using shorter input windows.
### _SC-GNN for Early Warning_
In this sub-section, we showcase the significant promise and effectiveness of our proposed model, SC-GNN, as an integral part of earthquake early warning (EEW) systems. The ability of the SC-GNN to deliver rapid and accurate seismic intensity predictions, even in the critical window of just 5s, underscores its potential as a pioneering tool for early seismic warnings. The following discussion presents some key results using the main CI dataset.
A vital testament to the utility of SC-GNN in EEW is reflected in the distribution of the P-wave arrival times depicted in Fig. 9(a). The histogram reveals that a majority of the stations, approximately 90%, receive the prediction before the P-wave arrives\({}^{2}\), demonstrating the potential of SC-GNN to provide timely warnings. It is important to note that the actual warning time available to take preparatory measures will also depend on the time required to disseminate warnings, which can vary based on infrastructure, technology, and geolocation. Research shows that, with ideal infrastructure, this transmission time can be relatively negligible compared to the early detection advantage provided by SC-GNN [64, 65].
Footnote 2: Here, we assume 5s window input to the SC-GNN.
The histogram of the maximum ground shaking times, as shown in Fig. 9(b), further underscores the benefit of SC-GNN. As observed, more than 95% of the locations potentially receive the warning well ahead of the maximum ground shaking, often the most destructive phase of an earthquake. This suggests that taking into account practical transmission times, there might still be a valuable window for the populace and infrastructure to prepare, potentially mitigating the seismic event's impacts.
The cumulative density function (CDF) of the warning times (Fig. 9(c)) provides an illustrative perspective on the capabilities of SC-GNN in EEW. The plot suggests that timely warnings could be disseminated to a significant number of areas, strengthening the argument for integrating SC-GNN into EEW systems. We observe that around 70% of the locations potentially receive a warning time of more than 10s, which, after accounting for transmission times, might be sufficient for various precautionary measures like taking cover, shutting off utilities, evacuation etc [66].
Furthermore, the relationship between the warning time and the epicentral distance (Fig. 9(d)) affirms the effectiveness of SC-GNN. The warning time proportionally increases with the distance from the epicentre; for approximately every 4 km, the warning time is incremented by 1s. This suggests that areas farther from the epicentre, which traditionally had to wait longer for the warning will now have more time to brace for the incoming seismic waves.
In summary, while acknowledging the practical considerations of warning transmission times, our proposed SC-GNN framework exhibits promising potential for integration into EEW systems. By leveraging the initial seismic waveforms, SC-GNN aims to extract critical earthquake information and generate accurate seismic intensity predictions, contributing to early warning efforts. It is anticipated that this could significantly increase the warning lead times in a majority of areas, providing a valuable cushion of time for implementing appropriate disaster mitigation measures.
## VI Conclusion
In this paper, we have proposed a novel deep learning framework, SC-GNN, that comprises two key deep learning components, i.e., a graph neural network (GNN) for capturing spatiotemporal characteristics of seismic waves in a geographical area and a contrastive learning module to find the representation of seismic waves from a small portion of initial seismic waveforms. More specifically, the GNN part has a unique ability to propagate information through the nodes of a graph-like structure of seismic station distribution. Wave propagation enables globally informed predictions with locally
Fig. 9: SC-GNN for early warning: a) Histogram displaying p-wave arrival times. b) Histogram displaying the instant at which maximum ground shaking occurs. c) Cumulative density function of warning times. d) Warning time versus epicentral distance.
available data. On the other hand, the self-supervised contrastive learning phase enabled us to learn the representation of seismic waveforms in such a way that facilitates predicting seismic intensity from a significantly shorter input waveform, which is a key factor in an earthquake early warning (EEW) system.
We have shown in experiments that the proposed SC-GNN is adaptive to varying input window lengths, with a commendable performance even at a reduced window length of 5s. This trait is particularly valuable for EEW systems, where every second of early warning can mean the difference between life and death. Our SC-GNN model outperformed all state-of-the-art methods on three well-known seismic datasets across multiple assessment measures. Finally, when potentially integrated into an earthquake early warning (EEW) system, SC-GNN delivers rapid and accurate seismic intensity predictions, with approximately 90% of the stations receiving warnings even before the P-wave arrival.
Incorporating meta-learning and transfer-learning schemes, and exploring their potential applications within the broader field of seismic event prediction and disaster management, are promising directions for future research.
**Data and Code Availability:** Data and code will be made available upon request.
## Acknowledgment
This project is funded by RISE Internal Research Project ID 2021-01-027, Title: "Earthquake Early Warning System in Bangladesh", from Bangladesh University of Engineering and Technology (BUET), Dhaka.
|
2308.04489 | Gapped Interfaces in Fracton Models and Foliated Fields | This work investigates the gapped interfaces of 3+1d fracton phases of matter
using foliated gauge theories and lattice models. We analyze the gapped
boundaries and gapped interfaces in X cube model, and the gapped interfaces
between the X-cube model and the toric code. The gapped interfaces are either
"undecorated" or "decorated", where the "decorated" interfaces have additional
Chern-Simons like actions for foliated gauge fields. We discover many new
gapped boundaries and interfaces, such as (1) a gapped boundary for X-cube
model where the electric lineons orthogonal to the interface become the
magnetic lineons, the latter are the composite of magnetic planons; (2) a
Kramers-Wannier-duality type gapped interface between the X-cube model and the
toric code model from gauging planar subsystem one-form symmetry; and (3) an
electromagnetic duality interface in the X-cube model that exchanges the
electric and magnetic lineons. | Po-Shen Hsin, Zhu-Xi Luo, Ananth Malladi | 2023-08-08T18:00:06Z | http://arxiv.org/abs/2308.04489v1 | # Gapped Interfaces in Fracton Models and Foliated Fields
###### Abstract
This work investigates the gapped interfaces of 3+1d fracton phases of matter using foliated gauge theories and lattice models. We analyze the gapped boundaries and gapped interfaces in X cube model, and the gapped interfaces between the X-cube model and the toric code. The gapped interfaces are either "undecorated" or "decorated", where the "decorated" interfaces have additional Chern-Simons like actions for foliated gauge fields. We discover many new gapped boundaries and interfaces, such as (1) a gapped boundary for X-cube model where the electric lineons orthogonal to the interface become the magnetic lineons, the latter are the composite of magnetic planons; (2) a Kramers-Wannier-duality type gapped interface between the X-cube model and the toric code model from gauging planar subsystem one-form symmetry; and (3) an electromagnetic duality interface in the X-cube model that exchanges the electric and magnetic lineons.
November 5, 2021
###### Contents
* 1 Introduction
* 1.1 Summary of results
* 2 Review of Foliated Field Theory and Fracton Models
* 2.1 Foliation one-forms
* 2.2 Foliated gauge fields
* 2.3 Restricted mobility from gauge invariance
* 2.4 Example: foliated gauge theory for X-cube model
* 2.4.1 Observables
* 2.4.2 Braiding
* 2.4.3 Lattice model
* 3 Gapped Boundaries of X-Cube Model
* 3.1 Undecorated gapped boundaries
* 3.1.1 Lattice model for undecorated gapped boundary
* 3.2 Decorated gapped boundaries
* 3.2.1 Review of decorated toric code boundary
* 3.2.2 Decorated boundaries in X-cube model
* 4 Gapped Interfaces between X-Cube and Toric Code models
* 4.1 Undecorated gapped interfaces
* 4.2 Decorated gapped interfaces
* 4.3 Lattice models
* 4.3.1 Example: undecorated interface
* 4.3.2 Example: decorated interface
* 5 Gapped interfaces in X-Cube Model
* 5.1 Undecorated interfaces
* 5.2 Example of decorated interface: electromagnetic duality interface
* 5.2.1 Fusion rule
* 6 Outlook
* A Presentations of X-Cube Model on Lattice
## 1 Introduction
Gapped phases of matter, _i.e._ phases with an energy gap separating the ground states from the excited states for large systems, can be robust to small perturbations and play an important role in understanding the phases of matter. Examples of interesting gapped phases include topological insulators and topological superconductors [1], and topological ordered states [2], which have important applications to quantum computation [3, 4, 5, 6, 7, 8] and the mathematical theories of higher fusion categories [9, 10, 11, 12, 13].
Experimental realizations of quantum systems have boundaries, and thus it is important to understand the boundaries and interfaces for the gapped phases. Interfaces between different gapped phases are also important in the study of quantum phase transitions between different gapped phases: such interfaces arise from varying the parameters that control the phase transition over the space, as studied in [14, 15, 16]. Gapped interfaces in general quantum systems generate global symmetries and constrain the dynamics, such as whether the systems are short-range entangled (_e.g._[17, 18, 19, 20]), and deconfinement versus confinement phases in quantum chromodynamics (_e.g._[21, 22, 23, 24]). Examples of gapped interfaces between conventional topological ordered states are discussed in _e.g._[25, 26, 27, 28, 29, 30, 31] in (3+1)d, in [32, 33] in higher spacetime dimensions, and there are extensive studies of gapped interfaces in topological orders in (2+1)d using the formalisms of Lagrangian algebras, anyon condensations, tunneling matrices and Frobenius algebra, etc. [32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]. To date, universal well-accepted methods for studying the gapped interfaces in general topological orders in (3+1)d and higher spacetime dimensions are still lacking.
In this work, we investigate the gapped interfaces between conventional topological orders and fracton topological orders in (3+1)d (_e.g._[51, 52, 53]), the latter have excitations of restricted mobility, such as the X-cube model [54, 55]. We focus on foliated fracton orders, where the ground states can be decoupled using local unitary transformations into that of topological orders in one dimension lower [56, 57]. There are experimental proposals for realizing fracton topological orders [58, 59, 60, 61, 62], and they have applications such as quantum memory [63, 64]. While conventional topological orders are believed to be described by topological quantum field theories [65], there is no well-accepted universal framework to describe the low energy physics for general fracton topological order. We will focus on a large class of fracton topological orders that admit a foliation structure [56, 66, 67], and they can be described by foliated field theory [68, 69, 70, 71]. Examples of gapped boundaries in foliated fracton models, which are special cases of interfaces, have also been studied in a case-by-case manner on the lattice [72] and using different field theoretic approaches [73, 74].
We provide a systematic method to study gapped interfaces using foliated field theories. The gapped interfaces are gapped boundary conditions of the foliated field theories, and they
correspond to suitable condensations of bulk gapped excitations on the interfaces.\({}^{1}\) From the condensations, we construct the corresponding local commuting projector lattice Hamiltonian models for the gapped interfaces.
Footnote 1: For instance, the boundary condition \(b|=0\) for an Abelian gauge field \(b\), where \(|\) means the restriction to the interface, implies that the bulk excitation corresponds to the operator \(e^{i\int b}\) condenses on the interface: the excitation can move from the bulk to the boundary and disappear.
* If a bulk excitation condenses on the interface, we add the operators that create the excitations to the Hamiltonian terms along the interface. We call such terms the condensation Hamiltonian terms. As they create excitations, they do not commute with some of the bulk Hamiltonian terms.
* After all the condensation Hamiltonian terms are added, we modify the bulk Hamiltonians near the interface such that they commute with the condensation Hamiltonian terms as in the Brillouin-Wigner perturbation theory (see _e.g._[75]). For instance, we can modify the Hamiltonian terms that do not commute with the condensation terms by replacing these Hamiltonian terms with suitable products of nearby stabilizer Hamiltonian terms.\({}^{2}\)
Footnote 2: Examples of lattice Hamiltonian models for condensation of excitations on the entire space are discussed in _e.g._[76, 77, 78, 79]. Here, we are condensing the excitations along an interface instead of on the entire space, _i.e._ the interfaces are condensation descendants [80]. For stabilizer Hamiltonians, we can regard the construction as measuring the check operators given by the condensation Hamiltonian terms along the interface, which gives another local commuting projector Hamiltonian model. The construction of gapped interface can also be viewed as gauging the symmetry generated by the excitation-creation operators along the interface, see _e.g._[18, 81] for examples.
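For stabilizer models, the commutation checks in the two steps above reduce to symplectic inner products of Pauli strings. As a toy illustration (not one of the models studied below; the helper functions are ours), the following Python sketch shows the bookkeeping: a condensation term can be added as-is when its symplectic product with each bulk term vanishes, and otherwise the offending bulk terms must be modified.

```python
import numpy as np

def pauli(n, xs=(), zs=()):
    """An n-qubit Pauli string in binary symplectic form (x-part, z-part)."""
    x, z = np.zeros(n, dtype=int), np.zeros(n, dtype=int)
    x[list(xs)] = 1
    z[list(zs)] = 1
    return x, z

def commute(p, q):
    """Two Pauli strings commute iff their symplectic product is 0 mod 2."""
    return (p[0] @ q[1] + p[1] @ q[0]) % 2 == 0

# Toy example on 4 qubits: a "bulk" stabilizer X0 X1 X2 X3 and candidate
# condensation terms creating excitations along an interface.
bulk = pauli(4, xs=(0, 1, 2, 3))
pair = pauli(4, zs=(0, 1))    # pair-creation operator, even overlap
single = pauli(4, zs=(0,))    # single excitation, odd overlap

assert commute(bulk, pair)        # can be added without modification
assert not commute(bulk, single)  # the bulk term near the interface must be
                                  # replaced, e.g. by a product of stabilizers
```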
We will focus on the (3+1)d X-cube model. We classify the gapped boundaries and gapped interfaces in the X-cube model, as well as the gapped interfaces between the X-cube model and the (3+1)d toric code model [3, 82, 83]. We will divide the gapped interfaces (and gapped boundaries) into two classes: the undecorated interfaces and the decorated interfaces. A similar distinction is used in the classification of gapped boundaries of finite group gauge theory (see _e.g._[84]), where the gapped boundaries are described by (1) an unbroken subgroup of the bulk gauge group, and (2) a topological action of the remaining subgroup gauge field on the boundary. When the topological action is trivial, we call such boundaries (or interfaces) undecorated, and otherwise decorated. For a systematic exploration of gapped interfaces in ordinary finite group gauge theories, see [85].
### Summary of results
We reproduce the gapped boundaries of the X-cube model discussed in [72, 73], and also discover new gapped interfaces. A representative partial list of such new gapped interfaces is as follows (denote the 3d space coordinates by \((x,y,z)\) and the time coordinate by \(t\)):
* New gapped boundaries of the X-cube model terminating at a constant \(z\) coordinate. An undecorated example corresponds to the condensation of the electric \(z\)-lineons as well as the magnetic \(z\)-lineons on the interface. The electric \(z\)-lineons are usually called "lineons" in the literature; they are violations of the vertex terms, _i.e._ the "X" terms in the lattice "X"-cube model. Magnetic \(z\)-lineons are combinations of what are usually called \(xz\)-planons and \(yz\)-planons. A decorated example of a gapped boundary corresponds to the case where a string of magnetic \(xz\)-planons, extended from the bulk to the boundary, is dressed by an electric \(y\)-lineon at its endpoint. Moreover, the combination of the electric \(z\)-lineons and fractons is condensed on the boundary.
* Many new gapped interfaces between the X-cube model and the (3+1)d toric code. In particular, with decorations we found an interesting interface where certain electric/magnetic excitations in the X-cube model are exchanged with the magnetic/electric excitations in the toric code. Again, the electric and magnetic excitations refer to the violations of the vertex and cube/plaquette terms in the usual lattice models. More concretely, for one example interface, the electric \(z\)-lineons in the X-cube model are condensed on the interface, while the magnetic \(z\)-lineons become the electric charge of the toric code on the interface. The magnetic flux loops of the toric code are condensed on the interface, and they are decorated with the \(x\)-lineons and \(y\)-lineons of the X-cube model.
* A new Kramers-Wannier duality type gapped interface between the X-cube model and the toric code model, given by gauging a subsystem one-form symmetry on half space. This uses the property that the X-cube model can be obtained from the toric code model by gauging the planar subsystem one-form symmetry in the three directions (see _e.g._[86]). We show that such a gapped interface can be described by a mixed Chern-Simons like term for the foliated fields. Similar Kramers-Wannier type duality defects for one-form symmetries are discussed in _e.g._[17, 18, 87].
* A new gapped interface in the X-cube model that generates an "electromagnetic duality": the magnetic lineons become the electric lineons with the same mobility: \[\text{Magnetic lineons}\quad\longleftrightarrow\quad\text{Electric lineons}\.\] (1.1) Here, the magnetic lineons are again composites of magnetic planons. Similarly, by taking composites of lineons, the magnetic fractons are exchanged with the electric fractons. 3 For such interfaces supported on a leaf of a foliation, the magnetic lineons on the interface with mobility parallel to the interface can be identified with the magnetic planons, which can also move perpendicularly to the interface, restricted to the interface. The interface fuses with itself to give an interface that generates the charge conjugation symmetry. We note that such an interface is similar to the electromagnetic duality interface in (2+1)d \(\mathbb{Z}_{N}\) gauge theory that exchanges the electric and magnetic charges. Footnote 3: We note that whether an excitation is composite or elementary depends on the description. For instance, the (2+1)d ordinary \(\mathbb{Z}_{2}\) gauge theory with a boson electric charge is equivalent to the theory with a fermion electric charge, where the latter fermion electric charge is the composite of the bosonic electric and magnetic charges in the former description.
The work is organized as follows. In Section 2, we review foliated field theories. In Section 3, we study the gapped boundaries of fracton topological orders such as the X-cube model. In Section 4, we study the gapped interfaces between the fracton topological order such as X-cube model, and conventional topological order such as the toric code. In Section 5, we discuss gapped interfaces in the X-cube model. In Section 6, we discuss future directions. In Appendix A, we review the relation between the X-cube lattice model used in the main text, and another presentation of the model used in the literature.
## 2 Review of Foliated Field Theory and Fracton Models
In this section, we will review properties of foliated gauge fields following [69, 70]. We will use the notation in [70].
Fracton models are described by excitations whose mobility is restricted, such as particles that can only move along a line (lineons), on a plane (planons), or cannot move at all (fractons) without creating additional excitations. For excitations created by non-local operators, this means that the line operators describing the worldlines of the excitations have support constrained to suitable directions. For instance, the line operator describing a lineon moving in the \(x\) direction is constrained to lie on the subspace spanned by the coordinates \((x,t)\). Similarly, the line operator describing a planon that moves on the \(x,y\) plane is constrained to lie on the subspace spanned by the coordinates \((x,y,t)\). On the other hand, the line operator describing an immobile fracton can only extend in the temporal direction. Such restrictions on the support of the operators can be naturally described by foliations. In the following, we will review the concept of foliation and the foliated fields describing excitations in fracton models.
### Foliation one-forms
A (codimension-one) foliation of the spacetime manifold is a decomposition of the manifold into submanifolds. The submanifolds are called the leaves of the foliation. We will focus on the case where all leaves have the same dimension, which is called a regular foliation. We will furthermore focus on the case where all leaves have codimension one, and we will denote the leaves by \(M_{\mathcal{L}}\). For an introduction to foliations on manifolds, see _e.g._[88].
In the discussion, we will focus on Euclidean spacetime in (3+1)d with coordinates \((t,x,y,z)\), where the leaves of foliation \(k\) are the slices of constant coordinate \(x^{k}\), with \(k=1,2,3\) labelling the foliations. (We will also denote the coordinates by \(x^{1}=x,x^{2}=y,x^{3}=z\).) Such foliations can also be described by foliation one-forms \(e^{k}=dx^{k}\), \(de^{k}=0\). The leaves are the Poincaré duals of the foliation one-forms.
### Foliated gauge fields
We will focus on Abelian foliated gauge fields. They are Abelian gauge fields with the following constraints (the notations \(A,B\) are swapped compared to [69]):
* We will use the notation \(A_{n}^{k}\) to denote an \(n\)-form Abelian gauge field whose bundle has the gauge transformation \[A_{n}^{k}\to A_{n}^{k}+d\lambda_{n-1}^{k}+\alpha_{n}^{k}\,\] (2.2) where \(\alpha_{n}^{k}e^{k}=0\), with the additional gauge transformation \(\alpha_{n}^{k}\to\alpha_{n}^{k}-d\alpha_{n-1}^{k}\), and \(\lambda_{n-1}^{k}\) transforms as \(\lambda_{n-1}^{k}\to\lambda_{n-1}^{k}+\alpha_{n-1}^{k}\). For instance, if the foliation one-form is \(e^{k}=dx^{1}\), then the components \(A_{1i_{2}i_{3}\cdots i_{n}}\) can be removed by the gauge transformation \(\alpha^{k}\).
* We will use the notation \(B_{n}^{k}\) to denote an \(n\)-form Abelian gauge field that obeys the condition \[B_{n}^{k}e^{k}=0\,.\] (2.3) It has the gauge transformation \[B_{n}^{k}\to B_{n}^{k}+d\lambda_{n-1}^{k}\,\] (2.4) where \(\lambda_{n-1}^{k}e^{k}=0\). For instance, if the foliation one-form is \(e^{k}=dx^{1}\), then the only non-zero components are \(B_{1i_{2}i_{3}\cdots i_{n}}\). For \(n=2\) in (3+1)d with coordinates \((t,x,y,z)\) and \(x^{1}:=x\), this means \(B_{xt},B_{xy},B_{xz}\) are the only non-vanishing components of the foliated gauge field \(B\).
We note that the gauge bundle of \(A_{n}^{k}\) can be obtained from the bundle of an ordinary Abelian gauge field by imposing the \(n\)-form gauge transformation whose gauge parameter is a gauge field of the second type, \(\alpha_{n}^{k}e^{k}=0\).
As discussed in _e.g._[70], the field \(B_{2}^{k}\) is also related to symmetric rank-two tensor gauge field, which also describes various "exotic" field theories for fracton models, _e.g._[89, 90].
On the lattice, we represent \(A_{n}^{k}\) as operators acting on the local Hilbert space on every \(n\)-simplex that does not span the direction \(x^{k}\), and \(B_{n}^{k}\) as operators acting on the local Hilbert space on every \(n\)-simplex that spans the direction \(x^{k}\). For instance, \(A_{1}^{k}\) are operators acting on the Hilbert space on the edges in all spatial directions except the edges along the \(x^{k}\) direction. Similarly, \(B_{2}^{k}\) are operators acting on the Hilbert space on the plaquettes in the \(x^{i}\)-\(x^{k}\) directions for every spatial direction \(i\neq k\); see _e.g._[70, 91] for examples of foliated gauge theories and the corresponding local commuting projector lattice Hamiltonian models.
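As a bookkeeping illustration of this support rule, a minimal Python sketch (the helper names are ours):

```python
from itertools import combinations

DIRS = ("x", "y", "z")

def A_support(k, n=1):
    """Directions of the n-simplices hosting A_n^k: those not spanning x^k."""
    return [c for c in combinations(DIRS, n) if k not in c]

def B_support(k, n=2):
    """Directions of the n-simplices hosting B_n^k: those spanning x^k."""
    return [c for c in combinations(DIRS, n) if k in c]

# For foliation k = x: A_1^x lives on y- and z-edges, while B_2^x lives
# on xy- and xz-plaquettes, matching the rule stated above.
print(A_support("x"))  # [('y',), ('z',)]
print(B_support("x"))  # [('x', 'y'), ('x', 'z')]
```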
### Restricted mobility from gauge invariance
We can define observables such as Wilson lines of the foliated gauge fields. When gauge invariance forces the support \(\Sigma\) of an observable to extend only in certain directions, the corresponding excitations have restricted mobility. For instance, the operator
\[e^{i\oint_{\Sigma_{n}}A^{k}_{n}} \tag{2.5}\]
can only be defined for an \(n\)-dimensional closed surface \(\Sigma_{n}\) on a leaf of foliation \(k\), since otherwise the operator would not be gauge invariant under the transformation \(\alpha^{k}_{n}\) that satisfies \(\alpha^{k}_{n}e^{k}=0\). This means that the corresponding excitations can only move on the leaf: they have the mobility of planons.
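Explicitly, under the gauge transformation the operator shifts as
\[\oint_{\Sigma_{n}}A^{k}_{n}\;\to\;\oint_{\Sigma_{n}}A^{k}_{n}+\oint_{\Sigma_{n}}\alpha^{k}_{n}\,,\qquad \alpha^{k}_{n}e^{k}=0\;\Rightarrow\;\alpha^{k}_{n}=e^{k}\wedge\beta^{k}_{n-1}\]
for some \((n-1)\)-form \(\beta^{k}_{n-1}\), and since \(e^{k}=dx^{k}\) pulls back to zero on a constant-\(x^{k}\) leaf, the shift \(\oint_{\Sigma_{n}}\alpha^{k}_{n}\) vanishes precisely when \(\Sigma_{n}\) lies on a leaf of foliation \(k\).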
The operator
\[e^{i\int_{\Sigma^{\prime}_{n}}B^{k}_{n}}\;, \tag{2.6}\]
is gauge invariant for \(n\)-dimensional submanifold \(\Sigma^{\prime}_{n}\) whose boundary lies on the leaf of foliation \(k\), since it is the same as the operator on the closed \(n\)-dimensional submanifold \(\widetilde{\Sigma^{\prime}_{n}}:=\Sigma^{\prime}_{n}\cup S_{\mathcal{L}^{k}}\) where the \(n\)-dimensional submanifold \(S_{\mathcal{L}^{k}}\) is on a leaf \(M_{\mathcal{L}_{k}}\) of foliation \(k\) such that \(\partial\Sigma^{\prime}_{n}=-\partial S_{\mathcal{L}^{k}}\):
\[e^{i\int_{\Sigma^{\prime}_{n}}B^{k}_{n}}=e^{i\int_{\widetilde{\Sigma^{\prime }_{n}}}B^{k}_{n}}e^{-i\int_{S_{\mathcal{L}^{k}}}B^{k}_{n}}=e^{i\int_{ \widetilde{\Sigma^{\prime}_{n}}}B^{k}_{n}}\;, \tag{2.7}\]
the equality comes from the constraint \(B^{k}_{n}e^{k}=0\), which implies \(e^{-i\int_{S_{\mathcal{L}^{k}}}B^{k}_{n}}=1\) since \(\int B^{k}_{n}\) vanishes for any \(n\)-dimensional submanifold on the leaf. We can also consider the case that \(\Sigma_{n}=[0,1]\times\gamma\) is a "ribbon" of a thickened loop \(\gamma\) that lies on two leaves of foliation \(k\). This corresponds to an excitation that can only move on the leaf: it has the mobility of a planon.
By combining multiple operators with different mobility, we can obtain operators with more restricted mobility.
### Example: foliated gauge theory for X-cube model
The X-cube model provides a basic example of a fracton model in (3+1)d. The model has three foliations \(e^{k}=dx^{k}\), where \((x^{k})=(x,y,z)\), and at low energies it can be described by the following foliated field theory [69, 70]:
\[\mathcal{L}_{XC}=\frac{N}{2\pi}\left[dba+b\left(\sum_{k}B^{k}\right)+dB^{k}A^ {k}\right], \tag{2.8}\]
where \(a,b\) are one-form and two-form gauge fields. The first term describes a 3d toric code, the third term describes three foliations of 2d toric codes, while the second term couples the 2d and 3d theories. The couplings are motivated by the condensation of string membranes as described in [91]. The fields have the gauge transformations
\[b\to b+d\lambda_{b},\quad B^{k}\to B^{k}+d\lambda^{k}\] \[A^{k}\to A^{k}-\lambda_{b}+d\phi+\alpha^{k},\quad a\to a-\sum_{k} \lambda^{k}+d\rho\;, \tag{2.9}\]
where \(\phi,\rho\) are periodic scalars, and \(\lambda^{k}\), \(\alpha^{k}\) and \(\lambda_{b}\) are one-forms that have their own gauge transformations \(\lambda^{k}\to\lambda^{k}+d\lambda_{0}\), \(\lambda_{b}\to\lambda_{b}+d\lambda_{0}^{\prime}\), \(\alpha^{k}\to\alpha^{k}+d\lambda_{0}^{k}\) with periodic scalars \(\lambda_{0},\lambda_{0}^{\prime}\) and \(\lambda_{0}^{k}=\lambda_{0}^{k}(x^{k})\). The equations of motion for \(a,b,A^{k},B^{k}\) are \(Ndb=0\), \(da+\sum_{k}B^{k}=0\), \(NdB^{k}=0\), and \(b+dA^{k}=0\) on the leaf of foliation \(k\). These equations hold away from the operator insertions of \(a,b,A^{k},B^{k}\), respectively. In the presence of insertions of these operators, the equations of motion are modified by delta function forms strictly localized at the insertions. All gauge fields have \(\mathbb{Z}_{N}\) holonomies [92].
#### 2.4.1 Observables
The theory has the following observables, which coincide with those of the X-cube model. They are generated by "electric" observables and "magnetic" observables, where the electric and magnetic terminologies are with respect to the gauge fields \(A^{k}\). In other words, the electric observables are given by Wilson lines of \(A^{k}\), while the magnetic observables are described by those of the conjugate variables \(B^{k}\). In the lattice model, the electric excitations correspond to violations of the vertex terms in [93] (or violations of the \(Z\), \(\tilde{Z}\)-terms in the equivalent lattice model shown in Figure 4), while the magnetic excitations correspond to violations of the cube terms in [93] (or violations of the \(X\), \(\tilde{X}\)-terms in Figure 4 in the equivalent lattice model). We will follow the same convention in the remainder of the paper.
The electric observables are:
* Electric lineon: \(e^{i\oint A^{i}-A^{j}}\) for \(i\neq j\). It can be supported on a line in the direction orthogonal to the \(x^{i}\) and \(x^{j}\) axes.
* Electric planon: \(e^{i\oint_{\gamma}A^{k}+i\int_{\Sigma}b}\), where \(\Sigma\) has boundary \(\gamma\), and we can take \(\Sigma\) to be a thin ribbon. The operator can be defined for \(\gamma\) on the leaf of foliation \(k\). For instance, if \(k=x\), then it describes a planon mobile in the \(y,z\) plane. See Figure 1 for illustration.
* The above electric planon can also be described as a dipole of electric lineons: \(\int b=\int dA^{k}\) when the surface is on the leaf of foliation \(k\). Thus a pair of electric lineons in the \(x^{i}\) direction separated in the \(x^{k}\) direction is mobile on the \(x^{i},x^{j}\) plane, where \(i,j,k\) are distinct spatial coordinates. See Figure 2.
* Electric fracton: \(e^{i\int n_{1}A^{1}+n_{2}A^{2}+n_{3}A^{3}}\) with integers \(n_{1},n_{2},n_{3}\neq 0\) mod \(N\) satisfying \(n_{1}+n_{2}+n_{3}=0\) mod \(N\), the latter condition is for the operator to be invariant under the gauge transformation \(A^{k}\to A^{k}-\lambda_{b}\) for \(k=1,2,3\).
The "magnetic" observables are:
* Magnetic planon: \(e^{i\int B^{k}}\), whose boundary can end on leaf of foliation \(k\). For example, \(e^{i\int B^{3}}\) describes a magnetic planon that can move in the \(xy\)-plane.
* Magnetic lineon: \(e^{i\int(n_{i}B^{i}+n_{j}B^{j})}\) for \(i\neq j\) and \(n_{i},n_{j}\neq 0\) mod \(N\): it can end on a line in the direction orthogonal to \(x^{i},x^{j}\); see Figure 3 for a more detailed explanation.
* Magnetic fracton: \(e^{i\int n_{1}B^{1}+n_{2}B^{2}+n_{3}B^{3}}\) with integers \(n_{1},n_{2},n_{3}\neq 0\) mod \(N\). It can only end along the temporal direction, and thus the boundary is immobile. We note that the special case \(n_{1}=n_{2}=n_{3}=1\) corresponds to \(e^{i\int a}\).
We remark that the support of the above operators can have corners or hinges. For instance, we can define \(e^{i\int_{\Sigma}(B^{1}+B^{2})}\) on the surface \(\Sigma:\{0<x<1,y=0\}\cup\{0<y<1,x=0\}\cup\{0<x<1,y=1\}\cup\{0<y<1,x=1\}\) at fixed \(t\). There are four hinges along the \(z\) direction. Since \(\int B^{1}\) vanishes on \(\{0<y<1,x=0\}\cup\{0<y<1,x=1\}\) and \(\int B^{2}\) vanishes on \(\{0<x<1,y=0\}\cup\{0<x<1,y=1\}\), the operator equals the product of \(e^{i\int B^{1}}\) and \(e^{i\int B^{2}}\), each on a pair of disjoint surfaces, and the four surfaces are connected by the four hinges at \((x,y)=(0,0),(0,1),(1,0),(1,1)\) along the \(z\) direction. Since we can deform the operators to smooth out the hinges using \(dB=0\), such that the surface becomes a circle in the \(x,y\) plane extended in the \(z\) direction, the hinges along the \(z\) direction do not correspond to non-trivial excitations.
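The mobility pattern of these magnetic observables, determined by which coefficients \(n_{k}\) are non-trivial mod \(N\), can be tabulated mechanically. A minimal Python sketch (the helper name is ours):

```python
def mobility(n, N=2):
    """Mobility of e^{i \int (n1 B^1 + n2 B^2 + n3 B^3)} for Z_N coefficients n."""
    nonzero = [k for k, nk in enumerate(n) if nk % N != 0]
    if not nonzero:
        return "trivial"
    if len(nonzero) == 1:
        leaf = ("yz", "xz", "xy")[nonzero[0]]
        return f"planon mobile in the {leaf}-plane"
    if len(nonzero) == 2:
        line = "xyz"[({0, 1, 2} - set(nonzero)).pop()]
        return f"lineon mobile along the {line}-direction"
    return "fracton (immobile)"

print(mobility((1, 0, 0)))  # magnetic planon in the yz-plane
print(mobility((1, 1, 0)))  # magnetic z-lineon
print(mobility((1, 1, 1)))  # magnetic fracton
```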
Figure 1: Illustration of electric planons \(e^{i\int b+dA^{1}}\). The \(A^{1}\) field lives on the boundary circles of the cylinder (living on the \(yz\)-plane in this example), while the \(b\) field lives on the side of the cylinder.
Figure 2: Dipole of electric \(x\)-lineons separated in the \(z\) direction is mobile on the \(x,y\) plane. Similarly, dipole of electric \(y\)-lineons separated in the \(z\) direction is mobile on the \(x,y\) plane.
#### 2.4.2 Braiding
The braiding of the excitations can be computed by the correlation function of the corresponding operators in the foliated field theory.
For instance, consider the magnetic \(y,z\)-planon \(e^{i\int_{\Sigma}B^{1}}\) whose boundary is \(\gamma=\partial\Sigma\), and the electric \(z\)-lineon \(e^{i\int_{\gamma^{\prime}}(A^{1}-A^{2})}\). Denote by \(\Sigma^{\prime}\) a surface such that \(\gamma^{\prime}=\partial\Sigma^{\prime}\). Integrating out \(A^{1}\) sets
\[B^{1}=-\frac{2\pi}{N}\delta(\Sigma^{\prime})^{\perp}\, \tag{2.10}\]
where \(\delta(\Sigma^{\prime})^{\perp}\) is the delta function two-form localized on \(\Sigma^{\prime}\). This gives the correlation function
\[e^{i\int_{\Sigma}B^{1}}=e^{-\frac{2\pi i}{N}\int_{\Sigma}\delta(\Sigma^{\prime })^{\perp}}=e^{-\frac{2\pi i}{N}\text{Link}(\Sigma,\gamma^{\prime})}\, \tag{2.11}\]
which computes the \(e^{-2\pi i/N}\) braiding between magnetic planon in the \(y,z\) plane and electric lineon in the \(z\) direction (the sign depends on the orientation). For instance, we can take \(\Sigma\) to be extended in \(x>0,y\) directions at fixed \(t=t_{1}\), and \(\gamma^{\prime}\) to be a line in the \(z\) direction piercing the membrane at a different time \(t=t_{2}\neq t_{1}\). Note that we cannot deform \(\gamma^{\prime}\) to pass through the boundary of \(\Sigma\), since it would require an operator that is not gauge invariant, and thus the linking is well-defined.
We also note that the braiding can be derived from the property that \(B^{k}\) is the canonical conjugate variable for \(A^{k}\): the canonical conjugate variable of \(A^{k}\) is
\[\Pi_{i}^{k}=\frac{N}{2\pi}\sum_{j_{1},j_{2}}\epsilon_{ij_{1}j_{2}}B^{k}_{j_{1} j_{2}}\, \tag{2.12}\]
where \(i,j_{1},j_{2}\) are spatial coordinate indices.
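The resulting braiding phase depends only on the linking number; a trivial numerical illustration of (2.11) (the function name is ours):

```python
import numpy as np

N = 2  # Z_N gauge group

def braiding_phase(linking_number, n=1):
    """The correlation function (2.11): e^{-2*pi*i*n*Link/N} for charge n."""
    return np.exp(-2j * np.pi * n * linking_number / N)

# An electric z-lineon worldline piercing the magnetic yz-planon
# membrane once gives a nontrivial phase at N = 2:
assert np.isclose(braiding_phase(1), -1)
# Linking N times (or carrying charge N) is trivial, as expected:
assert np.isclose(braiding_phase(N), 1)
```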
#### 2.4.3 Lattice model
The lattice Hamiltonian model for the foliated field theory (2.8) at \(N=2\) is given in [91], which we review in Figure 4. In Appendix A, we review the relation with the standard X-cube
Figure 3: Illustration of magnetic planons. (a) \(B^{1}\) can be integrated over \(dxdy\) and \(dxdz\) in space. The integration can end on the \(yz\)-plane (black faces) because of gauge invariance. It thus describes a quasiparticle that can move on the \(yz\)-plane, i.e., a planon. (b) Similarly, \(B^{2}\) describes a quasiparticle that can move in the \(xz\)-plane (red faces). (c) The combination \(B^{1}+B^{2}\) is only mobile along the \(z\)-direction (the intersection of the black and red faces).
model in [54]. The Hamiltonian terms are
\[H_{XC,\text{bulk}}=H_{\text{vertex}}+H_{\text{edge}}+H_{\text{plaquette}}+H_{ \text{cube}}. \tag{2.13}\]
The four types of terms are reviewed in Figure 4.
The correspondence between the gauge fields and the lattice degrees of freedom is as follows (we take \(N=2\), and \(X,Z\) are Pauli \(X\) and Pauli \(Z\) operators) [91]:
Figure 4: Hamiltonian terms deep inside the X-cube model. There are two qubits defined on each link, with different colors, and one qubit defined on each face of the cubic lattice, in yellow. First line: the left three panels (a-c) are called the vertex terms in the X-cube model; each term is a product of four Pauli \(Z^{i}\)'s acting on the colored qubits on the four links of foliation \(i\) surrounding a vertex. The right three panels (d-f) are plaquette terms, each term being a product of \(\tilde{X}\) acting on the face qubit and four Pauli \(X\)'s on the colored link qubits surrounding a plaquette. Second line: the leftmost panel (g) is the cube term, which is a product of \(\tilde{X}\)'s acting on the face qubits surrounding a cube. The three remaining panels (h-j) are called the edge terms, each of which is a product of two link Pauli \(Z\)'s as well as four yellow \(\tilde{Z}\)'s acting on the four faces surrounding the link. These Hamiltonian terms are taken from reference [91]. In the figures, we displaced the operators acting on the same edge slightly for better readability.
\[\begin{split}&e^{i\int dy\ A^{1}}\sim X_{y}^{yz},\quad e^{i\int dz\ A^{1}}\sim X_{z}^{yz},\quad e^{i\int dx\ A^{2}}\sim X_{x}^{xz}\,,\\ &e^{i\int dx\ A^{3}}\sim X_{x}^{xy},\quad e^{i\int dy\ A^{3}}\sim X_{y}^{xy},\quad e^{i\int dz\ A^{2}}\sim X_{z}^{xz},\\ &e^{i\int dxdy\ B^{1}}\sim Z_{z}^{yz},\quad e^{i\int dxdz\ B^{1}}\sim Z_{y}^{yz},\quad e^{i\int dydx\ B^{2}}\sim Z_{z}^{xz}\,,\\ &e^{i\int dzdx\ B^{3}}\sim Z_{y}^{xy},\quad e^{i\int dzdy\ B^{3}}\sim Z_{x}^{xy},\quad e^{i\int dydz\ B^{2}}\sim Z_{x}^{xz}\,,\\ &e^{i\int dx\ (A^{1}-A^{2})}\sim X_{x}^{xz},\quad e^{i\int dy\ (A^{1}-A^{2})}\sim X_{y}^{yz}\,,\\ &e^{i\int dx\ (A^{3}-A^{1})}\sim X_{x}^{xy},\quad e^{i\int dy\ (A^{3}-A^{1})}\sim X_{y}^{xy}X_{y}^{yz}\,,\\ &e^{i\int dx\ (A^{2}-A^{3})}\sim X_{x}^{xy}X_{x}^{xz},\quad e^{i\int dy\ (A^{2}-A^{3})}\sim X_{y}^{xy}\,,\end{split} \tag{2.14}\]
where the colors green, blue and red represent the foliations \(k=x,y,z\), respectively. The subscripts and superscripts in \(X_{i}^{jk}\) mean that it is an operator acting on the edge in the \(i\) direction, and on the degrees of freedom for the foliation whose leaf lies in the \(j,k\) directions. Since \(B^{k}\) is the conjugate momentum of \(A^{k}\) in the foliated field theory (2.8), we represent them as the Pauli \(Z,X\) operators, respectively, and we take the \(A^{k}\) operators to be supported on the one-simplices (_i.e._ edges) of the lattice, while the \(B^{k}\) operators are supported on the two-simplices (_i.e._ plaquettes) of the dual lattice, which are edges on the original lattice perpendicular to the corresponding plaquettes.
In Appendix A, we review the equivalence to the lattice model in [54]. In this work, we will refer to both of them as the X-cube model.
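As a sanity check of the commuting-projector property, the mutual commutation of all vertex and cube terms in the presentation of [54] can be verified mechanically. A minimal Python sketch on a small periodic cubic lattice with one qubit per edge (all helper names are ours, and this is the standard presentation rather than the two-qubits-per-link model of Figure 4):

```python
import itertools
import numpy as np

L = 3  # periodic L x L x L cubic lattice, one qubit per edge
DIRS = ("x", "y", "z")
n_qubits = 3 * L**3

def edge(x, y, z, d):
    """Index of the edge leaving site (x, y, z) in direction d."""
    return 3 * ((x % L) * L * L + (y % L) * L + (z % L)) + DIRS.index(d)

def vertex_term(x, y, z, plane):
    """Four Z's on the edges through (x, y, z) lying in the given plane."""
    zv = np.zeros(n_qubits, dtype=int)
    for d in plane:
        s = [0, 0, 0]
        s[DIRS.index(d)] = -1
        zv[edge(x, y, z, d)] ^= 1
        zv[edge(x + s[0], y + s[1], z + s[2], d)] ^= 1
    return np.zeros(n_qubits, dtype=int), zv  # (x-part, z-part)

def cube_term(x, y, z):
    """Twelve X's on the edges of the cube with corner (x, y, z)."""
    xv = np.zeros(n_qubits, dtype=int)
    for d in DIRS:
        others = [o for o in DIRS if o != d]
        for da, db in itertools.product((0, 1), repeat=2):
            p = [x, y, z]
            p[DIRS.index(others[0])] += da
            p[DIRS.index(others[1])] += db
            xv[edge(p[0], p[1], p[2], d)] ^= 1
    return xv, np.zeros(n_qubits, dtype=int)

def commute(a, b):
    """Symplectic test: Pauli strings commute iff the product is 0 mod 2."""
    return (a[0] @ b[1] + a[1] @ b[0]) % 2 == 0

planes = (("x", "y"), ("y", "z"), ("x", "z"))
verts = [vertex_term(x, y, z, p)
         for x, y, z in itertools.product(range(L), repeat=3) for p in planes]
cubes = [cube_term(x, y, z) for x, y, z in itertools.product(range(L), repeat=3)]
assert all(commute(v, c) for v in verts for c in cubes)
print(f"{len(verts)} vertex terms commute with {len(cubes)} cube terms")
```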
## 3 Gapped Boundaries of X-Cube Model
In this section, we consider gapped boundaries of the X-cube model (2.8) on the \(z=0\) plane, with the X-cube model at \(z<0\). We will start by reviewing the general formalism for obtaining gapped boundaries from the foliated field theory, and then present a complete classification of undecorated X-cube gapped boundaries (the meaning of decoration will be explained below), followed by discussions of decorated gapped boundaries. None of our examples of decorated gapped boundaries have been discussed before in the literature [72, 73].
Below we will use \(k=1,2,3\) and the planes \(yz,xz,xy\) interchangeably. The variation of the action (2.8) on the boundary is
\[\begin{split}\delta S_{XC}|&=\frac{N}{2\pi}\int_{z=0}\left(\sum_{k}B^{k}\delta A^{k}-a\delta b\right)\\ &=\frac{N}{2\pi}\int_{z=0}\left[B^{1}\delta(A^{1}-A^{3})+B^{2}\delta(A^{2}-A^{3})+(B^{3}+B^{1}+B^{2})\delta A^{3}-a\delta b\right].\end{split} \tag{3.15}\]
On the \(z=0\) boundary, the useful bulk equations of motion become:
\[da=B^{1}+B^{2},\quad b=dA^{3}. \tag{3.16}\]
Using (3.16), the last two terms in the square bracket of (3.15) cancel, leading to
\[\delta S_{XC}|=\frac{N}{2\pi}\int_{z=0}\left[B^{1}\delta(A^{1}-A^{3})+B^{2} \delta(A^{2}-A^{3})\right]. \tag{3.17}\]
Different undecorated gapped boundaries then correspond to different choices of boundary conditions for the gauge fields that ensure \(\delta S_{XC}|=0\). We exhaust all of them in subsection 3.1. We can further have decorated gapped boundaries, corresponding to adding Chern-Simons type terms on the boundary, which will be discussed in subsection 3.2.
We remark that since we cannot choose Dirichlet boundary conditions for canonically conjugate variables, the condensed excitations have trivial statistics. We are choosing Dirichlet boundary conditions for half of the conjugate fields (or, more generally, a middle dimensional subspace in the space of fields) such that the boundary variation vanishes. The condition of trivial statistics of condensed excitations on the gapped boundaries of fracton topological orders is also discussed in [72].
### Undecorated gapped boundaries
For convenience, we will focus on the case \(N=2\), such that plus and minus signs are equivalent. There are four inequivalent classes of gapped boundaries (the terminology of electric and magnetic excitations is explained near the end of section 2):
* \(B^{1}|=0,\ B^{2}|=0.\) The magnetic planons in the \(yz\)- and \(xz\)-planes are condensed. This corresponds to the smooth boundary found in [72, 73]. The mobility of the fracton is constrained by the gauge invariance of \(e^{i\int a}\) under the gauge transformations \(a\to a-\sum_{k}\lambda^{k}\) accompanying \(B^{k}\to B^{k}+d\lambda^{k}\); with the boundary condition \(B^{1}|=B^{2}|=0\), the remaining gauge transformations satisfy \(\lambda^{1}|=\lambda^{2}|=0\), and the fracton \(e^{i\int a}\) can be defined on any curve in the \(x,y,t\) directions and becomes fully mobile on the interface. This is also consistent with isolated fractons being able to absorb the condensed planons and move on the interface. Similar condensations, but in the bulk, have also been discussed in reference [79].
* \(B^{1}\) and \(B^{2}\) have free boundary conditions, and \((A^{1}-A^{3})|=0=(A^{2}-A^{3})|.\) The latter means that the electric lineons in the \(y\)- and \(x\)-directions are both condensed at the interface. This is equivalent to the rough boundary in the literature modulo auxiliary qubits, which we will elaborate on in subsection 3.1.1.
* \(B^{1}|=0\), \((A^{2}-A^{3})|=0.\) Magnetic planons in the \(yz\)-plane are condensed and electric lineons in the \(x\)-direction are condensed. This corresponds to the anisotropic \(me\)-boundary found in the literature [72, 73].
* \(B^{2}|=0\), \((A^{1}-A^{3})|=0.\) Magnetic planons in the \(xz\)-plane are condensed and electric lineons in the \(y\)-direction are condensed. This corresponds to the anisotropic \(em\)-boundary.
We note that \((B^{1}+B^{2})|=da|\) is automatically exact, so the operator \(e^{i\int(B^{1}+B^{2})}\) can end on the \(x,y,t\) plane in the bulk without imposing any boundary condition, and thus we do not need to include the boundary condition \((B^{1}+B^{2})|=0\) in the list. All these inequivalent gapped boundaries have been discussed in the previous literature [72, 73].
For general \(N\neq 2\), there can be more gapped boundaries such as \((B^{1}-B^{2})|=0\) and \((A^{1}+A^{2}-2A^{3})|=0\) corresponding to the condensation of magnetic \(z\)-lineons as well as electric \(z\)-lineons.
#### 3.1.1 Lattice model for undecorated gapped boundary
In this part, we present two lattice models corresponding to the second and the last items in the itemization above, using the general method outlined in the introduction section.
We first construct a lattice model for \((A^{1}-A^{3})|=0=(A^{2}-A^{3})|.\) The condensation Hamiltonian is
\[H_{\text{cond.}}=-\lambda\sum_{l\in\text{bdry}}\left(X_{l,x}^{3}+X_{l,y}^{1}X_{l,y}^{3}+X_{l,x}^{2}X_{l,x}^{3}+X_{l,y}^{3}\right) \tag{3.18}\]
The four terms in the parentheses correspond to \((A^{1}-A^{3})_{x}|=0\), \((A^{1}-A^{3})_{y}|=0\), \((A^{2}-A^{3})_{x}|=0\), \((A^{2}-A^{3})_{y}|=0\), respectively. Taking products of these stabilizers, we see that all the link qubits lying on the boundary are pinned to \(X_{l}=1\). We can therefore remove these link qubits. The face qubit lying on the boundary also satisfies \(\tilde{X}=1\), using the red plaquette operator in Figure 4(d). The remaining Hamiltonian terms live slightly below the interface and are shown in Figure 5.
One can now easily see from the figures that an electric \(z\)-lineon can be freely created or annihilated on the boundary by acting with \(X^{1}X^{2}\) on a vertical link right below the interface. So the electric \(z\)-lineons condense on the boundary, as in the rough gapped boundary discussed in the literature [72, 73].
Figure 5: Hamiltonian terms for the rough boundary. The black dots label vertices at the \(z=0\) boundary.
### Decorated gapped boundaries
Before moving on to discuss new gapped boundaries of the X-cube model, we will first briefly review how the decoration works for the 3d toric code model to gain some intuition.
#### 3.2.1 Review of decorated toric code boundary
The Lagrangian for (3+1)d toric code is
\[\mathcal{L}_{TC}=\frac{N}{2\pi}bda. \tag{3.19}\]
Variation of action on the boundary at \(z=0\) is
\[\delta S_{TC}|=\frac{N}{2\pi}\int_{z=0}b\delta a. \tag{3.20}\]
For \(N=2\), there are two undecorated gapped boundaries, see _e.g._[25, 26, 27, 28, 29, 30, 31], corresponding to
* Smooth boundary \(b|=0\), the magnetic fluxes are condensed on the boundary;
* Rough boundary \(a|=0\), charges are condensed on the boundary.
We are also free to add the term \(\frac{k}{4\pi}ada\) with integer \(k\) to the boundary Lagrangian, giving
\[\delta S^{\prime}_{TC}|=\frac{N}{2\pi}\int_{z=0}\big{(}b+\frac{k}{N}da\big{)} \delta a. \tag{3.21}\]
An additional gapped boundary can thus be obtained by imposing \((b+kda/N)|=0\), i.e. on the boundary, magnetic flux strings have charges attached to their endpoints. In the special case \(N=2,\ k=2\), the Chern-Simons term is bosonic, and this decorated boundary has semions on the boundary; it is discussed in _e.g._[26, 27, 28, 29, 30]. When \(N=2,\ k=1\), the boundary has fermions, and this is a boundary of condensed fermionic strings [94].
#### 3.2.2 Decorated boundaries in X-cube model
General terms we consider adding on the boundary are of the \(K\)-matrix Chern-Simons form
\[\mathcal{L}^{\prime}_{XC}=\frac{K_{IJ}}{4\pi}\mathcal{A}^{I}d\mathcal{A}^{J}, \quad\mathcal{A}=(a,A^{1}-A^{3},A^{2}-A^{3},A^{3}), \tag{3.22}\]
where \(K\) is a \(4\times 4\) symmetric integer matrix.4
Footnote 4: One might also consider adding terms of the form
\[\mathcal{L}^{\prime\prime}_{XC}=\frac{N}{4\pi}(W_{IJ}\mathcal{A}^{I}\mathcal{ B}^{J}),\quad\mathcal{B}=(b,B^{1},B^{2}). \tag{3.23}\]
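For reference, since \(K\) is symmetric, the boundary variation contributed by the decoration (3.22) is, up to a total derivative,
\[\delta\int_{z=0}\frac{K_{IJ}}{4\pi}\mathcal{A}^{I}d\mathcal{A}^{J}=\int_{z=0}\frac{K_{IJ}}{2\pi}\,d\mathcal{A}^{J}\,\delta\mathcal{A}^{I}\,,\]
which is the origin of the extra \(d\mathcal{A}\) terms appearing in the decorated boundary variations below.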
For example, if we choose \(K_{12}=K_{21}=-N\) to be the only nonzero entries of \(K\), _i.e._ the decoration is \(\frac{N}{2\pi}ad(A^{1}-A^{3})\)
then the new variation of the action on the boundary is
\[\delta S_{XC}|=\frac{N}{2\pi}\int_{z=0}\left[(B^{1}-da)\delta(A^{1}-A^{3})+B^{2}\delta(A^{2}-A^{3})-d(A^{1}-A^{3})\delta a\right]. \tag{3.24}\]
Using the equation of motion \((B^{1}+B^{2})|=da|\), it simplifies to
\[\delta S_{XC}|=\frac{N}{2\pi}\int_{z=0}\left[B^{2}\delta(A^{2}-A^{1})-d(A^{1}-A^{3})\delta a\right]. \tag{3.25}\]
This gives the following additional decorated gapped boundary:
\[B^{2}|=d(A^{1}-A^{3})|,\quad(A^{2}-A^{1})|=a|. \tag{3.26}\]
The first constraint says that a string of magnetic \(xz\)-planons, extended from the bulk to the boundary, is dressed by an electric \(y\)-lineon at its endpoint. The second constraint requires that electric lineons are identified with fractons on the interface.
Another simple example is to choose \(K_{22}=K_{33}=N\) and all other entries of \(K\) to be zero. In other words, the decoration is \(\frac{N}{4\pi}(A^{1}-A^{3})d(A^{1}-A^{3})+\frac{N}{4\pi}(A^{2}-A^{3})d(A^{2}-A^{3})\). The new variation of the action is
\[\delta S_{XC}|=\frac{N}{2\pi}\int_{z=0}\left[(B^{1}+dA^{1}-dA^{3})\delta(A^{1}-A^{3})+(B^{2}+dA^{2}-dA^{3})\delta(A^{2}-A^{3})\right]. \tag{3.27}\]
From here we can derive another new decorated gapped boundary:
\[B^{1}|+d(A^{1}-A^{3})|=0,\quad B^{2}|+d(A^{2}-A^{3})|=0. \tag{3.28}\]
The magnetic \(yz\)-planons condense with electric \(y\)-lineon, while the magnetic \(xz\)-planons condense with electric \(x\)-lineon. Using \(B^{3}|=0\), we can also write them as
\[(B^{1}-B^{3})|=-d(A^{1}-A^{3})|,\quad(B^{2}-B^{3})|=-d(A^{2}-A^{3}). \tag{3.29}\]
This implies that the magnetic \(y\)-lineon and \(x\)-lineon become the electric \(y\)-lineon and \(x\)-lineon on the boundary, respectively. Subtracting the above two equations, we find
\[(B^{1}-B^{2})|=-d(A^{1}-A^{2})|\, \tag{3.30}\]
which implies that the electric \(z\)-lineon is identified with a magnetic \(z\)-lineon on the boundary; at \(N=2\) the latter is a single fracton. Thus we can regard the boundary as a boundary with condensed "dyonic" lineons.
The discussion for general \(K\) matrices is straightforward and will be omitted.
## 4 Gapped Interfaces between X-Cube and Toric Code Models
In this section, we discuss the gapped interfaces between the X-cube model and the 3d toric code model. Let us use the subscripts \(L,R\) for the fields at \(z<0\) and \(z>0\), respectively, with the interface at \(z=0\). The effective field theories for the two phases, the X-cube model and the toric code model, on the two sides of the interface are
\[\begin{split}\mathcal{L}_{XC}&=\frac{N}{2\pi} \left[-b_{L}da_{L}+b_{L}\left(\sum_{k}B_{L}^{k}\right)+dB_{L}^{k}A_{L}^{k} \right],\\ \mathcal{L}_{TC}&=\frac{N}{2\pi}b_{R}da_{R}\;,\end{split} \tag{4.31}\]
where we used XC to indicate the X-cube model, and TC for the toric code model. We use the folding trick to convert the interface into a gapped boundary for the theory \(\mathcal{L}_{XC}-\mathcal{L}_{TC}\), where the sign is from reversing the orientation. Following a procedure similar to the one that led to equation (3.17), the total variation of the action on the boundary or folded interface is
\[\delta S_{XC|TC}|=\frac{N}{2\pi}\int_{z=0}\left[B_{L}^{1}\delta(A_{L}^{1}-A_{ L}^{3})+B_{L}^{2}\delta(A_{L}^{2}-A_{L}^{3})-b_{R}\delta a_{R}\right]\;. \tag{4.32}\]
Different undecorated gapped interfaces thus correspond to choices of boundary conditions for the gauge fields that guarantee \(\delta S_{XC|TC}|=0\). Similar to the section above, decorated gapped interfaces further correspond to adding \(K\)-matrix Chern-Simons type terms using the foliated \(A_{L}^{k}\) fields in X-cube and 1-form \(a_{R}\) gauge field in toric code.
### Undecorated gapped interfaces
Let us count the gapped interfaces without additional decorations. For simplicity, we will take \(N=2\).
There are \(2\times 4=8\) elementary types of undecorated interfaces that are simply tensor products of decoupled undecorated gapped boundaries of the toric code (TC) model (as reviewed in section 3.2.1) and undecorated gapped boundaries of the X-cube (XC) model (as discussed in section 3.1). Since they are straightforward tensor products of the results in the previous sections, we will not discuss them further.
There also exist six more nontrivial undecorated interfaces that are not tensor products of the gapped boundaries of toric code model and X-cube model:
* \(B_{L}^{1}|=b_{R}|\), \((A_{L}^{1}-A_{L}^{3})|=a_{R}|\), \(B_{L}^{2}|=0\). Magnetic \(yz\)-planons in XC become flux loops in TC, electric \(y\)-lineons in XC become charges in TC, magnetic \(xz\)-planons are condensed on the interface.
* \(B^{1}_{L}|=b_{R}|\), \((A^{1}_{L}-A^{3}_{L})|=a_{R}|\), \(A^{2}_{L}|=A^{3}_{L}|\). Magnetic \(yz\)-planons in XC become flux loops in TC, electric \(y\)-lineons in XC become charges in TC, electric \(x\)-lineons are condensed on the interface.
* \(B^{2}_{L}|=b_{R}|\), \((A^{2}_{L}-A^{3}_{L})|=a_{R}|\), \(B^{1}_{L}|=0\). This is related to the first gapped interface in this itemization by a 90-degree rotation along the \(z\) axis.
* \(B^{2}_{L}|=b_{R}|\), \((A^{2}_{L}-A^{3}_{L})|=a_{R}|\), \(A^{1}_{L}|=A^{3}_{L}|\). This is related to the second gapped interface in this itemization by a 90-degree rotation along the \(z\) axis.
* \((A^{1}_{L}-A^{3}_{L})|=a_{R}|=(A^{2}_{L}-A^{3}_{L})|\), \((B^{1}_{L}+B^{2}_{L})|=b_{R}|\). Electric \(x\)- and \(y\)-lineons in XC are both identified with charges in TC. The composite of the magnetic \(xz\)-planon and the magnetic \(yz\)-planon in XC, which is a magnetic \(z\)-lineon, becomes the flux loop in TC. In section 4.3.1, we will present a lattice model for this interface.
* \(B^{1}_{L}|=-B^{2}_{L}|=b_{R}|\), \((A^{1}_{L}-A^{2}_{L})|=a_{R}|\). The magnetic \(z\)-lineons condense in XC, _i.e._, magnetic \(xz\)- and \(yz\)-planons are identified on the interface and become a mobile excitation on the 2d interface. Furthermore, the magnetic planons are also identified with magnetic fluxes in TC. The electric \(z\)-lineons in XC are identified with electric charges in TC.
### Decorated gapped interfaces
In this subsection we decorate the gapped interfaces by adding \(K\)-matrix Chern-Simons type terms similar to the discussions in section 3.2, but with an updated list of gauge fields,
\[\mathcal{L}^{\prime}_{XC|TC}=\frac{K_{IJ}}{4\pi}\mathcal{A}^{I}d\mathcal{A}^{ J}\,\quad\mathcal{A}=(a_{L},A^{1}_{L}-A^{3}_{L},A^{2}_{L}-A^{3}_{L},A^{3}_{L},a_{R} ). \tag{4.33}\]
As an example, let us consider the case where \(K_{25}=K_{52}=-N\) are the only nonzero entries of the \(K\) matrix, _i.e._ decorating the interface with the action \(\frac{N}{2\pi}\int(A^{1}_{L}-A^{3}_{L})da_{R}\). The variation of the total action produces the interface terms
\[\delta S^{\prime}_{XC|TC}|=\frac{N}{2\pi}\int_{z=0}\big{[}(B^{1}_{L}-da_{R})\delta(A^{1}_{L}-A^{3}_{L})+B^{2}_{L}\delta(A^{2}_{L}-A^{3}_{L})-\big{(}b_{R}-d(A^{3}_{L}-A^{1}_{L})\big{)}\delta a_{R}\big{]}. \tag{4.34}\]
We will impose boundary conditions on the fields such that the variation vanishes. For \(N=2\), this produces the following gapped interfaces:
* \(B^{1}_{L}|=da_{R}|\), which means a magnetic \(yz\)-planon in XC on the interface is decorated by electric charges in TC; \(B^{2}_{L}|=0\) means the magnetic \(xz\)-planons in XC are condensed; and \(b_{R}|=d(A^{3}_{L}-A^{1}_{L})|\), which means magnetic flux loops in TC are dressed with electric \(y\)-lineons in XC on the interface.
* \(B^{1}_{L}|=da_{R}|\), \(A^{2}_{L}|=A^{3}_{L}|\), \(b_{R}|=d(A^{3}_{L}-A^{1}_{L})|\). Similar to the item above; the only difference is that the electric \(x\)-lineons in the X-cube model condense instead of the magnetic \(xz\)-planons.
* \(A_{L}^{1}|=A_{L}^{2}|\), which means the electric \(z\)-lineons are condensed in XC; \(a_{L}|=a_{R}|\Rightarrow(B_{L}^{1}+B_{L}^{2})|=da_{R}|\), which means the magnetic fracton in XC is identified with the electric charge in TC; \(b_{R}|=d(A_{L}^{3}-A_{L}^{1})|\), so magnetic flux loops in TC are dressed with electric \(y\)-lineons in XC on the interface. When passing through this interface, excitations in the electric/magnetic sector of XC become the magnetic/electric excitations in TC and vice versa. We will therefore call this interesting interface the \(em\)-exchange interface and present a lattice model for it in section 4.3.2.
* \(B_{L}^{1}|=da_{R}|\), such that a magnetic \(yz\)-planon in XC is decorated by electric charges in TC; \((A_{L}^{2}-A_{L}^{3})|=a_{R}|\), which means electric \(x\)-lineons in XC are identified with charges in TC; \(B_{L}^{2}|+d(A_{L}^{3}-A_{L}^{1})|=b_{R}|\), which says the magnetic flux loops in TC become a magnetic \(xz\)-planon dressed with an electric \(y\)-lineon in XC.
The above discussion can be straightforwardly generalized to other decorations.
We remark that since the X-cube model can be obtained from the toric code model by gauging the subsystem one-form symmetry with gauge fields \(B^{k}\)[86], there is a Kramers-Wannier duality type gapped interface given by gauging the subsystem symmetry on half space:
\[B^{1}|=0,\quad B^{2}|=0,\quad a_{L}|=a_{R}|,\quad b_{L}|=b_{R}|. \tag{4.35}\]
This interface can be obtained from a decoration by adding the "mixed Chern-Simons term" with \(K_{45}=K_{54}=N\). The interface variation of the total action is now
\[\delta S^{\prime}_{XC|TC}|=\frac{N}{2\pi}\int_{z=0}\left[(a_{R}-a_{L})\delta dA_{L}^{3}+(dA_{L}^{3}-b_{R})\delta a_{R}+B_{L}^{1}\delta A_{L}^{1}+B_{L}^{2}\delta A_{L}^{2}\right]\,, \tag{4.36}\]
where we have used the equations of motion (3.16) in XC. The vanishing of the variation is ensured by the boundary condition (4.35).
### Lattice models
Let us give examples of lattice models for the undecorated and decorated interfaces.
#### 4.3.1 Example: undecorated interface
The first lattice model corresponds to the following undecorated interface found in the foliated field theory:
\[(A_{L}^{1}-A_{L}^{3})|=a_{R}|=(A_{L}^{2}-A_{L}^{3})|,\quad(B_{L}^{1}+B_{L}^{2 })|=b_{R}|. \tag{4.37}\]
The lattice Hamiltonian for the bulk (3+1)d toric code can be defined as in [82, 83], where the qubits live on the links.
Using the field-lattice correspondence discussed in equation (2.14) and after some simplifications, the terms that impose the constraints in (4.37) are:
\[H_{\text{cond.}}=-\sum_{l\in\text{bdy.}}\big{(}\,\bar{Z}_{l}X_{l}^{3}+\cdots\big{)}\,, \tag{4.38}\]
where the omitted terms similarly pair X-cube and toric code Pauli operators so as to impose the remaining identifications in (4.37). The string operator shown in Figure 7 moves a magnetic \(z\)-lineon of the X-cube model across the gapped interface and converts it into a small magnetic flux loop of the toric code model, without creating additional excitations on the interface. In other words, a magnetic flux loop of the toric code model moving past the interface becomes the magnetic \(z\)-lineon in the X-cube model.
#### 4.3.2 Example: decorated interface
We will present a lattice model for the gapped interface
\[a_{L}|=a_{R}|,\quad A_{L}^{1}|=A_{L}^{2}|,\quad b_{R}|=d(A_{L}^{3}-A_{L}^{1})|. \tag{4.39}\]
Again we will choose the lattice model in [91] deep inside the X-cube phase. For the toric code phase, we will choose the degrees of freedom to live on the faces instead of the links, with the corresponding Hamiltonian terms reviewed in Figure 8.
Imposing the three constraints in (4.39) corresponds to adding the terms shown in Figure 9 on the interface. The condensation terms in Figure 9 can be combined and simplified. The remaining Hamiltonian terms from the two phases near the interface also need to be modified in order to make sure all the terms mutually commute. The final results, including the simplified condensation terms, are shown in Figure 10, with the degrees of freedom in the two phases naturally truncated beyond the interface.
Physical interpretation: Let us investigate what becomes of the bulk excitations when they move across the interface.
* Consider an electric charge of the toric code model moving through the interface, by applying the operator \(\bar{Z}\) on a string of \(x,y\) plaquettes stacked in the \(z\) direction. The
Figure 7: The string operator that moves the magnetic \(z\)-lineon in the X-cube model across the gapped interface and converts it into a small magnetic flux loop of the toric code model. The two black dots label a single vertex living on the interface and are separated only for visualization.
operator creates excitations in the toric code model bulk, and also an excitation on the interface. On the interface, the excitation can be replaced by that created by \(\tilde{Z}\), and thus the electric charge in the toric code model becomes the fracton in the X-cube model. This is consistent with the condition \(a_{L}|=a_{R}|\).
* Consider an electric \(z\)-lineon in the X-cube model moving toward the interface. Such motion can be implemented by applying the string operator given by the product of \(X^{1}X^{2}\) on the \(z\)-edges along the vertical string. Such a string operator creates an excitation in the X-cube model, but does not create any excitation on the interface, since it commutes with the Hamiltonian terms near the interface. Thus the electric \(z\)-lineon moves to the interface and disappears. This is consistent with the condition \((A_{L}^{1}-A_{L}^{2})|=0\).
* Consider creating an electric \(x\)-lineon on the interface by the operator \(X^{2}X^{3}\) on a link along the \(x\)-direction. The excitation, which violates the term in Figure 10(d), can be
Figure 8: The toric code Hamiltonian with degrees of freedom defined on the faces. The cube term in the leftmost panel (a) corresponds to the usual vertex term, while the other windmill-like panels (b-d) correspond to the usual plaquette terms. For later convenience, we will denote the Pauli matrices that act on toric code qubits as \(\bar{X}\) and \(\bar{Z}\), to distinguish from the Pauli matrices that act on the X-cube qubits.
Figure 9: (a) On each vertical plaquette near the interface, a product of the \(\bar{Z}\) and \(\tilde{Z}\) on the face qubits in TC and XC, respectively. (b) A product of the windmill in Figure 8(c) and the two vertical blue and green \(Z\)'s. This term imposes the \(a_{R}|=a_{L}|\) constraint. (c) A blue/green Pauli \(X\) acting on a single horizontal link on the interface. These terms impose the \(A_{L}^{1}|=A_{L}^{2}|\) constraint. (d) These terms impose the \(b_{R}|=d(A_{L}^{1}-A_{L}^{3})|\) and \(b_{R}|=d(A_{L}^{2}-A_{L}^{3})|\) constraints. In the figures, we displaced the operators acting on the same edge or the same plaquette slightly for better readability.
replaced by that created by \(\bar{X}\), and thus the electric lineon on the interface becomes the magnetic flux loop in the toric code model.
To summarize, electric charges in the toric code become magnetic fractons in the X-cube model, while electric lineons in the X-cube model become magnetic fluxes in the toric code. The interface exchanges the electric and magnetic excitations.
## 5 Gapped Interfaces in X-Cube Model
In this section, we investigate gapped interfaces in the X-cube model. We will give a classification of the undecorated gapped interfaces, and give an example of a decorated gapped interface, which exchanges the electric and magnetic excitations, similar to the electromagnetic duality in (2+1)d \(\mathbb{Z}_{N}\) gauge theory.
Figure 10: The remaining Hamiltonian terms on the interface. First line: (a) The product of the cube/vertex term in the toric code model and the cube term in the X-cube model. The color is a combination of the pink color in TC and yellow in XC. (b-d) The windmill/plaquette terms are modified. (b) is a product of the original toric code windmill term in Figure 8(b) with the edge term in Figure 4(h), but with the green \(Z\) removed. (c) is a product of the windmill in Figure 8(c) and the two vertical blue and green \(Z\)'s. This term imposes the \(a_{R}|=a_{L}|\) constraint. (d) is similar to (b) and is a product of the toric code windmill in Figure 8(d) and the X-cube edge term in Figure 4(j), but with the blue \(Z\) removed. Second line: (e) The red vertex term in the X-cube model remains the same. (f) The plaquette term in the X-cube model as in Figure 4(d) remains the same. (g-h) The other two plaquette terms are modified from the original Figure 4(e-f) by multiplying by the toric code face Pauli matrices. (i) The cube term in the X-cube model remains the same. In the figures, we displaced the operators acting on the same edge or the same plaquette slightly for better readability.
### Undecorated interfaces
Following procedures similar to those above, we arrive at the following variation of the action on the folded interface:
\[\begin{split}\delta S_{XC|XC}|=\frac{N}{2\pi}\int_{z=0}\left[B_{L}^{1}\delta(A_{L}^{1}-A_{L}^{3})+B_{L}^{2}\delta(A_{L}^{2}-A_{L}^{3})\right.\\ \left.-\,B_{R}^{1}\delta(A_{R}^{1}-A_{R}^{3})-B_{R}^{2}\delta(A_{R}^{2}-A_{R}^{3})\right]\,.\end{split} \tag{5.40}\]
Apart from the tensor products of decoupled boundaries of X-cube on the two sides (there are \(4\times 4=16\) of them for \(N=2\) based on the discussions in section 3.1), we can also obtain the following undecorated interfaces for \(N=2\):
1. Transparent gapped interface, \(B_{L}^{k}|=B_{R}^{k}|\), \((A_{L}^{1}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|\), \((A_{L}^{2}-A_{L}^{3})|=(A_{R}^{2}-A_{R}^{3})|\). Excitations just directly penetrate the interface without transforming.
2. The gapped interface obtained from the transparent interface by 90 degree relative rotation along the \(z\) axis between the two sides of the interface, where \(B_{L}^{1}|=B_{R}^{2}|\), \(B_{L}^{2}|=B_{R}^{1}|\), \((A_{L}^{1}-A_{L}^{3})|=(A_{R}^{2}-A_{R}^{3})|\), \((A_{L}^{2}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|\).
3. The gapped interfaces obtained from the interfaces (i) and (ii) by condensing magnetic planons in a single direction on each side of the interface. There are four of them: * \(B_{L}^{1}|=0\), \(B_{R}^{1}|=0\), \(B_{L}^{2}|=B_{R}^{2}|\), \((A_{L}^{2}-A_{L}^{3})|=(A_{R}^{2}-A_{R}^{3})|\) * \(B_{L}^{2}|=0\), \(B_{R}^{2}|=0\), \(B_{L}^{1}|=B_{R}^{1}|\), \((A_{L}^{1}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|\). * A rotated choice, \(B_{L}^{1}|=0\), \(B_{R}^{2}|=0\), \(B_{L}^{2}|=B_{R}^{1}|\), \((A_{L}^{2}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|\). * Another rotated choice, \(B_{L}^{2}|=0\), \(B_{R}^{1}|=0\), \(B_{L}^{1}|=B_{R}^{2}|\), \((A_{L}^{1}-A_{L}^{3})|=(A_{R}^{2}-A_{R}^{3})|\).
4. Condense one type of magnetic planon on one side, while the other type of magnetic planon on the same side transforms into a magnetic \(z\)-lineon on the other side. There are again four of them: * \(B_{L}^{1}|=0\), \((A_{L}^{2}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|=(A_{R}^{2}-A_{R}^{3})|\), \(B_{L}^{2}|=(B_{R}^{1}+B_{R}^{2})|\). * \(B_{L}^{2}|=0\), \((A_{L}^{1}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|=(A_{R}^{2}-A_{R}^{3})|\), \(B_{L}^{1}|=(B_{R}^{1}+B_{R}^{2})|\). * \(B_{R}^{1}|=0\), \((A_{L}^{1}-A_{L}^{3})|=(A_{L}^{2}-A_{L}^{3})|=(A_{R}^{2}-A_{R}^{3})|\), \(B_{R}^{2}|=(B_{L}^{1}+B_{L}^{2})|\). * \(B_{R}^{2}|=0\), \((A_{L}^{1}-A_{L}^{3})|=(A_{L}^{2}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|\), \(B_{R}^{1}|=(B_{L}^{1}+B_{L}^{2})|\).
5. Condense one type of electric lineons on one side, and magnetic planons on the same side living on the plane normal to the direction of the condensed lineon transform into magnetic \(z\)-lineons on the other side. There are again four of them: * \((A_{L}^{1}-A_{L}^{3})|=0\), \((A_{L}^{2}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|=(A_{R}^{2}-A_{R}^{3})|\), \(B_{L}^{2}|=(B_{R}^{1}+B_{R}^{2})|\). * \((A_{L}^{2}-A_{L}^{3})|=0\), \((A_{L}^{1}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|=(A_{R}^{2}-A_{R}^{3})|\), \(B_{L}^{1}|=(B_{R}^{1}+B_{R}^{2})|\). * \((A_{R}^{1}-A_{R}^{3})|=0\), \((A_{L}^{1}-A_{L}^{3})|=(A_{L}^{2}-A_{L}^{3})|=(A_{R}^{2}-A_{R}^{3})|\), \(B_{R}^{2}|=(B_{L}^{1}+B_{L}^{2})|\). * \((A_{R}^{2}-A_{R}^{3})|=0\), \((A_{L}^{1}-A_{L}^{3})|=(A_{L}^{2}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|\), \(B_{R}^{1}|=(B_{L}^{1}+B_{L}^{2})|\).
6. The gapped interface obtained from the transparent interface (i) by condensing the electric \(z\)-lineons on each side of the interface. All the lineons are identified, \(a_{L}|=a_{R}|\), \(A_{L}^{1}|=A_{L}^{2}|\), \(A_{R}^{1}|=A_{R}^{2}|\), \((A_{L}^{1}-A_{L}^{3})|=(A_{R}^{1}-A_{R}^{3})|\).
Altogether we get \(16+15=31\) types of undecorated domain walls.
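As a trivial cross-check of the counting above, grouping the interfaces as in items (i)-(vi):

```python
tensor_products = 4 * 4           # decoupled XC boundaries on the two sides
nontrivial = [1, 1, 4, 4, 4, 1]   # numbers of interfaces in items (i)-(vi)
assert tensor_products + sum(nontrivial) == 31
```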
### Example of decorated interface: electromagnetic duality interface
Below we will present one example of a decorated interface; other decorations can be analyzed in a way similar to the discussions in the previous sections. We can add the following terms to the interface:
\[\mathcal{L}_{\text{interface}}[A_{L}^{k},A_{R}^{k}]=\frac{N}{2\pi}\big{[}(A_{ L}^{1}-A_{L}^{3})d(A_{R}^{1}-A_{R}^{3})+(A_{L}^{2}-A_{L}^{3})d(A_{R}^{2}-A_{R}^{ 3})\big{]}. \tag{5.41}\]
The interface variation of the action is then
\[\begin{split}\delta S_{XC|XC}|=\frac{N}{2\pi}\int_{z=0}\big{[}&\big{(}B_{L}^{1}+d\left(A_{R}^{1}-A_{R}^{3}\right)\big{)}\,\delta\left(A_{L}^{1}-A_{L}^{3}\right)+\left(B_{L}^{2}+d\left(A_{R}^{2}-A_{R}^{3}\right)\right)\delta\left(A_{L}^{2}-A_{L}^{3}\right)\\ &+\left(-B_{R}^{1}+d\left(A_{L}^{1}-A_{L}^{3}\right)\right)\delta\left(A_{R}^{1}-A_{R}^{3}\right)+\left(-B_{R}^{2}+d\left(A_{L}^{2}-A_{L}^{3}\right)\right)\delta\left(A_{R}^{2}-A_{R}^{3}\right)\big{]}\;.\end{split} \tag{5.42}\]
Thus we can impose the boundary condition
\[B_{L}^{1}|=-d\left(A_{R}^{1}-A_{R}^{3}\right)|,\quad B_{L}^{2}|= -d\left(A_{R}^{2}-A_{R}^{3}\right)|\,\] \[B_{R}^{1}|=d\left(A_{L}^{1}-A_{L}^{3}\right)|,\quad B_{R}^{2}|= d\left(A_{L}^{2}-A_{L}^{3}\right)|. \tag{5.43}\]
We can also use \(B^{3}|=0\) to obtain
\[\left(B_{L}^{1}-B_{L}^{3}\right)|=-d\left(A_{R}^{1}-A_{R}^{3} \right)|,\quad\left(B_{L}^{2}-B_{L}^{3}\right)|=-d\left(A_{R}^{2}-A_{R}^{3} \right)|\,\] \[\left(B_{R}^{1}-B_{R}^{3}\right)|=d\left(A_{L}^{1}-A_{L}^{3} \right)|,\quad\left(B_{R}^{2}-B_{R}^{3}\right)|=d\left(A_{L}^{2}-A_{L}^{3} \right)|. \tag{5.44}\]
In other words, the magnetic \(y\)-lineon becomes the electric \(y\)-lineon, and the magnetic \(x\)-lineon becomes the electric \(x\)-lineon, across the interface. By taking combinations of the above equations,
\[\left(B_{L}^{1}-B_{L}^{2}\right)|=d\left(A_{R}^{1}-A_{R}^{2}\right),\quad \left(B_{R}^{1}-B_{R}^{2}\right)|=d\left(A_{L}^{1}-A_{L}^{2}\right). \tag{5.45}\]
In other words, the magnetic \(z\)-lineon becomes the electric \(z\)-lineon across the interface.
Similarly, the fractons map as
\[\begin{split}&\left(n_{1}B_{L}^{1}+n_{2}B_{L}^{2}+n_{3}B_{L}^{3}\right)|=-d\left(n_{1}A_{R}^{1}+n_{2}A_{R}^{2}-(n_{1}+n_{2})A_{R}^{3}\right)\,,\\ &\left(n_{1}B_{R}^{1}+n_{2}B_{R}^{2}+n_{3}B_{R}^{3}\right)|=d\left(n_{1}A_{L}^{1}+n_{2}A_{L}^{2}-(n_{1}+n_{2})A_{L}^{3}\right)\,,\end{split} \tag{5.46}\]
where \(n_{1},n_{2},n_{3}\neq 0\) mod \(N\). (At \(N=2\) there is no nontrivial electric fracton.) We note that \(e^{i\int a}\) corresponds to \(n_{1}=n_{2}=n_{3}=1\). Thus the magnetic fractons are exchanged with the electric fractons.
We remark that the electromagnetic duality in (2+1)d \(\mathbb{Z}_{N}\) gauge theory can also be described by similar action on the interface [85]. In such case, for \(\mathbb{Z}_{N}\) gauge fields \(a_{L},a_{R}\) on the left and right of the interface, the 2d interface action is the generator of \(H^{2}(\mathbb{Z}_{N}\times\mathbb{Z}_{N},U(1))\) for the two \(\mathbb{Z}_{N}\) gauge fields [85].
We also note that while the \(\mathbb{Z}_{N}\) toric code gauge theory in (2+1)d has an electromagnetic duality, the \(\mathbb{Z}_{N}\) toric code gauge theory in (3+1)d does not (for instance, the electric and magnetic excitations have different dimensions). Here, we observe an electromagnetic duality in the \(\mathbb{Z}_{N}\) X-cube model in (3+1)d. In fact, physics in many fracton models often has similar counterparts in non-fracton models in spacetime of one lower dimension, as discussed in _e.g._[89, 95], and here we present another instance.
#### 5.2.1 Fusion rule
Let us compute the fusion rule of this gapped interface in the X-cube model following the method in [17, 18]. Consider two gapped interfaces dividing the spacetime into the left, the middle and the right parts, with fields labelled by \(L,M,R\), respectively. Fusing the interfaces by shrinking the middle region gives
\[\int\mathcal{L}_{\text{interface}}[A^{k}_{L},A^{k}_{M}]+\mathcal{L}_{\text{ interface}}[A^{k}_{M},A^{k}_{R}]=\frac{N}{2\pi}\int\left(\left(A^{13}_{L}+A^{13}_{R} \right)dA^{13}_{M}+\left(A^{23}_{L}+A^{23}_{R}\right)dA^{23}_{M}\right)\, \tag{5.47}\]
where we used the shorthand notation \(A^{13}:=A^{1}-A^{3}\), \(A^{23}:=A^{2}-A^{3}\). After shrinking the middle region, the fields with label \(M\) only live on the interface. Integrating out \(A^{1}_{M},A^{2}_{M}\) imposes \(A^{13}_{L}=-A^{13}_{R}\), \(A^{23}_{L}=-A^{23}_{R}\). Thus we find that fusing two electromagnetic duality interfaces gives the charge conjugation symmetry interface.
## 6 Outlook
In this work, we discuss the gapped interfaces of fracton models using foliated field theories and lattice models. Let us mention a few future directions that would be interesting to explore:
* It is straightforward to generalize the discussion to general Abelian gauge groups (and non-Abelian gauge groups that are extensions involving Abelian foliated gauge fields and non-Abelian ordinary gauge fields as in [70, 96]), and other spacetime dimensions.
* General geometry of the interface. In our discussion, we take the interface to be a particular leaf of the foliation. It can be useful to consider interfaces with general shapes, as in the examples of [72]. For instance, is there a geometry that cannot host any gapped interface for the model?
* It would be useful to have a complete classification of the actions for foliated fields decorated on the gapped interfaces. For instance, while we consider Chern-Simons-like actions decorating the interface, in higher dimensions there can be "beyond group cohomology" actions [97]. There are also other possible decorations, such as including foliation one-forms as background fields in the interface actions. It is also interesting to understand the relation with other constructions such as in [98].
* Derive the fusion rules and associators of the gapped interfaces in general fracton topological order. For instance, given a description on the interface of the same theory, we can derive the fusion rule as in [17]. The fusion of gapped interfaces with the gapped boundaries also produces other gapped boundaries, and it would be interesting to study the corresponding modules. More generally, what replaces the (higher) fusion category in the fracton topological order?
* It would be interesting to know whether there is a tunneling matrix [44] representation for each gapped interface and what the corresponding consistency relations are. For topological orders, the determination of tunneling matrices requires knowledge of the fusion rules of interfaces as well as the self and mutual statistics of excitations. However, because of the restricted mobility, the statistics can be subtle [99, 100, 101].
* Is there a dynamical constraint from the gapped interface? For instance, the X-cube model can be obtained by gauging subsystem symmetry in the toric code model, and the gapped interface can be viewed as implementing a Kramers-Wannier type duality. Similar duality interfaces from gauging a symmetry can be present in more general quantum systems, as discussed in _e.g._[17, 18, 87], and they impose non-trivial constraints on the dynamics. It would be interesting to study the dynamical consequences of such duality interfaces for gauging subsystem symmetry.
## Acknowledgement
We thank Xie Chen, Tyler Ellison, Ho-Tat Lam, Dan Sehayek, Shu-Heng Shao, Kevin Slagle, and Nathanan Tantivasadakarn for helpful discussions. We thank Kevin Slagle, Nathanan Tantivasadakarn and Dominic J. Williamson for comments on the draft. Z.-X. L. thanks Yu-An Chen for discussions on interfaces between two 3d toric codes. The work of P.-S.H. is supported by Simons Collaboration of Global Categorical Symmetries. Z.-X. L. is supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, Z.-X. L.).
## Appendix A Presentations of X-Cube Model on Lattice
In this appendix, we review the equivalence between the ground state subspace of the X-cube models in [68] used in the main text, and the X-cube model defined in [54].
Let us consider the subspace of the Hilbert space of the model [68], \(H=H_{\rm vertex}+H_{\rm edge}+H_{\rm plaquette}+H_{\rm cube}\), in which each Hamiltonian term in \(H_{\rm edge}\) and \(H_{\rm plaquette}\) is minimized. The ground states of the model, which minimize all the Hamiltonian terms, lie in this subspace. The minimization conditions can be viewed as "gauge constraints" on the Hilbert space that we enforce exactly to obtain the subspace.
1. Using the terms in \(H_{\rm plaquette}\), we can "gauge fix" the degrees of freedom on the plaquettes to be trivial, \(\tilde{Z}=1\), where we replace \(\tilde{X}\) on the plaquette with the product of \(X\) on the four edges surrounding the plaquette, with the color label specified by the plane in which the plaquette lies.
2. Since \(\tilde{Z}=1\), the terms in \(H_{\rm edge}\) reduce to the product of two \(Z\) operators for two colors on the same edge. Minimizing such terms identifies the degrees of freedom for all three colors as the same degree of freedom: \(Z_{\rm red}\otimes 1_{\rm blue}=1_{\rm red}\otimes Z_{\rm blue}\) on the \(x\)-edges, \(Z_{\rm red}\otimes 1_{\rm darkgreen}=1_{\rm red}\otimes Z_{\rm darkgreen}\) on the \(y\)-edges, and \(Z_{\rm blue}\otimes 1_{\rm darkgreen}=1_{\rm blue}\otimes Z_{\rm darkgreen}\) on the \(z\)-edges. Let us denote these operators by \(Z\) on the edges of different directions. Similarly, \(X_{\rm red}\otimes X_{\rm blue}=X\) on the \(x\)-edges, \(X_{\rm red}\otimes X_{\rm darkgreen}=X\) on the \(y\)-edges, and \(X_{\rm blue}\otimes X_{\rm darkgreen}=X\) on the \(z\)-edges.
3. The cube terms in \(H_{\rm cube}\) become products of \(X\) on the edges of the cube. The vertex terms in \(H_{\rm vertex}\) are products of \(X\) on the four edges of a cross in different directions. This recovers the Hamiltonian of the X-cube model in [54].
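Step 2 above can be made concrete in a two-qubit toy example: projecting onto the subspace where an edge term of the form \(Z\otimes Z\) is minimized identifies the two color degrees of freedom, while \(X\otimes X\) survives as the effective \(X\). The sketch below is a hypothetical miniature of this mechanism, not the full lattice model.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Projector onto the subspace where the two-color edge term Z (x) Z
# takes its minimal-energy eigenvalue +1 (the "gauge constraint").
P = (np.kron(I2, I2) + np.kron(Z, Z)) / 2

# On the constrained subspace, Z_red (x) 1 and 1 (x) Z_blue act
# identically, so the two colors carry a single effective degree of freedom.
assert np.allclose(P @ np.kron(Z, I2) @ P, P @ np.kron(I2, Z) @ P)

# X_red (x) X_blue commutes with the constraint, so it descends to the
# effective X on the edge.
assert np.allclose(P @ np.kron(X, X), np.kron(X, X) @ P)
```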
|
2304.04984 | Defrosting frozen stars: spectrum of internal fluid modes | The frozen star model provides a classical description of a regularized black
hole and is based upon the idea that regularizing the singularity requires
deviations from the Schwarzschild geometry which extend over horizon-sized
scales, as well as maximally negative radial pressure as an equation of state.
The frozen star has also been shown to be ultra-stable against perturbations; a
feature that can be attributed to the equation of state and corresponds to this
model mimicking a black hole in the limit $\hbar\to 0$ or, equivalently, the
limit of infinite Newton's constant. Here, we ``defrost'' the frozen star by
allowing its radial pressure to be perturbatively less negative than maximal.
This modification to the equation of state is implemented by appropriately
deforming the background metric so as to allow the frozen star to mimic a
quantum black hole at finite $\hbar$ and Newton's constant. As a consequence,
the defrosted star acquires a non-trivial spectrum of oscillatory
perturbations. To show this, we first use the Cowling approximation to obtain
generic equations for the energy density and pressure perturbations of a
static, spherically symmetric background with an anisotropic fluid. The
particular setting of a deformed frozen star is then considered, for which the
dispersion relation is obtained to leading order in terms of the deviation from
maximal pressure. The current results compare favorably with those obtained
earlier for the collapsed polymer model, whose strongly non-classical interior
is argued to provide a microscopic description of the frozen and defrosted star
geometries. | Ram Brustein, A. J. M. Medved, Tom Shindelman | 2023-04-11T05:00:38Z | http://arxiv.org/abs/2304.04984v1 | # Defrosting frozen stars:
###### Abstract
The frozen star model provides a classical description of a regularized black hole and is based upon the idea that regularizing the singularity requires deviations from the Schwarzschild geometry which extend over horizon-sized scales, as well as maximally negative radial pressure as an equation of state. The frozen star has also been shown to be ultra-stable against perturbations; a feature that can be attributed to the equation of state and corresponds to this model mimicking a black hole in the limit \(\hbar\to 0\) or, equivalently, the limit of infinite Newton's constant. Here, we "defrost" the frozen star by allowing its radial pressure to be perturbatively less negative than maximal. This modification to the equation of state is implemented by appropriately deforming the background metric so as to allow the frozen star to mimic a quantum black hole at finite \(\hbar\) and Newton's constant. As a consequence, the defrosted star acquires a non-trivial spectrum of oscillatory perturbations. To show this, we first use the Cowling approximation to obtain generic equations for the energy density and pressure perturbations of a static, spherically symmetric background with an anisotropic fluid. The particular setting of a deformed frozen star is then considered, for which the dispersion relation is obtained to leading order in terms of the deviation from maximal pressure. The current results compare favorably with those obtained earlier for the collapsed polymer model, whose strongly non-classical interior is argued to provide a microscopic description of the frozen and defrosted star geometries.
(1) Department of Physics, Ben-Gurion University, Beer-Sheva 84105, Israel
(2) Department of Physics & Electronics, Rhodes University, Grahamstown 6140, South Africa
(3) National Institute for Theoretical Physics (NITheP), Western Cape 7602, South Africa
[email protected], [email protected], [email protected]
## 1 Introduction
The name "frozen star", which was first used by Ruffini [1] to describe objects that are now commonly known as black holes (BHs), has recently been resurrected [2, 3] to describe a particular model for a regularized BH geometry [4, 5]. It is a highly compact object, whose compactness is parametrically close to that of a BH, and yet it manages to evade singularity theorems [6, 7] and "Buchdahl-like" matter bounds [8, 9, 10, 11, 12] by invoking a maximally negative (radial) pressure, \(\ p=-\rho\), 1 while not possessing a trapped surface. It also avoids certain inconsistencies by deviating from the standard general-relativistic solution over horizon-sized scales [13, 14]. As far as we know, the frozen star is the first completely regular and stable solution of Einstein's equations whose compactness is arbitrarily close to that of a Schwarzschild BH. Furthermore, the solution obeys the only universally agreed-upon energy condition, the null energy condition. It is free of any pathology, unlike most, if not all, of the other proposals for regular compact objects.
Footnote 1: Here, we use \(p\) to denote radial pressure, \(q\) for the transverse pressure components and \(\rho\) is the energy density.
Having such a solution at hand is important as it provides a model for a compact object -- differing from a general-relativistic BH -- that can be used to estimate the validity of general relativity by looking at the resulting dynamics and gravitational-waves emissions during a binary-BH merger event. Additionally, it provides a final state in which to address the process of gravitational collapse (a paper that includes such a discussion is already in progress).
An unusual feature of the frozen star that distinguishes it from the gravastar [15] and others of the negative-pressure class (_e.g._, [16; 17]) is that two of its metric components are identically vanishing, \(g_{tt}=g^{rr}=0\). However, we do not mean to imply that astrophysical BHs are made of an anisotropic classical fluid with maximally negative radial pressure. The anisotropy and maximally negative pressure turned out to be needed so as to connect this solution to the collapsed polymer BH [18], which is another model for a regularized compact object that is based, in large part, on the notion of having a maximally entropic interior [19]. The "fluid" that supports this maximal entropy was found to consist of long, closed, fundamental, highly excited, interacting strings. From a geometric point of view, maximal entropy translates into a mass profile that saturates the Schwarzschild limit at all interior radii, \(2m(r)=r\), and then \(g_{tt}=g^{rr}=0\) follows from Einstein's equations. The catch is that the polymer model lacks a semiclassical geometry, as maximally entropic is synonymous with strongly non-classical. Hence the desire for a formal classical description, and so the frozen star model was born. The frozen star is, therefore, merely a classical proxy of a quantum model, the polymer BH, which has strictly non-negative pressure. The main purpose of the frozen star model is to facilitate some calculations that would present a challenge to someone using the quantum model.
The frozen star and polymer models are then meant to provide complementary pictures of the same model for a regular, ultra-compact object. The former allows for precise calculations, whereas the latter provides a microscopic description with connections to string theory. With that in mind, \(g^{rr}\) and \(|g_{tt}|\) should be small enough to maintain the frozen star's character, but not that much changes if these components are alternatively assigned a small but finite constant, which we denote as \(\varepsilon\ll 1\). What is maintained is the frozen star's ultra-stability against small perturbations of the metric and fluid densities [4; 2]. This stability is due to the equation of state \(p+\rho=0\). To see this, one can inspect the stability calculation in [2], where the vanishing of the metric components (but not the equation-of-state relation) was relaxed for a portion of the interior. Additionally, the limit \(p+\rho\to 0\) can be taken in the current analysis to verify this conclusion. This ultra-stability was also found for the polymer model in the absence of quantum effects, which corresponds in that case to the absence of closed-string coupling or \(g_{s}^{2}\) corrections [20; 21]. The reason is that, in the limit \(g_{s}^{2}\to 0\) or, equivalently, \(\hbar\to 0\), the polymer model behaves just like a hairless classical BH. It was also shown in [21] that, once stringy quantum effects are taken into account, the polymer model does exhibit a spectrum of excitations and some quantum hair.
The main objective of this paper is to determine the dispersion relation for the oscillatory modes of an appropriately deformed version of the frozen star; the "defrosted star". To have such modes at all, it is clear that the background must be suitably modified to support fluctuations. Technically speaking, this means that the original equation of state needs to be changed such that the radial pressure is no longer exactly equal to the negative of the energy density and the transverse pressure components -- which are identically zero when the star is truly frozen -- no longer vanish. The change that we consider is parametrically small and is supposed to model some microscopic corrections to the equation of state of the constituents. As just discussed, in the polymer model, these corrections can be attributed to quantum perturbations that are proportional to the square of the string coupling strength [21].
The spectrum of oscillatory modes for the defrosted star could be relevant to BH merger events. If astrophysical BHs are indeed described by such objects, the internal modes of the star could be excited during a merger event and then act as sources for gravitational-wave emissions. After all, it is the success of the LIGO and Virgo collaboration at detecting and then analyzing the waves from such mergers -- starting with the now world-famous event GW150914 [22] and the data analysis thereof [23] -- that has led to an intensive theoretical effort to understand how departures from general relativity would imprint on the gravitational-wave spectrum of a BH. In this regard, each candidate for a regular BH mimicker can be expected to have its own unique and experimentally verifiable or falsifiable signature (see [24] for further discussion and a catalogue of relevant models).
We also have in mind a previous study that determined the mode spectrum for the collapsed polymer model [21] (also see [25]). We would like to verify that the two models produce compatible results, even though the polymer analysis was, at times, heuristic in nature because of the absence of an interior geometry. It is already notable that, in both cases, non-trivial mode solutions require some out-of-equilibrium physics [26].
The spectral analysis for the collapsed polymer model utilized the Cowling approximation [27], which assumes, for a calculation of the fluid-mode spectrum, that the spacetime modes have fully decoupled. For the sake of consistency and simplicity, we will use the same approximation here. It is, in any event, a justifiable assumption insofar as the Cowling approximation only gets more accurate as the star gets more compact [28].
The rest of the paper is arranged as follows: We start by deriving the required perturbative formalism for an anisotropic fluid, with our only assumptions being that the background geometry is static and spherically symmetric. 2 This part of the analysis culminates in a coupled pair of equations that describe the dynamics of two types of modes: one is associated with fluctuations of the radial velocity and the other with those of the transverse velocity. Some brief background material on the frozen star geometry is then provided; after which the deformations are introduced and explained in some detail. We then return to the dynamical equations for the modes but now with the focus on the solutions for the defrosted or deformed frozen star. The scaling behavior of the modes and their dispersion relation are identified and discussed. The paper ends with a brief overview followed by an appendix that explains in a thorough way how to obtain one of the key equations in the generic portion of the perturbative analysis.
Footnote 2: We also insist on \(3+1\) spacetime dimensions for concreteness.
## 2 Perturbation equations: Generic formalism
In the following, the perturbation equations for the energy density and pressure of a star containing anisotropic fluid are reproduced in the Cowling approximation, meaning that the fluctuations of the metric are neglected. We will be closely following and reviewing the analysis of [29], while providing additional details as needed to ensure compatibility of the generic formalism with that of the defrosted star model.
### Variation of the stress tensor
The background under consideration is, for the meantime, described by a static, spherically symmetric but otherwise generic geometry,
\[ds^{2}\ =\ -e^{2\Phi(r)}dt^{2}+e^{2\Lambda(r)}dr^{2}+r^{2}d\Omega^{2}\;. \tag{1}\]
The corresponding stress (energy-momentum) tensor can be expressed as
\[T_{\mu\nu}\ =\ \rho u_{\mu}u_{\nu}+pk_{\mu}k_{\nu}+q\left(g_{\mu\nu}+u_{\mu}u_{ \nu}-k_{\mu}k_{\nu}\right)\;, \tag{2}\]
where the transverse pressure \(q\) is generally different from the radial pressure \(p\), so that \(\sigma=p-q\neq 0\). Also, \(u^{\mu}\) is the fluid 4-velocity with standard normalization, \(u^{\mu}u_{\mu}=-1\), and \(k^{\mu}\) is a radial unit vector that is defined by \(k^{\mu}k_{\mu}=+1\) such that \(u^{\mu}k_{\mu}=0\;\).
The oscillatory modes of the fluid will be obtained from the perturbation equations that arise from the variation of the conservation equation \(\nabla_{\nu}T_{\mu}^{\;\;\nu}=0\;\). In the Cowling approximation, the variation of interest is
\[\nabla_{\nu}\delta T_{\mu}^{\;\;\nu}\ =\ 0\;, \tag{3}\]
and the variation of Eq. (2) under this approximation yields
\[\delta T_{\mu}^{\nu}\ =\ \left(\delta\rho+\delta q\right)u_{\mu}u^{\nu}+\left(\rho+q\right)\left(\delta u_{\mu}u^{\nu}+u_{\mu}\delta u^{\nu}\right)+\delta q\delta_{\mu}^{\nu}+\delta\sigma k_{\mu}k^{\nu}+\sigma\delta k_{\mu}k^{\nu}+\sigma k_{\mu}\delta k^{\nu}\;. \tag{4}\]
The velocity \(u^{\mu}\) satisfies some useful identities,
\[\nabla_{\nu}u^{\nu}\ =\ u^{\mu}\nabla_{\nu}u_{\mu}\ =\ u^{\mu}\delta u_{\mu}\ =\ 0\;, \tag{5}\]
and similarly for the radial unit vector \(k^{\mu}\). By choosing, without loss of generality, to work in the comoving frame, we also have \(\ u^{\mu}=u^{t}\delta^{\mu}_{t}\,\ \ k^{\mu}=k^{r}\delta^{\mu}_{r}\ \mbox{and}\ \ u^{t}u_{t}=-k^{r}k_{r}=-1\;.\)
Equation (3) leads to an independent pair of perturbation equations when it is projected, respectively, parallel and perpendicular to \(u^{\mu}\); the latter by way of the projection operator \(\ \mathcal{P}^{\mu}_{\alpha}=\delta^{\mu}_{\alpha}+u^{\mu}u_{\alpha}\.\) Taking the covariant derivative of Eq. (4) and projecting onto \(u^{\mu}\), one obtains a scalar equation of the form
\[0\ =\ u^{\mu}\nabla_{\nu}\delta T^{\nu}_{\mu}\ = -\nabla_{\nu}\delta\rho u^{\nu}-\nabla_{\nu}\left[\left(\left( \rho+q\right)\delta^{\nu}_{\mu}+\sigma k^{\nu}k_{\mu}\right)\delta u^{\mu}\right] \tag{6}\] \[-\left(\rho+q\right)a_{\mu}\delta u^{\mu}-\nabla_{\nu}u^{\mu} \delta\left(\sigma k^{\nu}k_{\mu}\right)\;,\]
where \(\ a_{\mu}=u^{\nu}\nabla_{\nu}u_{\mu}\\) is the 4-acceleration. Whereas, in the direction perpendicular to \(u^{\mu}\), one obtains the following vector equation (see Appendix A for details):
\[0\ =\ \mathcal{P}^{\mu}_{\alpha}\nabla_{\nu}\delta T^{\nu}_{\mu} = \delta\left(\rho+q\right)a_{\alpha}+\left(\rho+q\right)u^{\nu} \left(\nabla_{\nu}\delta u_{\alpha}-\nabla_{\alpha}\delta u_{\nu}\right) \tag{7}\] \[+\nabla_{\nu}\delta q\delta^{\nu}_{\alpha}+u^{\nu}u_{\alpha} \nabla_{\nu}\delta q+\mathcal{P}^{\mu}_{\alpha}\nabla_{\nu}\delta\left(\sigma k _{\mu}k^{\nu}\right)\;.\]
### Density and pressure perturbations
It proves to be useful if the perturbations of the spacelike components of the velocity vector are expressed in terms of a displacement vector \(\xi^{i}\) such that
\[\frac{\partial\xi^{i}}{\partial t}\ =\ \frac{\delta u^{i}}{u^{t}}=v_{i}\;, \tag{8}\]
where \(\ i=\{r,\theta,\phi\}\\) and \(v_{i}\) is a component of the fluid's 3-velocity.
As the velocity is a function of \(t\) and \(r\) only, \(a_{\theta}=a_{\phi}=0\). Then the angular components of Eq. (7) reduce to
\[\left(\rho+q\right)\left(u^{t}\right)^{2}\partial_{t}^{2}\xi_{\theta}+ \partial_{\theta}\delta q\ =\ 0\;, \tag{9}\]
\[\left(\rho+q\right)\left(u^{t}\right)^{2}\partial_{t}^{2}\xi_{\phi}+\partial_ {\phi}\delta q\ =\ 0\;. \tag{10}\]
Differentiating the former equation by \(\phi\) and the latter by \(\theta\), one is led to a consistency condition,
\[\partial_{\phi}\xi_{\theta}\ =\ \partial_{\theta}\xi_{\phi}\;, \tag{11}\]
which implies that \(\xi_{\theta}\) and \(\xi_{\phi}\) can be expressed as angular derivatives of a multipole expansion [30],
\[\xi_{\theta}\ =\ -\sum_{\ell,m}V_{\ell m}\left(r,t\right)\partial_{ \theta}Y_{\ell m}\left(\theta,\phi\right)\;, \tag{12}\] \[\xi_{\phi}\ =\ -\sum_{\ell,m}V_{\ell m}\left(r,t\right)\partial_{ \phi}Y_{\ell m}\left(\theta,\phi\right)\;. \tag{13}\]
The definition of \(\xi^{i}\) enables one to obtain the variation in the energy density by integrating Eq. (6) over time, which eventually leads to
\[\delta\rho = -\frac{1}{\sqrt{-g}}\partial_{\nu}\left[\sqrt{-g}\left(\left(\rho+q \right)\delta_{i}^{\nu}+\sigma k^{\nu}k_{i}\right)\xi^{i}\right] \tag{14}\] \[-\left[\left(\left(\rho+q\right)\delta_{i}^{\nu}+\sigma k^{\nu}k_ {i}\right)\xi^{i}\right]\partial_{\nu}\left(\ln u^{t}\right)-\left(\rho+q \right)a_{i}\xi^{i}\;.\]
A convenient form for the radial component of the displacement vector is the following:
\[\xi^{r}\;=\;\sum_{\ell,m}e^{-\Lambda}\frac{W_{\ell m}\left(r,t\right)}{r^{2}}Y _{\ell m}\;. \tag{15}\]
To avoid clutter, we will henceforth be suppressing the subscripts \(\ell\) and \(m\) on \(W\) and \(V\), as well as the accompanying factors of \(Y_{\ell m}\) and the summation over \(\ell,m\) in any equation involving \(W\) and/or \(V\). These indices, factors and summations can be easily restored at the end of the calculation.
We next employ \(\;u^{t}=e^{-\Phi\left(r\right)}\;,\;\;a_{\mu}=a_{r}\delta^{r}_{\;\;\mu}= \partial_{r}\Phi\;,\) the previous forms for \(\xi^{i}\) and the equation for energy conservation,
\[\frac{dp}{dr}\;=\;-\partial_{r}\Phi\left(\rho+p\right)-\frac{2\sigma}{r}\;, \tag{16}\]
to recast Eq. (14) as
\[\delta\rho = -\left(\rho+p\right)\left[e^{-\Lambda}\frac{W^{\prime}}{r^{2}}+ \frac{\ell\left(\ell+1\right)}{r^{2}}V\right]-\frac{d\rho}{dr}e^{-\Lambda} \frac{W}{r^{2}}+\frac{2\sigma}{r^{3}}e^{-\Lambda}W \tag{17}\] \[+\sigma\frac{\ell\left(\ell+1\right)}{r^{2}}V\;,\]
where a prime denotes a differentiation by \(r\).
The perturbation of the radial pressure can be obtained by first recalling the relation between the Eulerian (\(\delta\rho\)) and Lagrangian (\(\Delta\rho\)) variations of the energy density,
\[\delta\rho\;=\;\Delta\rho-\xi^{r}\partial_{r}\rho\;. \tag{18}\]
Given an equation of state of the form \(\;p=p\left(\rho\right)\), the Lagrangian perturbations for the radial pressure and density can be directly related,
\[\Delta p\;=\;\frac{dp}{d\rho}\Delta\rho\;=\;\frac{dp}{d\rho}\left(\delta\rho+ \xi^{r}\partial_{r}\rho\right)\;, \tag{19}\]
and the Eulerian variation of the pressure goes similarly as
\[\delta p = \Delta p-\xi^{r}\partial_{r}p \tag{20}\] \[= \frac{dp}{d\rho}\left(\delta\rho+\xi^{r}\partial_{r}\rho\right)- \xi^{r}\partial_{r}p\] \[= \frac{dp}{d\rho}\;\delta\rho\;,\]
where the last line follows from \(\;\left(dp/d\rho\right)\partial_{r}\rho=\partial_{r}p\;\).
More explicitly,
\[\delta p\;=\frac{dp}{d\rho}\left(-\left(\rho+p\right)\left[e^{- \Lambda}\frac{W^{\prime}}{r^{2}}+\frac{\ell\left(\ell+1\right)}{r^{2}}V\right] +\frac{2\sigma}{r^{3}}e^{-\Lambda}W+\sigma\frac{\ell\left(\ell+1\right)}{r^{ 2}}V\right)\] \[\;\;\;\;\;\;-\frac{dp}{dr}e^{-\Lambda}\frac{W}{r^{2}}\;. \tag{21}\]
Although the Eulerian variation \(\delta p\) is the quantity that determines the form of the oscillatory modes, the Lagrangian variation \(\Delta p\) is the quantity that is directly relevant to the boundary conditions. One can see from Eq. (15) and the first line of Eq. (20) that the difference between the two variations is the same as the sole term in the second line of Eq. (21). Hence, the Lagrangian variation \(\Delta p\) can be obtained directly from Eq. (21) by simply dropping the second line.
### Dynamical equations for oscillatory modes
We are interested in oscillating modes of the form \(\ W\left(r,t\right)=W\left(r\right)e^{i\omega t}\\) and likewise for \(V\). The dynamical equations for \(W\) and \(V\) follow from Eq. (7) for \(\ \mu=r\\) and \(\ \mu=\theta\\), respectively, and take on the corresponding forms,
\[0\ =\ -\omega^{2}\left(\rho+p\right)e^{\Lambda-2\Phi}\frac{W}{r^{2}}+\partial_{r }\delta p+\left(\delta\rho+\delta p\right)\partial_{r}\Phi+\frac{2}{r}\delta\sigma \tag{22}\]
and
\[0\ =\ \left(\rho+q\right)e^{-2\Phi}\omega^{2}V+\delta q\, \tag{23}\]
where, just like for the \(W\) and \(V\) modes, the \(\ell\), \(m\) indices on \(\delta\rho\), the associated summations and accompanying spherical harmonics are all implied in the above and subsequent equations. So \(\delta\rho\) here is meant to represent \(\delta\rho_{\ell m}Y_{\ell m}\) such that the total Eulerian density perturbation is given by \(\ \sum\limits_{\ell,m}\delta\rho_{\ell m}Y_{\ell m}\\).
The next step in [29] and similar discussions relies on a formal equation-of-state relation between \(\rho\) and \(p\) and also between \(q\) (or \(\sigma\)) and \(p\), as these enable one to calculate \(\frac{\partial\rho}{\partial p}\) and then \(\ \delta\rho=\frac{\partial\rho}{\partial p}\delta p\\) (and similarly for \(\delta q\)). For the frozen star model, one has \(\ \rho+p=0\\) but no such relation exists between \(q\) and \(p\) because the transverse pressure is identically vanishing. Nevertheless, the situation changes for the defrosted star, as the \(\rho\)-\(p\) relation picks up a perturbative correction and we are now able to relate \(q\) and \(p\) by varying (in an Euler-Lagrange sense) the conservation equation (16), which leads to \(\ \frac{\partial q}{\partial p}\\) and thus \(\ \delta q=\frac{\partial q}{\partial p}\delta p\\).
Expressing \(\delta\rho\) and \(\delta q\) in terms of \(\delta p\) and background quantities allows us to rewrite both of the dynamical equations (22,23) in terms of \(\delta p\), which itself is determined by Eq. (21). The final result is then a pair of coupled equations in terms of a pair of unknown functions, \(W(r)\) and \(V(r)\), for any given values of \(\ell\) and \(m\). Thanks to the appearances of \(\partial_{r}\delta p\) in Eq. (22) and \(W^{\prime}\) in Eq. (21), one can, after diagonalizing, expect to obtain a second-order differential equation for \(W(r)\) and a first-order equation for \(V(r)\).
The dynamical equations are to be supplemented by a pair of boundary conditions involving the Lagrangian variation \(\Delta p\) of the radial pressure. In particular, the Lagrangian variation should vanish both at the star's outer surface \(\ r=R\) and at the star's center \(\ r=0\). Importantly, this variation should also remain finite as it approaches the center of the star; that is, for \(r\ll R\).
## 3 Frozen star: A refresher
To formulate the frozen star model, one starts with a metric and stress tensor just like those in Eqs. (1) and (2). The first key ingredient is that the interior is endowed with a maximally negative radial pressure, \(\ p=-\rho\), throughout. Consistency between the Einstein field equations and the energy-conservation equation (16) is what dictates the form of the transverse pressure \(q\).
Next, let us define a mass function \(m(r)\) in the standard way,
\[m(r)\ =\ 4\pi\int\limits_{0}^{r}dx\,x^{2}\rho(x)\ \ \ \mbox{for}\ \ \ r\leq R\;, \tag{24}\]
for which Einstein's equations indicate that
\[e^{-2\Lambda}\;=\;1-\frac{2m(r)}{r}\;. \tag{25}\]
It is the choice of \(m(r)\) that specifies the exact nature of the model within this negative-pressure class. The most popular option is to choose \(m\) such that \(\rho\) is constant, with the result that \(q=p=-\rho\) ; what is known as the gravastar model [15]. In its originally prescribed form, the frozen star was specified by the profile \(m(r)=r/2\), as this choice ensured that each spherical shell within the star saturated the Schwarzschild bound. This would be the second key ingredient and, just like the first, it was chosen to make the frozen star a proxy to the (strongly non-classical) collapsed polymer model. However, as this same mass profile also leads to a pair of apparently singular metric components, \(e^{-2\Lambda}=e^{2\Phi}=0\), we have more recently opted to follow a different path: Namely, we now set \(m(r)=\frac{r}{2}(1-\varepsilon)\) for a small dimensionless and constant \(\varepsilon\), and so \(e^{-2\Lambda}=e^{2\Phi}=\varepsilon\ll 1\). It is implied that \(\varepsilon\) is the smallest dimensionless scale in the model.
In this relaxed version of the frozen star, the energy density and pressure components go as
\[8\pi G\rho = \frac{1-(rf)^{\prime}}{r^{2}}\;=\;\frac{1-\varepsilon}{r^{2}}\;, \tag{26}\] \[8\pi Gp = -\frac{1-(rf)^{\prime}}{r^{2}}\;=\;-\frac{1-\varepsilon}{r^{2}}\;,\] (27) \[8\pi Gq = \frac{(rf)^{\prime\prime}}{2r}\;=\;0\;. \tag{28}\]
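Equations (26)-(28) can be verified symbolically. The following sketch (our own check, using the convention \(8\pi G=1\) and \(f\equiv e^{-2\Lambda}=\varepsilon\)) confirms them:

```python
import sympy as sp

r, eps = sp.symbols('r varepsilon', positive=True)
f = eps  # g^{rr} = e^{-2 Lambda} = varepsilon, a small constant

rho = (1 - sp.diff(r*f, r)) / r**2   # Eq. (26), with 8 pi G = 1
p = -(1 - sp.diff(r*f, r)) / r**2    # Eq. (27)
q = sp.diff(r*f, r, 2) / (2*r)       # Eq. (28)

assert sp.simplify(rho - (1 - eps)/r**2) == 0
assert sp.simplify(rho + p) == 0     # maximally negative radial pressure
assert sp.simplify(q) == 0           # vanishing transverse pressure
```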
Not much has changed from the \(\varepsilon=0\) limit. In fact, a stability study in [2] reveals that the ultra-stability of the model against \(r,t\)-dependent perturbations persists irrespective of the inclusion of non-zero metric components. The critical feature for ensuring stability is rather the condition \(p=-\rho\). Whether \(\varepsilon\) is zero or small but finite is beside the point.
Some final notes about the frozen star: It is necessary to modify the metric over a thin transitional layer near (and including) the outer boundary so that the star's metric and its first two derivatives match smoothly to the Schwarzschild metric in the exterior [2]. Additionally, the metric near the center of the star needs to be regularized so as to ensure that the energy density remains finite at \(r=0\)[3]. This regularization process is mandatory for the frozen star because, unlike a Schwarzschild BH, any would-be singularity is not protected by cosmic censorship.
The width of the transitional surface layer is taken to be very small, of the order of the string or Planck length scale, so that its inclusion in our analysis would lead to highly suppressed corrections. We can then safely ignore the outer layer except that its presence is relevant to the outer-surface boundary condition. In our case, the value of \(\Delta p\) should be set equal to some non-zero value at \(r=R\), thus allowing it to decay to zero only after passing through the transitional layer, where it would match on to the external geometry. The mathematical details of this process are inconsequential to the current analysis but would follow along the lines of [2].
The regularized core has a similarly small width and could also be ignored except for the following observation (again pertaining to boundary conditions): A particularly relevant feature of the core regularization process in [3] is that \(\rho\), \(-p\) and \(-q\) all tend to the same constant value at \(r=0\), yet the rate at which they approach that value is somewhat arbitrary. We then have the freedom to send the combinations \(\sigma=p-q\) and \(\rho+p\) of the deformed metric (see the next section) to zero at an arbitrarily fast rate. This is pertinent to the boundary condition at the center because, as one can see from the first line of Eq. (21), every term in the Lagrangian variation \(\Delta p\) is preceded by just such a combination. Meaning that the \(r=0\) regularity condition can always be easily satisfied for the defrosted star model. 3
Footnote 3: In [3], the regularization method was applied to the undeformed version of the frozen star model, but it is a straightforward exercise to include deformations and obtain similar results.
### Briefly on stability
Before discussing the deformation of the frozen star, let us consider how the dynamical equations (22) and (23) can be used to learn about the star's stability in the undeformed case. Starting with Eq. (22), one immediately sees that the \(W\) term vanishes because of the factor of \(\rho+p=0\) and the \(a_{r}\) term because the equation of state dictates that \(\delta\rho+\delta p=0\) or, more simply, because \(a_{r}\) itself is vanishing for constant \(\varepsilon\).
As we are assuming an object of fixed mass, it follows that \(\delta\rho=0\) and then likewise for the equal and opposite variation \(\delta p\). To see that \(\delta q\) also vanishes, one can use \(\rho+p=0\) to rewrite the conservation equation (16) as
\[0\;=\;r^{2}\partial_{r}p+2r\sigma\;=\;\partial_{r}(r^{2}p)-2rq\;, \tag{29}\]
which confirms that both \(q\) and \(\delta q\) vanish because \(r^{2}p\) must be constant by virtue of Eq. (27) and \(\delta p=0\).
Since \(\delta q=0\), Eq. (23) reduces to
\[0\;=\;(\rho+q)\,e^{-2\Phi}\omega^{2}V\;, \tag{30}\]
meaning that either \(\omega\) or \(V\) is vanishing. No matter which, the angular components of the velocity perturbations are vanishing, as these are given by the time derivative of \(V\).
This leaves the radial component of the velocity perturbations, which are determined by the time derivative of \(W\). However, the dynamical equations tell us nothing about \(W\) for the undeformed star because it carries a vanishing factor of \(\rho+p\). The way around this is to actually deform the star (as we do next) and then consider the limit as the deformation parameter goes to zero. What one finds is that \(\omega\) does indeed go to zero in this limit, meaning that the time derivative of \(W\) and thus the radial component of the velocity perturbations must also tend to zero. Note, though, that it can never be established that \(W\) is itself vanishing. Fortunately, there is no reason that it has to. Only the time derivative of \(W\) has physical meaning.
## 4 Defrosting a frozen star
The frozen star geometry is unusually stable against perturbations, even when \(\varepsilon=|g_{tt}|=g^{rr}\) differs from zero. For the star to sustain dynamical modes, we have found that at least two modifications are required. The first requirement is that the equation of state deviates from \(p=-\rho\), which necessitates that \(|g_{tt}|\neq g^{rr}\). The second is that at least one of \(g_{tt}\) and \(g^{rr}\) have some radial dependence. To implement these requirements, we introduce a perturbative, dimensionless and positive parameter \(\gamma\ll 1\) which controls the difference between the frozen and defrosted stars.
### Deformed background geometry
Let us first re-express the relevant metric components as
\[-g_{tt} = \varepsilon+\gamma\left(\frac{r}{R}\right)^{a}\, \tag{31}\] \[g^{rr} = \varepsilon+\gamma\left(\frac{r}{R}\right)^{b}\, \tag{32}\]
where the relation between the star's outer surface \(R\) and mass \(M\) will be determined shortly. It will be shown later that the constants \(a\) and \(b\) are fixed, for consistency, such that \(\ a=2\) and \(\ b=0\).
We assume that \(\ \varepsilon\ll\gamma\\) and so neglect \(\varepsilon\) in what follows, leaving us with
\[-g_{tt} = \gamma\left(\frac{r}{R}\right)^{a}\, \tag{33}\] \[g^{rr} = \gamma\left(\frac{r}{R}\right)^{b}. \tag{34}\]
Furthermore, only the leading-order terms in \(\gamma\) will ever be considered. We assume that regularization procedures similar to those used in [2] and [3] have been implemented but, as already discussed, these are expected to have no tangible effect on our analysis except to trivialize the enforcement of the central boundary condition and modify the outer boundary condition.
Let us now consider two components of Einstein's equations,
\[r^{2}\rho\ =\ \left[r\left(1-g^{rr}\right)\right]^{\prime}\, \tag{35}\]
\[r^{2}p\;=\;g^{rr}-1+rg^{rr}\left[\ln|g_{tt}|\right]^{\prime}\;, \tag{36}\]
where the convention \(\;8\pi G=1\;\) has now been adopted. Then, for the deformed frozen star,
\[r^{2}\rho\;=\;1-(1+b)\;\gamma\left(\frac{r}{R}\right)^{b}\;, \tag{37}\]
\[r^{2}p\;=\;-1+(1+a)\;\gamma\left(\frac{r}{R}\right)^{b}\;, \tag{38}\]
which can be combined into
\[p\;=\;\rho\left(-1+(a-b)\;\gamma\left(\frac{r}{R}\right)^{b}\right) \tag{39}\]
and
\[r^{2}(\rho+p)\;=\;(a-b)\;\gamma\left(\frac{r}{R}\right)^{b}\;, \tag{40}\]
thus making the deviation from \(\;\rho+p=0\;\) quite clear. Satisfying the null energy condition requires that \(\;a\geq b\;\), which we will assume.
Via the energy conservation equation (16), it can be shown that \(q\) is no longer vanishing,
\[r^{2}q\;=\;\frac{1}{4}\left(a^{2}+ab+2b\right)\;\gamma\left(\frac{r}{R} \right)^{b}\;, \tag{41}\]
and consequently that
\[q\;=\;\frac{1}{4}\left(\frac{a^{2}+ab+2b}{a-b}\right)(\rho+p)\;. \tag{42}\]
We can use the above equation to determine \(\frac{\partial q}{\partial p}\), which will be needed later on for the calculation of the perturbation \(\delta q\),
\[\frac{\partial q}{\partial p} = \frac{1}{4}\left(\frac{a^{2}+ab+2b}{a-b}\right)\left(\frac{\partial \rho}{\partial p}+1\right) \tag{43}\] \[= -\frac{1}{4}\left(a^{2}+ab+2b\right)\ \gamma\left(\frac{r}{R} \right)^{b}\;,\]
where the variation \(\frac{\partial\rho}{\partial p}\) can be obtained by inverting Eq. (39).
The following relation will also be required:
\[\partial_{r}\Phi\;=\;\frac{a}{2r}\;. \tag{44}\]
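The expressions (41)-(43) for the transverse pressure follow from solving the conservation equation (16) with the deformed background. The sketch below (our own symbolic check, assuming the conventions of Eqs. (37), (38) and (44)) confirms this:

```python
import sympy as sp

r, R, gam, a, b = sp.symbols('r R gamma a b', positive=True)

rho = (1 - (1 + b)*gam*(r/R)**b) / r**2   # Eq. (37)
p = (-1 + (1 + a)*gam*(r/R)**b) / r**2    # Eq. (38)
Phi_prime = a / (2*r)                     # Eq. (44)

# Conservation equation (16), p' = -Phi'(rho + p) - 2(p - q)/r,
# solved for the transverse pressure q:
q = p + (r/2)*(sp.diff(p, r) + Phi_prime*(rho + p))

# Eq. (41):
assert sp.simplify(r**2*q - (a**2 + a*b + 2*b)*gam*(r/R)**b/4) == 0
# Eq. (42): q is proportional to rho + p
assert sp.simplify(q - (a**2 + a*b + 2*b)/(4*(a - b))*(rho + p)) == 0
```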
The radius of a defrosted star of mass \(M\) is larger by a relative factor of order \(\gamma\) than its Schwarzschild size. The way to see this is to match \(g^{rr}=\gamma\left(\frac{r}{R}\right)^{b}\) to its Schwarzschild form \(g^{rr}=1-\frac{2GM}{r}\) at \(r=R\), which yields \(R=\frac{2GM}{1-\gamma}\simeq 2GM(1+\gamma)\). One can use this to verify that the total mass of the star is indeed the same mass \(M\) as that of its undeformed counterpart. With \(E\) denoting the deformed star's mass,
\[E = \int\limits_{0}^{R}dr\;4\pi r^{2}\;\rho(r)\;=\;\frac{1}{2G}\int \limits_{0}^{R}dr\left(1-(b+1)\gamma\left(\frac{r}{R}\right)^{b}\right) \tag{45}\] \[= \frac{1-\gamma}{2G}R\;=\;M\;,\]
where Eq. (37) (with a restored factor of \(8\pi G\) on its left side) has been used in the top line.
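The mass integral of Eq. (45) can be checked in the same way; a minimal sketch:

```python
import sympy as sp

r, R, gam, b, G, M = sp.symbols('r R gamma b G M', positive=True)

# Eq. (45): E = (1/2G) * Integral_0^R [1 - (b + 1) gamma (r/R)^b] dr
E = sp.integrate(1 - (b + 1)*gam*(r/R)**b, (r, 0, R)) / (2*G)
assert sp.simplify(E - (1 - gam)*R/(2*G)) == 0

# With R = 2GM/(1 - gamma), the deformed star indeed carries the mass M:
assert sp.simplify(E.subs(R, 2*G*M/(1 - gam)) - M) == 0
```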
### Perturbations
We start this discussion by recalling the pressure perturbation in Eq. (21),
\[r^{2}\delta p\;=\;(\rho+p)e^{-\Lambda}W^{\prime}-\left[\partial_{r}(r^{2}p)-2rq \right]e^{-\Lambda}\frac{W}{r^{2}}-\ p\ \ell\left(\ell+1\right)V\;. \tag{46}\]
To obtain this form from the previous one, we have used that the factors in front of \(e^{-\Lambda}W^{\prime}\) and \(e^{-\Lambda}W\) are linear in \(\gamma\) (the latter by using the conservation equation) and that, as shown later, 4 the leading-order contribution to \(V\) is also linear in \(\gamma\). These observations allowed us, in particular, to set \(\frac{\partial p}{\partial\rho}=-1\), recalling that our calculations are performed to leading order in \(\gamma\).
Footnote 4: See the scaling relations that immediately follow Eq. (61).
Substituting the background quantities for the deformed geometry from the previous section, we find that, to leading order in \(\gamma\),
\[r^{2}\delta p\;=\;(a-b)\gamma\left(\frac{r}{R}\right)^{b}\ \ \left[e^{-\Lambda}\frac{W^{\prime}}{r^{2}}+\frac{a}{2}e^{- \Lambda}\frac{W}{r^{3}}\right]+\ \ell\left(\ell+1\right)\frac{V}{r^{2}}\;. \tag{47}\]
The two other perturbations \(\delta\rho\) and \(\delta q\) can be related to \(\delta p\) by using Eqs. (39), (40) and (43). From Eq. (39),
\[\delta\rho\;=\;\frac{\partial\rho}{\partial p}\delta p\;\simeq\;-\delta p\;, \tag{48}\]
and then from Eq. (40),
\[\delta\rho+\delta p\;\simeq\;-(a-b)\gamma\left(\frac{r}{R}\right)^{b}\delta p\;, \tag{49}\]
and finally, from Eq. (43),
\[\delta q\;=\;\frac{\partial q}{\partial p}\delta p\;\simeq\;-\frac{1}{4}\left(a^{2 }+ab+2b\right)\gamma\left(\frac{r}{R}\right)^{b}\delta p\;. \tag{50}\]
As expected, the perturbation in the transverse pressure is subleading to the radial perturbation, \(\;\delta q\sim\gamma\;\delta p\;\), same as for the deformed background components, \(\;q\sim\gamma\;p\;\).
## 5 Frozen star: Dynamical equations and dispersion relations
The initial step here is to rewrite the dynamical equations (22,23) for \(W\) and \(V\) in terms of the deformed frozen star geometry. Recall the suppression of the angular indices, accompanying spherical harmonics and summations.
First, multiplying Eq. (22) by \(r^{2}\) and then rewriting, we find that
\[- \frac{\omega^{2}}{\gamma^{2}\left(\frac{r}{R}\right)^{a+b}}\left( \left(a-b\right)\,\gamma\left(\frac{r}{R}\right)^{b}\right)e^{-\Lambda}\frac{ W}{r^{2}} \tag{51}\] \[+ \partial_{r}(r^{2}\delta p)+\frac{1}{2}ar\left(\delta\rho+\delta p \right)-2r\delta q\;=\;0\;.\]
Meanwhile, for Eq. (23), the resulting expression is
\[\frac{\omega^{2}}{\gamma^{2}\left(\frac{r}{R}\right)^{a}}\;\gamma\;\frac{V}{ r^{2}}+\delta q\;=\;0\;. \tag{52}\]
We may now substitute the expressions for \(\delta q\) from Eq. (50) and for \(\delta\rho+\delta p\) from Eq. (49) into the two previous equations, giving
\[- \frac{\omega^{2}}{\gamma^{2}\left(\frac{r}{R}\right)^{a+b}}\left( \left(a-b\right)\,\gamma\left(\frac{r}{R}\right)^{b}\right)e^{-\Lambda}\frac{W}{r^{2}}\] \[+ \partial_{r}(r^{2}\delta p)-\frac{1}{2}a(a-b)\gamma\left(\frac{r}{R}\right)^{b}r\delta p+\frac{1}{2}\left(a^{2}+ab+2b\right)\gamma\left(\frac{r}{R}\right)^{b}r\delta p\;=\;0 \tag{53}\]
and
\[\frac{\omega^{2}}{\gamma^{2}\left(\frac{r}{R}\right)^{a}}\;\gamma\;\frac{V}{r ^{2}}-\frac{1}{4}\left(a^{2}+ab+2b\right)\gamma\left(\frac{r}{R}\right)^{b} \delta p\;=\;0\;. \tag{54}\]
An inspection of Eq. (47) reveals the scaling \(\;r^{2}\delta p\sim V/r^{2}\;\), whereas Eq. (54) rather implies \(\;r^{a+b}\delta p\sim V/r^{2}\;\). The conclusion is that
\[a+b\;=\;2\;. \tag{55}\]
Similarly, by comparing the radial dependence of the term \(\partial_{r}(r^{2}\delta p)\) in Eq. (53) with any of the other terms involving \(\delta p\), one observes that \(\;r\sim r^{b+1}\;\), so that
\[a = 2\;, \tag{56}\] \[b = 0\;. \tag{57}\]
These values for \(a\) and \(b\) simplify the dynamical equations and the expression for \(\delta p\) by a considerable amount. Respectively,
\[- 2\frac{\omega^{2}R^{2}}{\gamma^{2}}\gamma^{3/2}\frac{W}{r^{4}}+ \partial_{r}(r^{2}\delta p)\;=\;0\;, \tag{58}\]
\[\frac{\omega^{2}R^{2}}{\gamma^{2}}\frac{V}{r^{2}}-r^{2}\delta p\;=\;0\;, \tag{59}\]
\[r^{2}\delta p\;=\;2\gamma^{3/2}\left[\frac{W^{\prime}}{r^{2}}+\frac{W}{r^{3}} \right]+\;\ell\left(\ell+1\right)\frac{V}{r^{2}}\;. \tag{60}\]
Defining a dimensionless frequency \(\widetilde{\omega}\),
\[\omega^{2}\;=\;\gamma^{2}\frac{1}{R^{2}}\widetilde{\omega}^{2} \tag{61}\]
and rescaling the perturbations as \(\widetilde{W}=\sqrt{\gamma}W\;\), \(\gamma\widetilde{V}=V\;\) and \(\;\gamma\widetilde{\delta}p=r^{2}\delta p\;\), we then have
\[\partial_{r}\widetilde{\delta}p\;=\;2\widetilde{\omega}^{2}\frac{\widetilde{ W}}{r^{4}}\;, \tag{62}\]
\[\widetilde{\delta}p\;=\;\widetilde{\omega}^{2}\frac{\widetilde{V}}{r^{2}}\;, \tag{63}\]
\[\widetilde{\delta}p\;=\;2\frac{\partial_{r}\left(r\widetilde{W}\right)}{r^{3 }}+\;\ell\left(\ell+1\right)\frac{\widetilde{V}}{r^{2}}\;. \tag{64}\]
We conclude that the oscillation frequencies are non-relativistic, governed by a scale of \(\gamma/R\) which is parametrically smaller than the relativistic frequency scale; that is, \(\;\omega\sim\frac{\gamma}{R}\ll\frac{1}{R}\;\).
We can use Eqs. (62-64) to obtain a single "wave equation" for \(\widetilde{\delta}p\) (or, equivalently, for \(\widetilde{W}\) or \(\widetilde{V}\)),
\[-\widetilde{\omega}^{2}\widetilde{\delta}p+\frac{1}{r^{3}}\partial_{r}\left( r^{5}\partial_{r}\widetilde{\delta}p\right)+\ell(\ell+1)\widetilde{\delta}p\;=\;0\;. \tag{65}\]
To find the spectrum resulting from Eq. (65), we need to impose a new boundary condition to replace the one at the center of the star. Recall that the center-of-the-star condition is trivially satisfied in our case because of the regularization process in the core. The new condition is set by the scaling of the radial dependence of the perturbation modes \(V_{\ell m}\) and \(W_{\ell m}\). This will be applied for \(r\ll R\), but far enough from the center so that \(r\) falls within the bulk of the frozen star, outside of the regularized core.
Let us first consider the expansion of \(V_{\ell m}\) in Eqs.(12) and (13). As in the multipole expansion of any such quantity,
\[V_{\ell m}\ \sim\ V_{\ell}\;r^{\ell}\;, \tag{66}\]
where the relevant expansion range is \(\ \ell\geq 2\;\).
Now combining Eqs. (62) and (63), we obtain
\[\partial_{r}\left(\frac{\widetilde{V}_{\ell m}}{r^{2}}\right)\ =\ \frac{2}{r^{4}} \widetilde{W}_{\ell m}\;, \tag{67}\]
from which it follows that
\[\widetilde{W}_{\ell m}\ \sim\ \widetilde{W}_{\ell}\;r^{\ell+1} \tag{68}\]
and then
\[\widetilde{W}_{\ell}\ =\ \frac{\ell-2}{2}\widetilde{V}_{\ell}\;. \tag{69}\]
Additionally, it follows that \(\ \widetilde{\delta p}\sim r^{\ell-2}\;\).
Using these scaling relations, we observe that
\[\frac{1}{r^{3}}\partial_{r}\left(r^{5}\partial_{r}\widetilde{\delta p}\right) \ =\ (\ell+2)(\ell-2)\widetilde{\delta p}\;, \tag{70}\]
and then conclude from this equation and Eq. (65) that
\[\widetilde{\omega}^{2}\;=\;2\ell^{2}+\ell-4\;. \tag{71}\]
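Both the operator identity (70) and the dispersion relation (71) are easily verified symbolically; a minimal sketch:

```python
import sympy as sp

r, l, w2 = sp.symbols('r ell omega2', positive=True)
dp = r**(l - 2)  # boundary-condition scaling of the pressure perturbation

# Radial operator of Eq. (65) acting on the power-law profile:
radial = sp.simplify(sp.diff(r**5 * sp.diff(dp, r), r) / r**3)
assert sp.simplify(radial - (l + 2)*(l - 2)*dp) == 0   # Eq. (70)

# Eq. (65) then fixes the (dimensionless) eigenfrequency:
sol = sp.solve(sp.Eq(-w2*dp + radial + l*(l + 1)*dp, 0), w2)[0]
assert sp.simplify(sol - (2*l**2 + l - 4)) == 0        # Eq. (71)
```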
Assuming a wavelength of order \(R\), we can also deduce a speed of propagation (squared) directly from the spectrum, \(\;v^{2}\sim\gamma^{2}(2\ell^{2}+\ell-4)\;\). However, from an _internal_ (I) or proper-time perspective, one factor of the redshift \(\gamma\) should be dispensed with, leaving \(\;v_{I}^{2}\sim\gamma(2\ell^{2}+\ell-4)\;\). The dispersion relation (71) is in agreement with that of [21], which considered the quasinormal-mode problem for the closely related polymer model. There, the square of the real part of the frequencies scaled with the coupling strength of closed strings \(g_{s}^{2}\); the smallest non-classical, dimensionless parameter in the framework and the smallest dimensionless one besides \(\;\epsilon=l_{s}/R\;\). Now recall that \(\gamma\) played precisely the same role for the deformed frozen star, as \(\varepsilon\) -- the redshift of the undeformed star -- is the only smaller dimensionless parameter. Another way of seeing that \(\gamma\) and \(g_{s}^{2}\) play the same role in their respective models is that they both provide a direct measure of \(\frac{\Delta R}{R}\). From the polymer point of view, this relative scaling is already clear in [21], but see also [31, 32] for further discussion.
The solutions for the perturbations can be obtained by using Eq. (70) to solve for \(\widetilde{\delta p}\),

\[\widetilde{\delta p}_{\ell}\;=\;C_{1}\ r^{\ell-2}+C_{2}\ r^{-(\ell+2)}\;. \tag{72}\]

The boundary condition at \(\;r\ll R\;\) requires that \(\;C_{2}=0\;\), so that

\[\widetilde{\delta p}_{\ell}\;=\;C_{1}\ r^{\ell-2}\;. \tag{73}\]

Then, from Eq. (63), it follows that

\[\widetilde{V}_{\ell}\;=\;C_{1}\ \frac{1}{\widetilde{\omega}^{2}}\ r^{\ell}\;, \tag{74}\]

whereas Eq. (62) leads to

\[\widetilde{W}_{\ell}\;=\;C_{1}\ \frac{\ell-2}{2}\ \frac{1}{\widetilde{\omega}^{2}}\ r^{\ell+1}\;. \tag{75}\]
Note that \(\widetilde{W}_{\ell}\) vanishes at leading order for \(\ \ell=2\).
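As a cross-check, the closed-form solutions (73)-(75), together with the spectrum (71), satisfy all three of the rescaled equations (62)-(64); a short symbolic verification:

```python
import sympy as sp

r, l, C1 = sp.symbols('r ell C1', positive=True)
w2 = 2*l**2 + l - 4                      # Eq. (71)

dp = C1 * r**(l - 2)                     # Eq. (73)
V = C1 * r**l / w2                       # Eq. (74)
W = C1 * (l - 2) * r**(l + 1) / (2*w2)   # Eq. (75)

assert sp.simplify(sp.diff(dp, r) - 2*w2*W/r**4) == 0                    # Eq. (62)
assert sp.simplify(dp - w2*V/r**2) == 0                                  # Eq. (63)
assert sp.simplify(dp - 2*sp.diff(r*W, r)/r**3 - l*(l + 1)*V/r**2) == 0  # Eq. (64)
```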
An interesting feature of the spectrum in [21] was that the imaginary part of the mode frequencies (_i.e._, the inverse of the damping time) scaled as \(v^{2}\), and so \(\mathrm{Im}(\omega)\ll\mathrm{Re}(\omega)\). This behavior is not obvious at this point in the current analysis. But this is because we have yet to provide a mechanism for any of the modes to couple to and then escape into the external spacetime, which is what is required for the modes to dissipate energy and thus experience damping. However, it is easy to see why such a scaling is an inevitable feature of any ultra-compact object that is using non-relativistic fluid modes to source gravitational waves. As explained in [26], from an external observer's perspective, these modes have exceptionally long wavelengths \(\lambda\sim R/v\) when compared to the size \(R\) of the object, suppressing the transmission cross-section through a surface of area \(R^{2}\) by a factor of \(R^{2}/\lambda^{2}\sim v^{2}\). This cross-section determines, in turn, the power loss, \(\frac{dE}{dt}\sim R^{2}/\lambda^{2}\sim v^{2}\), which then sets the damping time scale as \(1/\tau\sim v^{2}\), and so \(\mathrm{Im}(\omega)\sim v^{2}\).
## 6 Overview
We have calculated the oscillatory mode spectrum for a deformed version of the frozen star model: the defrosted star. The deformations were a mathematical necessity; otherwise, the ultra-stability of the frozen star geometry would have doomed the perturbative modes from the get go. The deformations do, however, make sense when one considers that gravitational waves are generally emitted as the result of some out-of-equilibrium event like the merger of a binary system. One expects the disturbed object to eventually settle back into its original equilibrium state. It would be interesting to understand how this process works in the case of a frozen star.
We have found a spectrum for the star that is in agreement with that of the polymer model. In both cases, the square of the real part of the frequency scales with a small dimensionless parameter; the deformation parameter \(\gamma\) in the case of the defrosted version of the frozen star and the closed-string coupling strength \(g_{s}^{2}\) for the collapsed polymer. These are seemingly very different but have two important similarities: (1) They both represent the second-smallest dimensionless parameter in their respective frameworks and the smallest one that is also non-classical (effectively so, in the case of the star) and (2) they both scale as \(\frac{\Delta R}{R}\). We expect that this is really the same parameter but different observers will assign it a different meaning depending on their vantage point. For instance, anyone who believes that BHs are purely classical, geometric objects would have no room for string coupling in her "story". This reasoning falls very much in line with what Hawking called the "Principle of Ignorance" [33].
Missing in our analysis is a direct calculation of the damping time. This could be done by changing the boundary conditions at the surface of the star from a vanishing wave to a standing wave, which is then matched to an outgoing wave in the exterior. This calculation is technically quite involved and requires the incorporation of the metric perturbations, meaning that the Cowling approximation can no longer be applied. We do hope to perform such a calculation in the future. As explained in the main text, it can be expected on physical grounds that the damping time scales as \(1/\gamma^{2}\), a result that was indeed substantiated in [21]. This is well worth confirming for the defrosted star, as the implication is a particularly long lifetime for the emitted modes and, with it, an excellent chance of detection. Moreover, if \(\gamma\) is indeed related to the coupling strength of closed strings, the lifetime of the fluid modes provides an unexpected window into fundamental string theory.
## Acknowledgments
We thank Daniela Doneva and Stoytcho Yazadjiev for clarifying comments on their paper. The research is supported by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland" and by VATAT (Israel planning and budgeting committee) grant for supporting theoretical high energy physics. The research of AJMM received support from an NRF Evaluation and Rating Grant 119411 and a Rhodes Discretionary Grant SD07/2022. AJMM thanks Ben Gurion University for their hospitality during his visit.
## Appendix A Perpendicular projection of the perturbation
Here, it is shown in some detail how to obtain Eq. (7), the perpendicular projection of the covariant derivative of the perturbation of the stress tensor (3).
Using the properties of the 4-velocity (see Eq. (5)) and that the Cowling approximation is in effect, we have
\[\mathcal{P}_{\alpha}^{\mu}\nabla_{\nu}\delta T_{\mu}^{\nu} =\nabla_{\nu}\left(\delta\rho+\delta q\right)\underbrace{\left( \delta_{\alpha}^{\mu}+u^{\mu}u_{\alpha}\right)u_{\mu}}_{=0}u^{\nu}\] \[\quad+\left(\delta\rho+\delta q\right)\left(\left(\delta_{\alpha }^{\mu}+u^{\mu}u_{\alpha}\right)u^{\nu}\nabla_{\nu}u_{\mu}+\underbrace{\left( \delta_{\alpha}^{\mu}+u^{\mu}u_{\alpha}\right)u_{\mu}}_{=0}\overbrace{\nabla _{\nu}u^{\nu}}^{=0}\right)\] \[\quad+\left(\delta_{\alpha}^{\mu}+u^{\mu}u_{\alpha}\right)\left[ \nabla_{\nu}\left(\rho+q\right)\left(\delta u_{\mu}u^{\nu}+u_{\mu}\delta u^{ \nu}\right)+\left(\rho+q\right)\nabla_{\nu}\left(\delta u_{\mu}u^{\nu}+u_{\mu }\delta u^{\nu}\right)\right]\] \[\quad+\left(\delta_{\alpha}^{\mu}+u^{\mu}u_{\alpha}\right)\nabla _{\nu}\delta q\delta_{\mu}^{\nu}+\mathcal{P}_{\alpha}^{\mu}\nabla_{\nu}\delta \left(\sigma k_{\mu}k^{\nu}\right)\] \[= \left(\delta\rho+\delta q\right)\left(a_{\alpha}+u_{\alpha}u^{ \nu}\underbrace{u^{\mu}\nabla_{\nu}u_{\mu}}_{=0}\right)\] \[\quad+\nabla_{\nu}\left(\rho+q\right)\left(\left(\delta_{\alpha }^{\mu}+u^{\mu}u_{\alpha}\right)\delta u_{\mu}u^{\nu}+\delta u^{\nu}\underbrace {\left(\delta_{\alpha}^{\mu}+u^{\mu}u_{\alpha}\right)u_{\mu}}_{=0}\right)\] \[\quad+\left(\rho+q\right)\left(\left(\delta_{\alpha}^{\mu}+u^{ \mu}u_{\alpha}\right)\nabla_{\nu}\delta u_{\mu}u^{\nu}+\left(\delta_{\alpha}^{ \mu}+u^{\mu}u_{\alpha}\right)\nabla_{\nu}u_{\mu}\delta u^{\nu}\right)\] \[\quad+\left(\rho+q\right)\left(\delta_{\alpha}^{\mu}+u^{\mu}u_{ \alpha}\right)\left(\delta u_{\mu}\overbrace{\nabla_{\nu}u^{\nu}}^{=0}+u_{ \mu}\overbrace{\nabla_{\nu}\delta u^{\nu}}^{=0}\right)\] \[\quad+\nabla_{\nu}\delta q\delta_{\alpha}^{\nu}+u^{\nu}u_{\alpha} \nabla_{\nu}\delta q+\mathcal{P}_{\alpha}^{\mu}\nabla_{\nu}\delta\left(\sigma k _{\mu}k^{\nu}\right)\]
\[=\delta\left(\rho+q\right)a_{\alpha}+\nabla_{\nu}\left(\rho+q\right) \left(\delta u_{\alpha}u^{\nu}+u_{\alpha}\underbrace{u^{\mu}\delta u_{\mu}}_{=0}u^ {\nu}\right)\] \[\quad+\left(\rho+q\right)\left(u^{\nu}\nabla_{\nu}\delta u_{\alpha }+u_{\alpha}u^{\mu}u^{\nu}\nabla_{\nu}\delta u_{\mu}+u_{\alpha}u^{\mu}\nabla_{ \nu}u_{\mu}\delta u^{\nu}+\nabla_{\nu}u_{\alpha}\delta u^{\nu}\right)\] \[\quad+\nabla_{\nu}\delta q\delta_{\alpha}^{\nu}+u^{\nu}u_{\alpha} \nabla_{\nu}\delta q+\mathcal{P}_{\alpha}^{\mu}\nabla_{\nu}\delta\left(\sigma k _{\mu}k^{\nu}\right)\;. \tag{76}\]
Next, we implement our choice to work in the comoving frame, so that \(u^{t}u_{t}=-1\), and recall the time independence of any background quantity. Then,
\[0 = \mathcal{P}_{\alpha}^{\mu}\nabla_{\nu}\delta T_{\mu}^{\nu}=\delta \left(\rho+q\right)a_{\alpha}+\underbrace{\partial_{t}\left(\rho+q\right)}_{= 0}\delta u_{\alpha}u^{t} \tag{77}\] \[\quad+\left(\rho+q\right)\left(u^{\nu}\nabla_{\nu}\delta u_{ \alpha}-u^{\mu}\overbrace{\delta_{\alpha}^{t}\delta_{t}^{\nu}}^{=\delta_{ \alpha}^{\nu}}\nabla_{\nu}\delta u_{\mu}\underbrace{\overbrace{-\delta_{t}^{ \mu}\delta_{\alpha}^{t}\nabla_{\nu}u_{\mu}\delta u^{\nu}+\nabla_{\nu}u_{\alpha} \delta u^{\nu}}_{=0}}\right)\] \[\quad+\nabla_{\nu}\delta q\delta_{\alpha}^{\nu}+u^{\nu}u_{\alpha} \nabla_{\nu}\delta q+\mathcal{P}_{\alpha}^{\mu}\nabla_{\nu}\delta\left(\sigma k _{\mu}k^{\nu}\right)\] \[\quad=\;\delta\left(\rho+q\right)a_{\alpha}+\left(\rho+q\right)u^ {\nu}\left(\nabla_{\nu}\delta u_{\alpha}-\nabla_{\alpha}\delta u_{\nu}\right)\] \[\quad+\nabla_{\nu}\delta q\delta_{\alpha}^{\nu}+u^{\nu}u_{\alpha} \nabla_{\nu}\delta q+\mathcal{P}_{\alpha}^{\mu}\nabla_{\nu}\delta\left(\sigma k _{\mu}k^{\nu}\right)\;.\] |
2303.16670 | The Greisen Function and its Ability to Describe Air-Shower Profiles | Ultrahigh-energy cosmic rays are almost exclusively detected through
extensive air showers, which they initiate upon interaction with the
atmosphere. The longitudinal development of these air showers can be directly
observed using fluorescence detector telescopes, such as those employed at the
Pierre Auger Observatory or the Telescope Array. In this article, we discuss
the properties of the Greisen function, which was initially derived as an
approximate solution to the electromagnetic cascade equations, and its ability
to describe the longitudinal shower profiles. We demonstrate that the Greisen
function can be used to describe longitudinal air-shower profiles, even for
hadronic air showers. Furthermore we discuss the possibility to discriminate
between hadrons and photons from the shape of air-shower profiles using the
Greisen function. | Maximilian Stadelmaier, Jakub Vícha, Vladimír Novotný | 2023-03-29T13:23:27Z | http://arxiv.org/abs/2303.16670v3 | # The Greisen Function and its Ability to Describe Air-Shower Profiles
###### Abstract
Ultrahigh-energy cosmic rays are almost exclusively detected through extensive air showers, which they initiate upon interaction with the atmosphere. The longitudinal development of these air showers can be directly observed using fluorescence detector telescopes, such as those employed at the Pierre Auger Observatory or the Telescope Array. In this article, we discuss the properties of the Greisen function, which was initially derived as an approximate solution to the electromagnetic cascade equations, and its ability to describe the longitudinal shower profiles. We demonstrate that the Greisen function can be used to describe longitudinal air-shower profiles, even for hadronic air showers. Furthermore we discuss the possibility to discriminate between hadrons and photons from the shape of air-shower profiles using the Greisen function.
## I Introduction
Extensive air showers are created by cosmic rays upon interaction with the atmosphere [1; 2]. They can be detected at the ground using surface detector arrays, or directly observed at night using fluorescence detector telescopes. To reconstruct the shower development and shower observables, a profile function needs to be fitted to the detector data. Gaisser and Hillas proposed an empirical function [3] to describe the longitudinal development of proton air showers as an alternative to the Constant Intensity Cut method [4; 5], which is used in surface detector experiments to take into account the atmospheric attenuation of particles in air showers from different zenith angles. It was shown in [6] that the Gaisser-Hillas (GH) function can be used to approximate a system of particles being created and absorbed in an extended Heitler-Matthews model [7], and can be adjusted to very closely match the Greisen function, which is an approximation for the solutions to the electromagnetic cascade equations [8]. Both the Pierre Auger Observatory and the Telescope Array use the GH function to describe their fluorescence detector data [9; 10].
In this article, we will discuss the Greisen function and its properties, such as its connection to the shower age. We will demonstrate the usability of the Greisen function as an alternative to the GH function to fit longitudinal shower profiles and present its performance in reconstructing the depth of the shower maximum as well as the primary energy using Monte Carlo (MC) simulations of air showers.
## II The Greisen Function
The average longitudinal development of electromagnetic air showers can be very well described analytically [11]. This description holds in good approximation also for hadronic showers, initiated by ionized nuclei, which make up the upper end of the cosmic-ray energy spectrum [12]. The solutions to the cascade equations derived by Rossi and Greisen under Approximation A\({}^{1}\) to describe extensive air showers were used to motivate important properties in the context of air-shower physics, such as the shower age
Footnote 1: Approximation A is the high-energy approximation to the cascade equations, in which only bremsstrahlung and pair production are considered as relevant processes.
\[s=\frac{3t}{t+2\ln(E_{0}/E_{\rm cut})} \tag{1}\]
that describes the development of a shower, initiated by a primary particle of the energy \(E_{0}\), after \(t\) radiation lengths, considering only the particles above an energy of \(E_{\rm cut}\). Note that \(s=1\) at \(t=\ln(E_{0}/E_{\rm cut})\); this value is usually assigned to the shower maximum. For electromagnetic showers a reasonable choice for \(E_{\rm cut}\) is \(\approx 87\,\)MeV, which is the energy above which electromagnetic particles on average lose more energy in radiative shower processes than to scattering and ionization. Furthermore, it was demonstrated that the relative rate of change\({}^{2}\) \(\lambda_{1}\) in the number \(N\) of particles in a shower as a function of the surpassed radiation lengths \(t\),
Footnote 2: We use the notation \(\lambda_{1}\), even though we do not mean to suggest that this quantity is to be understood as a (wave) length, to adhere to historic convention.
\[\lambda_{1}=\frac{1}{N(t)}\frac{\partial N(t)}{\partial t}, \tag{2}\]
is similar for all air showers at high primary energies. Ref. [8] introduced the approximation
\[\lambda_{1}\simeq\frac{1}{2}\left(s-1-3\ln s\right), \tag{3}\]
which is in good agreement with the exact solution. Originally, the parameter \(s\) describes the spectra \(n\) of electromagnetic particles in a shower, which are approximately given by \(n_{\gamma}\sim n_{\mathrm{e}^{\pm}}\sim E^{-(s+1)}\) for particles at energies \(E\ll E_{0}\), but from Eq. (3) there exists a relation between \(\lambda_{1}\) and \(s\).
It is straightforward to combine Eqs. (1) to (3) and to solve the resulting expression for \(N(t)\) by integration. This yields
\[N(t)=N_{0}\exp\big{[}t\,\big{(}1-\tfrac{3}{2}\ln s\big{)}\big{]}, \tag{4}\]
with a constant \(N_{0}\). The maximum number \(N_{\mathrm{max}}\) of particles above the energy of \(E_{\mathrm{cut}}=98\,\)MeV in a cascade initiated by a particle of energy \(E_{0}\) was derived in [13] under Approximation B\({}^{3}\) and found to be
Footnote 3: Approximation B of the cascade equations augments Approximation A by a term concerning Coulomb scattering.
\[N_{\mathrm{max}}=\frac{0.31}{\sqrt{\ln(E_{0}/E_{\mathrm{cut}})}}\frac{E_{0}}{ E_{\mathrm{cut}}}. \tag{5}\]
Eq. (4) has its maximum at \(t_{\mathrm{max}}=\ln(E_{0}/E_{\mathrm{cut}})\), which evaluates to \(N_{0}\,E_{0}/E_{\mathrm{cut}}\). Thus, solving for \(N_{0}\) using Eq. (5) yields the Greisen function, which reads
\[N(t)=\frac{0.31}{\sqrt{\beta}}\exp\big{[}t\,\big{(}1-\tfrac{3}{2}\ln s\big{)} \big{]}, \tag{6}\]
using the short notation \(\beta=\ln(E_{0}/E_{\mathrm{cut}})=t_{\mathrm{max}}\). The Greisen function, introduced in [14], is thus an approximate solution to the electromagnetic cascade equations, given in [11], combining aspects of both Approximation A and Approximation B. There was no strict derivation given by Kenneth Greisen himself, but _a-posteriori_ derivations (such as the one presented here) were provided in [15] and [16].
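To make these relations concrete, the shower age (Eq. (1)), the approximation for \(\lambda_{1}\) (Eq. (3)), and the classical Greisen function (Eq. (6)) can be written down in a few lines of code. The following Python sketch is our own illustration, not part of the original derivation:

```python
import numpy as np

def shower_age(t, beta):
    """Shower age s after t radiation lengths, Eq. (1), with beta = ln(E0/E_cut)."""
    return 3.0 * t / (t + 2.0 * beta)

def lambda_1(s):
    """Approximate relative rate of change of the particle number, Eq. (3)."""
    return 0.5 * (s - 1.0 - 3.0 * np.log(s))

def greisen(t, beta):
    """Classical Greisen function, Eq. (6); the maximum lies at t = beta, where s = 1."""
    s = shower_age(t, beta)
    return 0.31 / np.sqrt(beta) * np.exp(t * (1.0 - 1.5 * np.log(s)))

# Example: for E0/E_cut = e^10, the shower maximum lies at t_max = beta = 10,
# where N(t_max) = 0.31/sqrt(10) * e^10, in accordance with Eq. (5).
t = np.linspace(0.1, 25.0, 250)
profile = greisen(t, beta=10.0)
```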
In the following, we have to overcome two major shortcomings of the Greisen function as written in Eq. (6). Firstly, the Greisen function in its classical form cannot accurately describe the point of the first interaction of a cosmic ray with the atmosphere, since by construction the cascade is always initiated at \(t=0\). Secondly, the scale of the Greisen function is only accurate for average electromagnetic showers. We will introduce a parameter \(\epsilon\) to account for this issue and demonstrate that the Greisen function generalized this way is able to describe the longitudinal profile of hadronic showers and the corresponding shower-to-shower fluctuations.
## III The modified Greisen function
The classical Greisen function, which is given in Eq. (6), assumes that a shower starts at \(t=0\). Furthermore, the Greisen function is technically only able to describe electromagnetic showers. In this section, we introduce minor modifications to the function to describe the longitudinal development of both hadronic and electromagnetic air showers.
Firstly, we introduce a non-zero point of the first interaction at a slanted atmospheric depth \(X_{1}\), which will be described by \(t_{1}=X_{1}/X_{0}\) using the electromagnetic radiation length\({}^{4}\) \(X_{0}\simeq 37\,\mathrm{g\,cm^{-2}}\). Thus, the shower age \(s\) will be given as
Footnote 4: Equivalently, we use \(t=X/X_{0}\) and \(t_{\mathrm{max}}=X_{\mathrm{max}}/X_{0}\).
\[s=\frac{3t^{\prime}}{t^{\prime}+2\beta}\,\Theta(t^{\prime}), \tag{7}\]
with \(t^{\prime}=t-t_{1}\) and the Heaviside function \(\Theta\). To maintain the property of the shower age, which is supposed to be 1 at the maximum of the shower, and to keep the number of radiation lengths required to reach the maximum of the shower the same as before, we redefine \(\beta\) in accordance with the previous modification as
\[\beta=\ln(E_{0}/E_{\mathrm{cut}})=t_{\mathrm{max}}-t_{1}. \tag{8}\]
Finally, we introduce the factor \(\epsilon\), which is defined in units of energy deposit per step length, and which can be interpreted as the effective energy loss per particle and step length at the shower maximum\({}^{5}\). Thus, the modified Greisen profile reads as
Footnote 5: Here we ignore the factor of 0.31 from Eq. (5).
\[N(t)=\frac{\epsilon}{\sqrt{\beta}}\exp\big{[}(t-t_{1})\,\big{(}1-\tfrac{3}{2} \ln s\big{)}\big{]}\,, \tag{9}\]
with \(N(t)=0\) for \(t\leq t_{1}\). For the sake of simplicity, here and in the following we abbreviate the energy deposit \(\mathrm{d}E/\mathrm{d}X\) with the symbol \(N\), analogously to the number of particles.
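A minimal numerical sketch of the modified profile, Eqs. (7) to (9), might look as follows; the function returns 0 before the first interaction, and all variable names are our own:

```python
import numpy as np

def modified_greisen(t, eps, t1, beta):
    """Modified Greisen profile, Eq. (9), with N(t) = 0 for t <= t1.

    eps  -- effective energy loss per particle and step length at the maximum
    t1   -- depth of the first interaction in radiation lengths (t1 = X1/X0)
    beta -- radiation lengths between the first interaction and the maximum
    """
    t = np.asarray(t, dtype=float)
    tp = t - t1                                     # t' = t - t1
    out = np.zeros_like(t)
    mask = tp > 0
    s = 3.0 * tp[mask] / (tp[mask] + 2.0 * beta)    # shower age, Eq. (7)
    out[mask] = eps / np.sqrt(beta) * np.exp(tp[mask] * (1.0 - 1.5 * np.log(s)))
    return out
```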
## IV Calibration of the Greisen profile
If Eq. (9) is used to describe the longitudinal profiles of (hadronic) showers in terms of deposited energy rather than a number of particles, it is necessary to examine viable (effective) numerical values of the energy \(E_{\mathrm{cut}}\) as well as of \(\epsilon\).
We investigate the behaviour of \(\epsilon\) and \(E_{\mathrm{cut}}\) using the MC values of \(X_{\mathrm{max}}\), \(X_{1}\), \(E_{0}\), and the maximum energy deposit \(N(t_{\mathrm{max}})\) of the longitudinal profiles of simulated air showers. The simulations were produced using the Sibyll 2.3d[17], Epos-LHC[18], and the Qgsjet II-04[19] models of hadronic interactions with different primary particles at different primary energies. All simulations were produced in the Conex event generator [20] at version v7.60. We produced 1000 simulated showers
at primary energies of \(10^{18.5}\,\mathrm{eV}\), \(10^{19}\,\mathrm{eV}\), and \(10^{19.5}\,\mathrm{eV}\) with gamma-ray, proton, and iron nuclei primary particles, each.
Shower-to-shower fluctuations of (hadronic) air showers will severely affect the maximum number of particles produced in a shower as well as the absolute depth of the shower maximum. These fluctuations, however, can be accurately reproduced by the behaviour of the Greisen function. Rewriting Eq. (8) in terms of the slanted atmospheric depth of the shower maximum \(X_{\mathrm{max}}\) and the average radiation length \(X_{0}\),
\[\beta=\frac{X_{\mathrm{max}}-X_{1}}{X_{0}}, \tag{10}\]
we find that\({}^{6}\)
Footnote 6: Note that the explicit dependence on \(E_{0}\) is cancelled by the implicit dependence of the average \(X_{\mathrm{max}}\) on the primary energy.
\[E_{\mathrm{cut}}=E_{0}\,\mathrm{e}^{-\beta}=E_{0}\exp{\left[-\frac{X_{ \mathrm{max}}-X_{1}}{X_{0}}\right]}. \tag{11}\]
We choose an effective value for the radiation length of \(X_{0}=40\,\mathrm{g}\,\mathrm{cm}^{-2}\) as a compromise for the different considered primary particles.
In a similar manner, using the numerical maximum of a given shower profile according to Eq. (9), the parameter \(\epsilon\) can be identified as
\[\epsilon=N(t_{\mathrm{max}})\,\sqrt{\beta}\,\mathrm{e}^{-\beta}. \tag{12}\]
As can be seen in Fig. 1, we observe a strong correlation for the MC values of \(\epsilon\) and \(E_{\mathrm{cut}}\) (and thus \(\beta\)), assuming Greisen-like longitudinal profiles. Furthermore, we show in Fig. 8 and Fig. 9 that the distributions of \(\epsilon\) and \(E_{\mathrm{cut}}\) as well as their relation are approximately the same for all three hadronic interaction models. This behaviour indicates that shower-to-shower fluctuations of hadronic showers are not at all random, but follow certain regularities. For example, a deeper-than-average value of \(X_{\mathrm{max}}\) corresponds to a smaller-than-average value of \(N(t_{\mathrm{max}})\) (and vice versa), if all other parameters are fixed. Furthermore, we expect larger values of \(\beta\) for photon-induced showers than for protons or iron nuclei. Between different primaries there is a gradual transition in the shape of the shower from very hadronic (iron-like) to proton-like and lastly electromagnetic showers, where the behaviour of \(\epsilon\) and \(E_{\mathrm{cut}}\) can be described by a power law,
\[\frac{\epsilon}{\mathrm{PeV/g\,cm^{-2}}}\simeq\left(\frac{E_{\mathrm{cut}}}{1 0^{16.8}\,\mathrm{eV}}\right)^{0.97}, \tag{13}\]
with residuals on average within \(0.2\%\). Note that if expressed in terms of \(X_{\mathrm{max}}\) and \(X_{1}\) (cf. Eq. (11)), \(E_{\mathrm{cut}}\) is not an explicit parameter of the Greisen function. Eq. (13) thus expresses the universal relation between the maximum energy deposit and the extent of the shower in terms of radiation lengths after the first interaction.
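In practice, \(\beta\), \(E_{\mathrm{cut}}\), and \(\epsilon\) can be extracted directly from the MC values of each shower via Eqs. (10) to (12). The following sketch (our own, assuming the effective radiation length \(X_{0}=40\,\mathrm{g\,cm^{-2}}\) chosen above) illustrates this calculation:

```python
import numpy as np

X0 = 40.0  # effective radiation length in g/cm^2, as chosen in the text

def beta_from_depths(x_max, x1):
    """Eq. (10): extent of the shower in radiation lengths after X1."""
    return (x_max - x1) / X0

def e_cut_from_mc(e0, x_max, x1):
    """Eq. (11): effective cut-off energy, in the same units as e0."""
    return e0 * np.exp(-beta_from_depths(x_max, x1))

def epsilon_from_mc(n_max, x_max, x1):
    """Eq. (12): shape parameter from the maximum energy deposit n_max."""
    beta = beta_from_depths(x_max, x1)
    return n_max * np.sqrt(beta) * np.exp(-beta)

# The power law of Eq. (13) then relates the two quantities:
# epsilon / (PeV / g cm^-2)  ~  (E_cut / 10^16.8 eV)^0.97
```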
Even though \(\epsilon\) appears as a pre-factor in the modified Greisen function, the numerical values of \(\epsilon\) from a best fit are independent of the integrated profile (cf. Eqs. (11) and (13) and Fig. 9) and thus of the energy of the primary particle. Because \(E_{\mathrm{cut}}\) does not depend on the primary energy, \(\epsilon\) solely depends on the shape of the shower. The integrated profile (and thus the calorimetric energy deposit) is mainly governed by \(\beta\) (cf. Eq. (10)). For smaller values of \(\epsilon\) (e.g. photon-like showers), the shower takes longer to reach its maximum in terms of radiation lengths. In the case of showers with a significant amount of hadronization, where multiple cascades are effectively in superposition (i.e. iron-like showers), we expect larger numerical values for \(\epsilon\), corresponding to an earlier shower maximum. From superposition and the Heitler-Matthews model\({}^{7}\) [7] one would expect \(\epsilon_{\mathrm{Fe}}/\epsilon_{\mathrm{p}}\simeq 50\) (cf. Eqs. (9) and (12)); however, the average ratio obtained from simulations is much smaller. From the ratio \(\epsilon_{\mathrm{Fe}}/\epsilon_{\mathrm{p}}\simeq 10\), we estimate a difference in \(\beta\) of about \(2.4\) radiation lengths between the average proton and iron shower to reach the shower maximum. This is in accordance with simulations, which imply\({}^{8}\) \(\langle X_{\mathrm{max}}\rangle_{\mathrm{p}}-\langle X_{\mathrm{max}}\rangle_{\mathrm{Fe}}\simeq 100\,\mathrm{g}\,\mathrm{cm}^{-2}\).
Footnote 8: The effect of the depth of the first interaction is neglected.
## V Fitting simulated data
Figure 1: The behaviour of \(\epsilon\) and \(E_{\mathrm{cut}}\) calculated from the Monte-Carlo values of \(X_{1}\), \(X_{\mathrm{max}}\), \(N(t_{\mathrm{max}})\), and \(E_{0}\) from simulated air showers produced with the Sibyll2.3d model of hadronic interactions. The markers show data points from individual shower simulations; the line shows a log-linear regression.

To test the ability of the Greisen function to describe longitudinal profiles of air showers, we examine and fit simulated shower profiles. The simulation library contains the same configuration of primary energies, particles, and hadronic interaction modules as mentioned before. We compare the results against those obtained by fitting the same showers with the very commonly used GH function, which in terms of the slanted depth \(X\) reads as
\[\begin{split} N(X)=N_{\mathrm{max}}\,\bigg{(}\frac{X-X_{1}}{X_{ \mathrm{max}}-X_{1}}\bigg{)}^{\frac{X_{\mathrm{max}}-X_{1}}{\Lambda}}\\ \times\exp\bigg{[}\frac{X_{\mathrm{max}}-X}{\Lambda}\bigg{]}, \end{split} \tag{14}\]
with \(N(X)=0\) for \(X\leq X_{1}\), the depth of the shower maximum \(X_{\mathrm{max}}\), the maximum value \(N_{\mathrm{max}}\) of the function, and a characteristic length \(\Lambda\), which is related but not equal to the electromagnetic interaction length. Approximately one expects \(\Lambda\simeq 3\,X_{0}/2\) (cf. Eq. (13) and [15; 16]).
To describe the shape of the shower profiles, we use \(N_{\mathrm{max}}\), \(X_{\mathrm{max}}\), \(X_{1}\), and \(\Lambda\) as free parameters for the GH function, and \(\epsilon\), \(X_{\mathrm{max}}\), \(X_{1}\), and \(X_{0}\) for the Greisen function. We estimate the uncertainty of the individual MC data points to be \(0.3\,\mathrm{PeV}/(\mathrm{g\,cm}^{-2})\). The tails of the profiles (\(\mathrm{d}E/\mathrm{d}X\leq 0.8\,\mathrm{PeV}/(\mathrm{g\,cm}^{-2})\)) are not used in any of the fits. The constant threshold for the tails as well as the uncertainty of the data points were estimated so that \(\chi^{2}/\mathrm{ndf}\) is \(\sim 1\) for a mixture of proton and iron showers with primary energies of \(10^{19}\,\mathrm{eV}\). Examples of fitted profiles of photon-, proton-, and iron-induced showers are given in Fig. 2.
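A fit of this kind can be set up, for instance, with scipy; the sketch below (ours, not the analysis code of the article) uses the tail threshold and point uncertainty quoted above, while the starting values are rough guesses:

```python
import numpy as np
from scipy.optimize import curve_fit

def greisen_dEdX(x, eps, x_max, x1, x0):
    """Modified Greisen function in slant depth X with the four free
    parameters used in the text: eps, X_max, X_1, and X_0."""
    x = np.asarray(x, dtype=float)
    tp = (x - x1) / x0
    beta = (x_max - x1) / x0
    out = np.zeros_like(x)
    mask = tp > 0
    s = 3.0 * tp[mask] / (tp[mask] + 2.0 * beta)
    out[mask] = eps / np.sqrt(beta) * np.exp(tp[mask] * (1.0 - 1.5 * np.log(s)))
    return out

def fit_profile(x, dedx):
    """Fit a single profile; x in g/cm^2, dE/dX in PeV/(g cm^2)."""
    keep = dedx > 0.8                    # disregard the profile tails
    sigma = np.full(keep.sum(), 0.3)     # estimated uncertainty per data point
    p0 = (1e-6, 750.0, 20.0, 40.0)       # rough guesses for eps, X_max, X1, X0
    return curve_fit(greisen_dEdX, x[keep], dedx[keep], p0=p0, sigma=sigma)
```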
The \(\chi^{2}\)-distributions for the Greisen and GH functions fitted to simulated data from Sibyll2.3d showers at a primary energy of \(10^{19}\,\mathrm{eV}\) are depicted in Fig. 3. Additional \(\chi^{2}\)-distributions from showers simulated with different hadronic interaction models and primary energies are given in Fig. 10 (note that the estimated uncertainty of the individual data points, as well as the limit for fitting the tails, was not adjusted for the different primary energies). We find that, averaged over all energies and hadronic interaction models, the values of \(\chi^{2}/{\rm ndf}\) are smaller for the Greisen function fit than for the fit using a GH function, implying a better match of the function to the profile data.

Figure 2: Example shower profiles with corresponding best fits. The showers were initiated by a (_top to bottom_) gamma-ray, proton, and iron primary particle with a primary energy of \(10^{19}\,\mathrm{eV}\) each, using the Sibyll2.3d model of hadronic interactions. The best fit values corresponding to both the Greisen function (_red_) and the Gaisser-Hillas function (_orange dashed_) are given in each panel along with the MC values of \(X_{\mathrm{max}}\). Additionally, the best fit values of \(X_{\mathrm{max}}\) are depicted by vertical lines. The profile tails (_gray data points_) were disregarded for the fits. Below each panel, the relative deviation \(\delta=(f-d)/d\) of the function value \(f\) and the simulated profile data \(d\) is shown for both functions in the respective color.

Figure 3: \(\chi^{2}\)-distributions for the Greisen and the GH functions fitted to simulated Sibyll2.3d air showers with primary energies of \(10^{19}\,\mathrm{eV}\). The distributions for the Greisen (GH) functions are shown as a full (dashed) line, for each of the three primary particles. The respective mean of each distribution is indicated by a vertical line.
Furthermore, we investigate the ability of the Greisen function to recover the depth of the shower maximum as well as the calorimetric energy deposit of the shower. The distributions of the residuals \(X_{\rm max}^{\rm rec}-X_{\rm max}^{\rm MC}\) as a function of the Conex MC values of \(X_{\rm max}\) are depicted in Fig. 4. We observe that \(X_{\rm max}\) can be recovered very accurately from the simulated profile data using both functions, with an average precision of about \(4\,{\rm g}\,{\rm cm}^{-2}\) for both fit functions. The performance of the Greisen function at "finding" the right depth of the shower maximum is thus approximately equal to that of the GH function.
The calorimetric energy deposit of the shower can be obtained by integrating the fitted profile function from \(X_{1}\) up to \(\infty\). To obtain the calorimetric energy deposit from the best fit of both functions, we integrate numerically\({}^{9}\) from the respective best-fit value of \(X_{1}\) to \(2000\,{\rm g}\,{\rm cm}^{-2}\lesssim\infty\). The relative residuals of the recovered calorimetric energy deposit \(E_{\rm cal}\) with respect to the simulated calorimetric energy deposit \(E_{\rm cal}^{\rm MC}\) are depicted in Fig. 5 as a function of the Conex MC values of \(X_{\rm max}\). The accuracy and precision of the recovered calorimetric energy as an estimator of the primary energy are the same for the Greisen and the GH function for all primary particles (and all hadronic interaction models). The performance of the Greisen function in estimating the primary energy of the particle initiating the shower is thus equal to that of the GH function.
Footnote 9: The result for the calorimetric energy deposit as obtained from the fitted and numerically integrated Greisen function is approximately the same as using the formula given in Eq. (10).
Additionally, we present two-dimensional distributions of \(X_{1}\) and \(X_{0}\) obtained from the Greisen function fit, as well as \(X_{1}\) and \(\Lambda\) from the GH function fit in Fig. 11.
Given the fact that \(\epsilon\) is independent of the primary energy of the shower, and that the MC distributions of \(\epsilon\) (cf. Fig. 9 (_left_)) are dependent on the type of primary particle, it is tempting to examine the primary-mass sensitivity of the best-fit values of \(\epsilon\). In Fig. 6 we show the two-dimensional distributions of the best-fit results of \(X_{\rm max}\) and \(\epsilon\). To remove the direct dependence on the primary energy, instead of the true \(X_{\rm max}\), here we use
\[X_{\rm max}^{19}:=X_{\rm max}-58\,{\rm g}\,{\rm cm}^{-2}\,\lg\left(E_{\rm cal }/10^{19}\,{\rm eV}\right), \tag{15}\]
assuming a constant decadal elongation rate of approximately \(58\,{\rm g}\,{\rm cm}^{-2}\), and using \(E_{\rm cal}\) as obtained from the numerical integration of the best-fit function.
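As a small worked example, the correction of Eq. (15) can be applied as follows (our own sketch):

```python
import numpy as np

def x_max_19(x_max, e_cal_eV):
    """Eq. (15): X_max corrected to a reference energy of 10^19 eV,
    assuming a constant decadal elongation rate of 58 g/cm^2."""
    return x_max - 58.0 * np.log10(e_cal_eV / 1e19)

# A shower with X_max = 810 g/cm^2 at E_cal = 10^19.5 eV yields
# X_max^19 = 810 - 58 * 0.5 = 781 g/cm^2.
```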
From Fig. 6 it is obvious that the separation of the distributions of individual primary particles increases when the best-fit values of \(\epsilon\) are considered alongside \(X_{\rm max}\). Numerically, the means of the distributions of \(X_{\rm max}^{19}\) obtained from a fit to proton and iron shower profiles are approximately 1.5 average standard deviations apart; on the diagonal line, which combines the information of \(\epsilon\) and \(X_{\rm max}\), the means of the proton and iron distributions are separated by almost 1.9 average standard deviations\({}^{10}\). In the case of photon-hadron separation, the distance improves from 1.4 to 1.6. Thus, using Conex simulations, we see a clear improvement in terms of the separation of primary particles when employing the combination of \(\epsilon\) and \(X_{\rm max}\) from the Greisen function over \(X_{\rm max}\) only. Additional indicators for photon-like showers are the obtained values for \(\chi^{2}\) and \(X_{0}\) (cf. Figs. 3 and 11). The Greisen function thus might be useful when trying to identify ultrahigh-energy photons in fluorescence detector data.
Footnote 10: For the estimation, see Eq. (11).
To compare against the behaviour of the GH function, in Fig. 12 we show the two-dimensional distributions of \(X_{\rm max}^{19}\) and \(N_{\rm max}^{19}=N_{\rm max}/(E_{\rm cal}/10^{19}\,{\rm eV})\), as obtained from the GH function fitted to simulated air-shower data. As can be seen from Fig. 12, \(N_{\rm max}^{19}\) does not yield additional information about the mass of the primary particle on its own, while \(\epsilon\) does.
## VI Fitting the Greisen function with fixed shape
To boost the performance of the fitting procedure given only poor data, the GH function can be used with constraints to fix the shape of the function to an expected shape of the profile data [21]. These constraints are realized by reparametrizing the GH function in terms of "\(L\)" and "\(R\)" and then constraining these new parameters, which define the width and skewness of the function, respectively (see [21] for details). However, the average values of \(L\) and \(R\) depend on the primary particle of the shower [22; 23]. Using the Greisen function, the shape of the profile can be determined simply by fixing or constraining the parameter \(\epsilon\). Moreover, one can easily choose whether the function should resemble an average gamma-ray, proton, or iron shower, depending on the corresponding value of \(\epsilon\) (cf. Fig. 9).
To demonstrate, we fix \(\epsilon=10^{-6.2}\,{\rm PeV}/({\rm g}\,{\rm cm}^{-2})\) as a compromise between iron-like and proton-like shower profiles and fit the simulated data with only three free parameters, namely \(X_{1}\), \(X_{\rm max}\), and \(X_{0}\). As can be seen from Fig. 7, the \(X_{\rm max}\) bias is minimal for proton showers when using the Greisen function with a hadron-like fixed shape, but increases for gamma-ray-induced showers and heavy nuclei. The estimated calorimetric energy deposit, however, is almost unaffected (the bias for hadronic showers changes by \(\approx 0.5\%\)) when using only three free parameters. Lastly, the best-fit values of \(X_{\rm max}\) and \(X_{1}\) become highly correlated for the Greisen function with fixed \(\epsilon\), as expected from Eqs. (10) and (12).
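In terms of the fitting sketch given earlier, fixing the shape amounts to freezing \(\epsilon\) and fitting only the remaining three parameters; for instance:

```python
from scipy.optimize import curve_fit

EPS_FIXED = 10 ** -6.2  # PeV/(g cm^2), the compromise value used in the text

def greisen_fixed_shape(x, x_max, x1, x0):
    """Three-parameter Greisen fit with the shape parameter eps frozen;
    greisen_dEdX is the four-parameter function from the earlier sketch."""
    return greisen_dEdX(x, EPS_FIXED, x_max, x1, x0)

# popt, pcov = curve_fit(greisen_fixed_shape, x[keep], dedx[keep],
#                        p0=(750.0, 20.0, 40.0), sigma=sigma)
```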
## VII Discussion and Summary
In this article, we present the Greisen function in its original form and discuss its relation to the shower age parameter \(s\). Furthermore, we present a way to derive the function from the literature. We show that with slight modifications the Greisen function can be rewritten to match individual simulated air-shower profiles, even from hadronic primaries. We confirm this statement using simulated air showers from different primary particles at different energies, and using different hadronic interaction models. Contrary to popular belief, the Greisen function matches the simulated air-shower profiles even somewhat better than the most commonly used Gaisser-Hillas profile function. In contrast to the Gaisser-Hillas function, which was introduced as an alternative to the Constant Intensity Cut method, the Greisen function was derived to describe the full longitudinal profiles of air showers.
Figure 4: Distributions of the residuals of the reconstructed values of \(X_{\rm max}\) as a function of the Conex MC values of \(X_{\rm max}\) using the Greisen function (_left_) and the GH function (_right_). The showers were simulated with primary energies of \(10^{18.5}\,\mathrm{eV}\), \(10^{19}\,\mathrm{eV}\), and \(10^{19.5}\,\mathrm{eV}\), using the Sibyll2.3d model of hadronic interactions. The overall mean and standard deviation of the distribution are given in the upper right corner. The distributions of residuals for the individual primary particles are colored accordingly.

Figure 5: Distributions of the difference of the recovered calorimetric energy \(E_{\rm cal}\) and the simulated calorimetric energy deposit \(E_{\rm cal}^{\rm MC}\) as a function of the Conex MC values of \(X_{\rm max}\) using the Greisen function (_left_) and the GH function (_right_). The showers were simulated with primary energies of \(10^{18.5}\,\mathrm{eV}\), \(10^{19}\,\mathrm{eV}\), and \(10^{19.5}\,\mathrm{eV}\), using the Sibyll2.3d model of hadronic interactions. The overall mean and standard deviation of the distribution are given in the upper right corner. The distributions of residuals for the individual primary particles are colored accordingly.

Figure 6: Two-dimensional distributions of the best-fit values of \(X_{\rm max}^{19}\) and \(\epsilon\) using the Greisen function to fit simulated longitudinal profiles of showers with primary energies of \(10^{18.5}\,\mathrm{eV}\), \(10^{19}\,\mathrm{eV}\), and \(10^{19.5}\,\mathrm{eV}\). The curved lines show the estimated \(1\sigma\) extent of the respective distributions. All showers were simulated using the Sibyll2.3d model of hadronic interactions.

We analyse the performance of the Greisen function to recover air-shower observables from simulated profile data assuming an ideal detector. In this analysis, we show that the Greisen function yields approximately the same performance as the Gaisser-Hillas function in determining the calorimetric energy deposit and the depths of the shower maxima from simulated showers at different primary energies using different hadronic interaction models.
We identified the shape parameter \(\epsilon\) of the Greisen function, which is primary-mass sensitive and can help distinguish different types of primary particles in addition to the slanted depth of the shower maximum \(X_{\rm max}\). Lastly, we demonstrate that fixing \(\epsilon\) can elegantly fix the shape of the Greisen function, and thus shower profiles can be fitted even with only three free parameters. In this case, while the recovered values of \(X_{\rm max}\) are slightly biased, the accuracy and precision of the obtained calorimetric energy deposit are unaffected.
We conclude that the Greisen function proves itself useful to describe air-shower profiles and bears additional potential for photon-hadron separation as well as mass-composition studies, and could thus be used in the search for light particles in air-shower data.
## VIII Acknowledgements
The authors would like to thank Alexey Yushkov, Eva Santos, and Darko Veberic for fruitful comments and discussion. This work was partially supported by the Ministry of Education, Youth and Sports of the Czech Republic and by the European Union under the grant FZU researchers, technical and administrative staff mobility, registration number CZ.02.2.69/0.0/0.0/18_053/0016627. This work was supported by the Czech Science Foundation - Grant No. 21-02226M.
## IX Code availability
The analysis code for this article is available upon request under:
gitlab.com/stadelmaier/greisen_fct_fit
Figure 7: The \(X_{\rm max}\)-bias (_top_) and \(E_{\rm cal}\)-bias (_middle_) as a function of the Conex MC value of \(X_{\rm max}\), as well as the correlation of \(X_{\rm max}\) and \(X_{1}\) (_bottom_), using a fixed-shape Greisen function with only three free parameters to fit simulated air-shower data. The showers were simulated using the Sibyll2.3d model of hadronic interactions with primary energies of \(10^{18.5}\,\mathrm{eV}\), \(10^{19}\,\mathrm{eV}\), and \(10^{19.5}\,\mathrm{eV}\). |
2305.12798 | Word Embeddings Are Steers for Language Models | Language models (LMs) automatically learn word embeddings during pre-training
on language corpora. Although word embeddings are usually interpreted as
feature vectors for individual words, their roles in language model generation
remain underexplored. In this work, we theoretically and empirically revisit
output word embeddings and find that their linear transformations are
equivalent to steering language model generation styles. We name such steers
LM-Steers and find them existing in LMs of all sizes. It requires learning
parameters equal to 0.2% of the original LMs' size for steering each style. On
tasks such as language model detoxification and sentiment control, LM-Steers
can achieve comparable or superior performance compared with state-of-the-art
controlled generation methods while maintaining a better balance with
generation quality. The learned LM-Steer serves as a lens in text styles: it
reveals that word embeddings are interpretable when associated with language
model generations and can highlight text spans that most indicate the style
differences. An LM-Steer is transferrable between different language models by
an explicit form calculation. One can also continuously steer LMs simply by
scaling the LM-Steer or compose multiple LM-Steers by adding their
transformations. Our codes are publicly available at
\url{https://github.com/Glaciohound/LM-Steer}. | Chi Han, Jialiang Xu, Manling Li, Yi Fung, Chenkai Sun, Nan Jiang, Tarek Abdelzaher, Heng Ji | 2023-05-22T07:52:04Z | http://arxiv.org/abs/2305.12798v2 | # LM-Switch: Lightweight Language Model
###### Abstract
In recent years, large language models (LMs) have achieved remarkable progress across various natural language processing tasks. As pre-training and fine-tuning are costly and might negatively impact model performance, it is desired to efficiently adapt an existing model to different _conditions_ such as styles, sentiments or narratives, when facing different audiences or scenarios. However, efficient adaptation of a language model to diverse conditions remains an open challenge. This work is inspired by the observation that text _conditions_ are often associated with the selection of certain words in a context. Therefore we introduce LM-Switch, a theoretically grounded, lightweight and simple method for generative language model conditioning. We begin by investigating the effect of conditions in Hidden Markov Models (HMMs), and establish a theoretical connection with language models. Our finding suggests that _condition_ shifts in HMMs are associated with linear transformations in word embeddings. LM-Switch is then designed to deploy a learnable linear factor in the word embedding space for language model conditioning. We show that LM-Switch can model diverse tasks, and achieves comparable or better performance than state-of-the-art baselines in LM detoxification and generation control, despite requiring no more than 1% of the parameters of the baselines and little extra time overhead compared with base LMs. It is also able to learn from as little as a few sentences or a single document. Moreover, a learned LM-Switch can be transferred to other LMs of different sizes, achieving a detoxification performance similar to the best baseline. We will make our code available to the research community following publication.\({}^{1}\)
Footnote 1: Please be advised that this paper contains results and examples that some readers may find controversial, included solely for research purposes to explore model capabilities.
## 1 Introduction
In recent years, large language models (LLMs) have made significant progress in various natural language processing (NLP) tasks such as machine translation, sentiment analysis, schema induction and summarization [4; 19; 24; 39; 35]. LLMs are typically pre-trained on a unified text corpus. However, there are various scenarios where it is desirable to steer a language model's generation according to different _conditions_, such as stances, styles, and sentiments. Examples of these cases include tailoring a language model's output to desired communication goals, creating personalized and relatable content to connect with target audiences, or mitigating biases, managing risks, and ensuring fair and unbiased representation. When facing these diverse needs, retraining or fine-tuning is not only inefficient [4] but can also negatively impact model performance [56]. This work is thus motivated by the need to efficiently adapt an existing LM to diverse _conditions_ while fully taking advantage of its generation power.
There has been increasing attention on controlling LM generations. Besides directly training an LM on domain-specific datasets [57; 25; 55], other techniques have been proposed for guiding LMs at decoding time. These attempts include superposing attribute classifiers (such as sentiment and toxicity) as constraints when sampling tokens [22; 5; 28; 50; 23], treating decoding as an optimization problem [21], or grafting complex adaptor modules onto existing LMs [13]. Despite these efforts, due to the large number of parameters in the LMs to be adapted and the extra computational burden at decoding time, efficient conditioning of LMs still remains an open question. Recently, prompting with instructions has emerged as a novel method for LM interaction [4; 35]. However, the performance either relies on the quality of very large LLMs, or requires deliberately pre-training an instruction-controlled LM on a related corpus [58; 40], which prevents scaling up to larger demands.
To address these challenges, we introduce LM-Switch, a theoretically grounded yet empirically straightforward and lightweight plug-in for efficient and versatile conditioning of language models, obtained by only transforming the word embedding space. This work is inspired by the observation that diverse levels of conditions in text, including appropriateness, sentiments, and stances, are tied to the specific choice of words used within a given context. We start by theoretically investigating the effect of conditions on text distributions. Specifically, we measure the effect of adding conditions into Hidden Markov Models, and establish an association between condition shifts and word embedding transformations in LMs. This inspires the design of our proposed method, LM-Switch, where we insert a learnable linear bias into the LM word embeddings. Specifically, the embedding \(\mathbf{e}_{v}\) of each word \(v\in\mathcal{V}\) is replaced with \(\mathbf{e}_{v}+\epsilon W\mathbf{e}_{v}\). Here \(W\) is the "switch" matrix determining the effect of LM conditioning, while \(\epsilon\) acts as a "switching value" to indicate polarity and intensity, e.g., +1 for positive and -1 for negative sentiment.
Empirically, we demonstrate that LM-Switch is capable of achieving comparable or better performance on language model detoxification and generation control, despite using a much smaller model size and less decoding time. It offers several other advantages over existing approaches. It is able to learn from as little as one article or dozens of sentences. Another benefit is the ability to transfer a learned LM-Switch to other LMs with different sizes. On the detoxification task, this transfer achieves performance as good as the best baseline. Moreover, LM-Switch is supported by a theoretical guarantee of linearity, which enables continuous and compositional control. This allows for dealing with a multitude of diverse and nuanced situations, such as personalized or customized generation, without the need for re-training in each scenario. Moreover, we are able to interpret the LM-Switch and display the most indicative words associated with a condition. Broader impacts are discussed in Appendix A. In summary, this paper makes the following contributions:
* We propose LM-Switch, a theoretically supported and lightweight method for language model conditioning.
* We empirically demonstrate the effectiveness of LM-Switch on applications such as LM detoxification and controlled generation.
Figure 1: An overview of LM-Switch. **(a)**: LM-Switch applies a linear factor \(\epsilon W\mathbf{e}_{v}\) to each word embedding for language model conditioning. **(b)**: During training, we use a positively switched model \(M(\epsilon W)\) to maximize likelihood on positively labelled texts, and vice versa. **(c)**: For generation, one only needs to specify a switch value \(\epsilon\), and then proceed with normal decoding.
* We also highlight and prove the benefits of LM-Switch, including data efficiency, transferability, continuous and compositional control, and interpretability.
## 2 Related Work
**Control of Language Models** has been of growing interest in recent years, motivated by the increasing capabilities of LMs. This area originates from the need to leverage the generation capabilities of large language models, while avoiding the need for time-consuming and costly retraining or fine-tuning. These attempts include applying attribute classifiers or heuristic constraints at decoding time [22; 5; 28; 50], treating the generation process as an optimization problem over the embedding or token sequences [21], or post-editing the output [23]. These techniques are often computationally expensive when searching for the output, and rely on the availability and quality of suitable external classifiers. More recently, prompting-based control for large language models has received much attention, with the control achieved by engineering the input prompt to guide the model's generation. However, these methods often rely on the quality and availability of large language models [4; 35], and may also necessitate deliberate training [40; 58]. It can also be challenging to design effective prompts for complex or nuanced control goals. Probably most closely related to our work are attempts at discovering "steering" vectors or tokens [45; 26], which might originate from similar work in image generation [14; 12]. Different from our model, these efforts focus on other applications such as multi-task learning and sentence recovery, and the learned vectors (instead of matrices as in our work) are not shown to be transferable or interpretable, nor do they enable flexible control.
**Controllable Text Generation** is a broader topic involving generating text according to various control objectives such as dialogue history, personality, format or knowledge [54; 7; 17; 51; 36; 29; 48; 53; 57]. Different from prior work, which often requires training a task-specific model, our model mainly focuses on providing plug-and-play conditioning over a diverse range of off-the-shelf language models. Note that some of these methods do not support user-provided prompts [25; 38; 30; 37], making them incompatible with the evaluation setting of this study.
**Language Model Detoxification** Motivated by the goal of addressing the systematic biases embedded in language models, there are efforts in conducting language model de-biasing or de-toxification [31; 16]. Approaches span all aspects of the language model pipeline. A line of work focuses on automatically obtaining cleaner data [1; 46; 6]. Another line of work modifies the model workflow design to explicitly accommodate the bias factors [46; 42; 52; 34; 49]. The line of work most related to the method proposed herein involves manipulating the embedding space, e.g., via Principal Component Analysis and Nullspace Projection [27; 3; 41]. The evaluation in these settings [15; 32; 33] mostly consists of quiz-question checking for stereotypical misbeliefs. More related to our method are those mentioned in language model control [22; 5; 28; 50; 21], which constrain or guide text generation according to a classifier. A unique contribution of our work is that the learned LM-Switch can be transferred to also detoxify other off-the-shelf language models without a costly training process.
## 3 LM-Switch: Motivation and Formulation
In this section, we provide a theoretical inspiration of LM-Switch. Hidden Markov Model (HMM) is a widely used framework for analyzing discrete stochastic processes. Because of its generality (being able to model arbitrary distributions), intuitiveness and interpretability (containing a structured state space), it has long been used as a primary choice when modeling language distribution. Our theoretical analysis shows that under some assumptions switching between conditions is equivalent to a linear transform in word embedding spaces. This observation then inspires the derivation of our proposed model, LM-Switch.
### Preliminaries: HMM and LM
**Hidden Markov Models** Hidden Markov Models (HMMs) [2] describe a discrete stochastic process with a set of \(n\) states \(\mathcal{S}\) and a set of \(m\) observations or emissions \(\mathcal{O}\), with arbitrary indexing of \(\mathcal{S}\) and \(\mathcal{O}\). The distribution for the time step \(t=0\) is determined by the initial state distribution \(s_{0}\sim\pi\). For each later time step \(t\geq 1\), the state transition probabilities are represented by a matrix \(\mathbf{T}\), where \(T(s,s^{\prime})=P(s_{t+1}=s^{\prime}|s_{t}=s)\) denotes the probability of transitioning from state \(s\) to state \(s^{\prime}\). At each time step one observation \(o_{t}\) is emitted, with the emission probabilities represented by
a matrix \(\mathbf{B}\), with \(B(s,o)=P(o_{t}=o|s_{t}=s)\). A sequence of observations can be denoted as \(\mathbf{o}=\{o_{1},o_{2},\dots,o_{T}\}\). The probability distribution over sequences \(\mathbf{o}\) then follows the formula:
\[P(o_{1},\cdots,o_{T};\pi)=\pi^{\top}\left(\prod_{t=1}^{T-1}\text{diag}(\mathbf{p}(o_{t}))T\right)\mathbf{p}(o_{T}), \tag{1}\]
where \(\mathbf{p}(o)\) is a \(|\mathcal{S}|\)-dim vector indicating \(P(o\mid s)\) for all states \(s\in\mathcal{S}\).
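As a sanity check of Eq. (1), the sequence probability of a small HMM can be evaluated directly with this matrix product; the toy numbers below are arbitrary, and this sketch is ours rather than part of the paper:

```python
import numpy as np

def hmm_sequence_prob(pi, T, B, obs):
    """Probability of an observation sequence under an HMM, following Eq. (1).

    pi  -- initial state distribution, shape (n,)
    T   -- transition matrix, T[s, s'] = P(s_{t+1}=s' | s_t=s), shape (n, n)
    B   -- emission matrix, B[s, o] = P(o | s), shape (n, m)
    obs -- list of observation indices o_1, ..., o_T
    """
    v = pi.copy()
    for o in obs[:-1]:
        v = (v * B[:, o]) @ T            # v^T diag(p(o)) T
    return float(v @ B[:, obs[-1]])      # final contraction with p(o_T)

# Toy example with n = 2 states and m = 3 observations:
pi = np.array([0.6, 0.4])
T = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(hmm_sequence_prob(pi, T, B, [0, 2, 1]))
```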
**Language Models** In generative language models, the sequence is generated word-by-word by a conditional probability \(P(o_{t}\mid o_{1},\cdots,o_{t-1})\). The common technique to model this probability is to first calculate the inner product between a contextual vector \(\mathbf{c}(o_{1},\cdots,o_{t-1})\) and word embeddings \(\mathbf{E}=(\mathbf{e}_{o},\cdots)\in\mathbb{R}^{d\times|\mathcal{O}|}\), namely, \(\mathbf{l}=\mathbf{c}(o_{1},\cdots,o_{t-1})^{\top}\mathbf{E}\). Here, \(\mathbf{l}\) is known as the word _logits_, which then usually passes through a softmax operator to get a distribution over words. For simplicity of analysis, in this work we assume a linear formulation and let the conditional probability \(P(o_{t}|o_{1},\cdots,o_{t-1})=\mathbf{c}(o_{1},\cdots,o_{t-1})^{\top}\mathbf{e}_{o_{t}}\). By the chain rule, multiplying the conditional probabilities gives us the full probability: \(\prod_{t=1}^{T}P(o_{t}\mid o_{1},\cdots,o_{t-1})=P(o_{1},\cdots,o_{T})\).
We are then interested in the situation where a language model is good enough to represent a distribution equivalent to that of an HMM. Assuming full column rank for \(\mathbf{E}\) and for the stacked emission vectors \(\mathbf{p}(o)\), we have the following connection between LMs and HMMs:
**Proposition 1**.: _There exist projection matrices \(R_{1}\) and \(R_{2}\) so that \(R_{1}^{\top}R_{2}=I_{n}\) and_
\[\mathbf{c}(o_{1},\cdots,o_{t-1})^{\top}=\left(\frac{\pi^{\top}\prod_{t^{\prime}=1}^{t-1}\text{diag}(\mathbf{p}(o_{t^{\prime}}))T}{\pi^{\top}\left(\prod_{t^{\prime}=1}^{t-2}\text{diag}(\mathbf{p}(o_{t^{\prime}}))T\right)\mathbf{p}(o_{t-1})}\right)R_{1}^{\top},\quad\mathbf{e}_{o}=R_{2}\mathbf{p}(o). \tag{2}\]
### Conditioned Hidden Markov Model
In this study, we aim to model the influence of _conditions_ in text generation. This section describes how we incorporate conditions in HMMs. Conventionally, one assumes a \(d\)-dimensional state representation \(\phi_{s}\) for every state \(s\), and a \(d\)-dimensional \(\psi_{o}\) for each observation \(o\), so that the probabilities can be computed as \(T(s,s^{\prime})=\phi_{s}^{\top}A\phi_{s^{\prime}}\), \(B(s,o)=\phi_{s}^{\top}\psi_{o}\) and \(\pi(s)=\phi_{\pi}^{\top}\phi_{s}\) for some \(\phi_{\pi}\). We also use matrices \(\Phi,\Psi\) to denote the stacked representations \(\Phi=(\phi_{s}|s\in\mathcal{S}),\Psi=(\psi_{o}|o\in\mathcal{O})\). Here we introduce an additional _condition_ component in state representations, so that \(\phi_{s}\) can be partitioned into two sub-vectors: \(\phi_{s}=\begin{pmatrix}\phi_{s,\text{semantic}}\\ \phi_{s,\text{condition}}\end{pmatrix}\). Here \(\phi_{s,\text{semantic}}\in\mathbb{R}^{d_{s}}\) represents the \(d_{s}\)-dim semantic information, and \(\phi_{s,\text{condition}}\in\mathbb{R}^{d_{c}}\) the \(d_{c}\)-dim condition information related to state \(s\). Then we assume that the transition probability \(T(s,s^{\prime})\) comes from both semantic relations and conditional similarities between \(s^{\prime}\) and \(s\): \(T(s,s^{\prime})=\phi_{s,\text{semantic}}^{\top}A^{\prime}\phi_{s^{\prime},\text{semantic}}+\phi_{s,\text{condition}}^{\top}\phi_{s^{\prime},\text{condition}}\).
We also make the following assumptions regarding the state representations:
**Assumption 1**.: _State representations \(\phi\) also satisfy the following properties:_
_1. Values for each dimension are uniformly normalized to a constant: \(\forall i\in[1..d],\sum_{s\in\mathcal{S}}\phi_{s,i}^{2}=C\)._
_2. Dimensions are linearly independent: \(\forall i,j\in[1..d]\) and \(i\neq j\), \(\sum_{s\in\mathcal{S}}\phi_{s,i}\phi_{s,j}=0\)._
_3. Dimensions are also conditionally independent: if \(i,j\in[1..d],k\in[d_{s}+1..d]\) are not all the same, \(\sum_{s\in\mathcal{S}}\phi_{s,i}\phi_{s,j}\phi_{s,k}=0\)._
The validity of the assumption is discussed in Appendix J. We then present the result below, revealing that shifting from one initial condition to another is equivalent to a linear transformation in the word embedding space, which accords with our motivating observation that conditions are associated with the selection of words in context:
**Theorem 1**.: _Assume Assumption 1 holds. Suppose there are two initial distributions \(\pi=\phi_{\pi}^{\top}\Phi,\pi^{\prime}=\phi_{\pi^{\prime}}^{\top}\Phi\), so that \(\phi_{\pi}\) and \(\phi_{\pi^{\prime}}\) only differ in their condition parts: \(\phi_{\pi,\text{semantic}}=\phi_{\pi^{\prime},\text{semantic}}\). Also suppose the elements in \(\phi_{\pi,\text{condition}}\) are non-zero. Then there exists a matrix \(W\) such that, by transforming the word embeddings from \(E\) to \(WE\), the LM which originally simulates the text distribution starting with \(\pi\) becomes equivalent to a distribution initiating from \(\pi^{\prime}\)._
### LM-Switch Formulation
Inspired by the discovery in the section above, we propose LM-Switch to apply a linear transform in the word embedding space. LM-Switch is conceptually simple and straightforward to implement. An illustration of LM-Switch is presented in Figure 1(a). Specifically, let \(M\) be a language model with fixed parameters. We replace each of its output word embeddings \(\mathbf{e}_{v}\) with \(\mathbf{e}_{v}+\epsilon W\mathbf{e}_{v}\), and call the resulting language model \(M^{\prime}=M(\epsilon W)\) a "switched model". Here the "switch matrix" \(W\) contains the only learnable parameters and determines the effect of LM-Switch, while \(\epsilon\) is a manually adjustable scalar indicating the polarity and intensity of the "switch value". Without loss of generality, we arbitrarily pick a small value \(\epsilon_{0}=1e-3\) as the default switch value.\({}^{2}\) We use \(P(\mathbf{o}|\epsilon W)\) to denote the probability of \(M^{\prime}\) generating sequence \(\mathbf{o}=(o_{1},\cdots,o_{T})\). Figure 1(b, c) shows the training and generation process of LM-Switch. During training, we use the positively switched model \(M(\epsilon W)\) to fit the positively labelled texts, with maximal likelihood as the training objective. When negatively labelled texts are available, we also fit them with \(M(-\epsilon W)\). When generating with LM-Switch, the user only needs to specify a switch value \(\epsilon\) and then decode the language model. More details are in Appendix D.
Footnote 2: Using \(\epsilon W\) achieves an effect equivalent to \((k\epsilon)\cdot(k^{-1}W)\) when \(k\neq 0\), so the absolute value of \(\epsilon\) itself is only meaningful when also considering the magnitude of \(W\).
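The following PyTorch sketch illustrates one possible way to realize a switched output head; `base_head` stands in for the frozen output embedding layer of the backbone LM, and all names are ours rather than the authors' released implementation:

```python
import torch
import torch.nn as nn

class LMSwitchHead(nn.Module):
    """Output head producing logits from embeddings e_v + eps * W e_v."""

    def __init__(self, base_head: nn.Linear, dim: int):
        super().__init__()
        self.base_head = base_head                    # frozen; weight rows are e_v
        for p in self.base_head.parameters():
            p.requires_grad_(False)
        self.W = nn.Parameter(torch.zeros(dim, dim))  # the only learnable part

    def forward(self, hidden: torch.Tensor, eps: float) -> torch.Tensor:
        E = self.base_head.weight                 # (vocab, dim), rows e_v
        E_switched = E + eps * (E @ self.W.T)     # row v becomes e_v + eps * W e_v
        return hidden @ E_switched.T              # logits c^T (e_v + eps W e_v)
```

Training would then maximize the usual language-modeling likelihood of positively labelled text under \(\epsilon=+\epsilon_{0}\) (and of negatively labelled text under \(\epsilon=-\epsilon_{0}\)), updating only \(W\).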
### Linearity Properties
The conceptually simple design of LM-Switch makes it an architecture-agnostic plug-in for diverse language models. We demonstrate that LM-Switch maintains a linearity guarantee, regardless of the model architecture it is applied to. The linearity enables it to achieve the desired capacities of continuous and compositional control. More specifically, even if we only train with a few discrete values of \(\epsilon\) as discussed in Section 3.3, our model allows for fine-grained adjustment of the switch value, and decoding with configuration \(M\left(k\epsilon W\right)\) as long as \(k\) is not too far off from \([-1,1]\). Moreover, if two LM-Switches \(W_{1},W_{2}\) are learned, their effects can be combined by decoding with \(M(\epsilon_{1}W_{1}+\epsilon_{2}W_{2})\), where \(\epsilon_{1},\epsilon_{2}\) are the individual switch values for \(W_{1},W_{2}\). Proofs of the two theorems are provided in Appendix C.
**Assumption 2**.: _We assume a bound on the following values: all word embeddings are bounded by \(\|\mathbf{e}_{v}\|_{2}\leq 1\); all contextual vectors are bounded by \(\|\mathbf{c}(o_{1},\cdots,o_{i})\|_{2}\leq 1\); \(W\) has its norm bounded by \(\|W\|_{2}\leq D\)._
**Theorem 2**.: _(Continuous Control) Let \(\lambda_{\max}\) be the maximum eigenvalue of \(W\). When varying \(\epsilon\)'s value, the switched model's distribution is close to a linear interpolation from \(M\) to \(M^{\prime}\):_
\[\|P(\cdot\mid k\epsilon,W)-(P(\cdot)(1-k)+kP(\cdot\mid\epsilon,W))\,\|_{1} \leq 2|k(1-k)|\epsilon^{2}L^{2}\lambda_{\max}(e^{\lambda_{\max}}-1) \tag{3}\]
**Theorem 3**.: _(Compositional Control) If we add two switching matrices \(W_{1},W_{2}\) together and use it as a new switching matrix, their switching effects on distributions are approximately linearly combined:_
\[\|P(\cdot\mid\epsilon,W_{1}+W_{2})-(P(\cdot\mid\epsilon,W_{1})+P(\cdot\mid \epsilon,W_{2})-P(\cdot))\,\|_{1}\leq 10\epsilon dL^{2}D^{2} \tag{4}\]
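In code, both theorems reduce to simple arithmetic on switch values and matrices. A hedged usage sketch, reusing the `LMSwitchHead` class from above with dummy tensors:

```python
import torch
import torch.nn as nn

dim, vocab = 16, 100
eps0 = 1e-3  # the default switch value from Section 3.3

base = nn.Linear(dim, vocab, bias=False)   # stand-in for the output embeddings
head_a = LMSwitchHead(base, dim)           # e.g. a learned sentiment switch
head_b = LMSwitchHead(base, dim)           # e.g. a learned detoxification switch
hidden = torch.randn(1, 4, dim)            # dummy contextual vectors

# Continuous control (Theorem 2): vary the switch value at decoding time.
logits_mild = head_a(hidden, eps=1.0 * eps0)
logits_strong = head_a(hidden, eps=5.0 * eps0)

# Compositional control (Theorem 3): decode with M(eps_a*W_a + eps_b*W_b).
combined = LMSwitchHead(base, dim)
with torch.no_grad():
    combined.W.copy_(5 * eps0 * head_a.W + (-5 * eps0) * head_b.W)
logits_combined = combined(hidden, eps=1.0)
```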
## 4 Applications
In this section we delve into a range of natural language applications: language detoxification, sentiment control, and political stance control. These tasks span multiple linguistic levels: lexical, semantic, pragmatic, etc. We follow [28] and use GPT2-Large\({}^{3}\) as the backbone language model.
Footnote 3: [https://huggingface.co/gpt2-large](https://huggingface.co/gpt2-large)
Footnote 4: [https://bit.ly/3cv05py](https://bit.ly/3cv05py)
### Language Detoxification
It is known that large pretrained LMs might generate toxic content that appears in the pre-training distribution [43; 8], such as inaccurate information, harmful stereotypes, and unethical content. Language model detoxification is the task of mitigating or avoiding these generations, in order to enable safe usage of language models.
**Setting:** Following [28], we use the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge\({}^{4}\) as the training dataset. For evaluation, we use 10K nontoxic prompts from the RealToxicityPrompts
dataset [8]. We randomly generate 25 sentences of up to 20 tokens using nucleus sampling [11] with \(p=0.9\). Then the toxicity scores (in range \([0,1]\)) of the generations are evaluated using the Perspective API\({}^{5}\). Two metrics are reported: the average of the maximal toxicity for each prompt ("Avg. max. toxicity"), and the probability of generating a toxicity \(>0.5\) at least once for each prompt ("Toxicity prob."). We also evaluate generation quality in terms of fluency (perplexity score measured by a GPT2-large) and diversity (Dist-{1, 2, 3}: the portion of distinct {1, 2, 3}-grams). When decoding, we use a switch value of \(5\epsilon_{0}\) for generation, selected to balance scores and quality. An ablation study on switch values can be found in Appendix E.
Footnote 5: [https://perspectiveapi.com](https://perspectiveapi.com)
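For clarity, the two toxicity metrics and the Dist-\(n\) scores can be computed from per-prompt results as in the following sketch; this is our paraphrase of the protocol, not the authors' evaluation script:

```python
import numpy as np

def toxicity_metrics(scores):
    """scores: array of shape (n_prompts, 25) with Perspective API toxicity
    scores in [0, 1] for the 25 generations of each prompt."""
    max_per_prompt = scores.max(axis=1)
    avg_max_toxicity = max_per_prompt.mean()           # "Avg. max. toxicity"
    toxicity_prob = (scores > 0.5).any(axis=1).mean()  # "Toxicity prob."
    return avg_max_toxicity, toxicity_prob

def dist_n(texts, n):
    """Dist-n: portion of distinct n-grams among all generated n-grams."""
    grams, total = set(), 0
    for text in texts:
        toks = text.split()
        ngrams = list(zip(*[toks[i:] for i in range(n)]))
        grams.update(ngrams)
        total += len(ngrams)
    return len(grams) / max(total, 1)
```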
**Baselines:** **DExperts** trains positive and negative label classifiers, and uses the difference in the two classifiers' scores to offset the LM's original logits. **DAPT**[10] simply further pretrains the language model on the non-toxic subset (filtered by the Perspective API) of the OpenWebText Corpus (OWT) [9]. **PPLM**[5] learns to use the gradients of the label classifier to update the LM's hidden representations. **GeDi**[20] is a model that uses the Bayesian rule for class-conditioned LM generation. **MuCoLa**[22] models text generation as an optimization problem regarding the classifier scores. **PromptT5**[40]: T5 is a pre-trained LM optimized for prompt-based task solving, and we use "Complete this sentence so that it embodies a {positive/negative} sentiment:" to prompt T5. Finally, vanilla **GPT2** is also adopted as an unguided baseline.
**Results and Analysis:** We present the results in Table 1. Despite the simple design, LM-Switch achieves the best detoxification scores on both metrics, reducing Avg. max. toxicity by \(>6\%\) in absolute terms. It is also noteworthy that LM-Switch demonstrates a reasonable balance between fluency (2nd lowest perplexity score) and diversity (same-level Dist-k scores as the baselines).
### Sentiment Control
We also evaluate LM-Switch's performance on an extensively studied generation task controlled by sentiment. This ability can be useful for tailoring persuasive and emotionally appealing messages to specific target audiences in marketing or advertising, or for creating personalized and engaging user experiences in chatbot systems.
**Setting:** We follow the setting in [28] and use the Stanford Sentiment Treebank (SST-5) [44] as training data, where we use texts with labels 1\(\sim\)2 as negative samples, and those with labels 4\(\sim\)5 as positive samples. For evaluation, we use HuggingFace's sentiment classifier [47]. The generation prompts are a subset of the OpenWebText Corpus filtered by the sentiment analysis classifier. Models are applied to these prompts 25 times, generating up to 20 tokens each time. We then measure the average percentage of positive generations for each prompt as the "Positivity" score. Similar to the detoxification task, we use \(5\epsilon_{0}\) for positive and \(-5\epsilon_{0}\) for negative sentiment control.
**Baselines:** Besides the baselines used in detoxification, we also list two variants of DExperts, DExperts (pos) and DExperts (neg), which use only one of the two classifiers for guiding generation.
**Results:** Table 2 presents the full results. Here the scores are more mixed, with no single model being the best on all metrics. LM-Switch, despite being a much simpler and smaller model, takes 2nd to 3rd place on all sentiment metrics and achieves a reasonable balance of fluency and diversity.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
\multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Toxicity\(\downarrow\)**} & **Fluency** & \multicolumn{3}{c}{**Diversity\(\uparrow\)**} \\
 & Avg. max. toxicity & Toxicity prob. & Output ppl.\(\downarrow\) & Dist-1 & Dist-2 & Dist-3 \\ \hline
GPT-2 (original) & 0.527 & 0.520 & 25.45 & 0.58 & 0.85 & 0.85 \\ \hline
PPLM (10\%) & 0.520 & 0.518 & 32.58 & 0.58 & 0.86 & 0.86 \\
DAPT & 0.428 & 0.360 & 31.21 & 0.57 & 0.84 & 0.84 \\
GeDi & 0.363 & 0.217 & 60.03 & 0.62 & 0.84 & 0.83 \\
DExperts & 0.302 & 0.118 & 38.20 & 0.56 & 0.82 & 0.83 \\
DExperts (GPT-3) & 0.293 & 0.111 & - & - & - & - \\
PromptT5 & 0.320 & 0.172 & 354.71 & 0.58 & 0.76 & 0.70 \\
MuCoLa & 0.308 & **0.088** & 29.92 & 0.55 & 0.82 & 0.83 \\ \hline
LM-Switch & **0.249\(\pm\)0.007** & **0.089\(\pm\)0.009** & 28.26 & 0.55 & 0.84 & 0.84 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Language model detoxification results. \(\pm\) denotes standard deviation on 3 random seeds.
**Continuous and Compositional Control:** Another advantage of LM-Switch is that we can perform continuous and compositional control, as predicted by Theorems 2 and 3. A visualization is shown in Figure 2. Specifically, in Figure 2(a) we plot the distribution shift when adjusting the sentiment switch value \(\epsilon\). We also overlay the maximum-likelihood-estimated Beta distribution. In Figure 2(b) we observe that LM-Switch can compositionally control sentiment and toxicity, even though there exists mutual influence between these two factors (e.g., a negative sentiment might also lead to more toxic comments).
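The sketch below illustrates one plausible reading of how such control composes at decoding time, with each switch contributing a logit offset of the bilinear form \(\epsilon\,e_{v}^{\top}Wh\) (the form suggested by the transfer matrix \(H^{\top}WH\) in Section 5.2); the exact integration point inside LM-Switch may differ, so treat this as illustrative only.

```python
import torch

def switched_logits(h, E, switches):
    """h: current hidden state (d,); E: output word embeddings (V, d);
    switches: list of (epsilon, W) pairs with W of shape (d, d)."""
    logits = E @ h  # unconditioned LM logits
    for eps, W in switches:
        # continuous control: vary eps; compositional control: sum several switches
        logits = logits + eps * (E @ (W @ h))
    return logits

# e.g., positive sentiment plus detoxification at once (eps0 as in Section 4):
# logits = switched_logits(h, E, [(5 * eps0, W_sent), (5 * eps0, W_detox)])
```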
### Political Stance and Agenda: Case Study
We also present a case study of political stance control of an LM. This application can be beneficial for generating diverse and balanced perspectives in genres such as news articles, and also for increasing the likelihood of content being well received by aligning with the values of the target audience. We study two scenarios. The first scenario is _pro-Russian_ vs. _anti-Russian_, where we collect 744 English tweets and manually label them as either "pro-Russia" (454 tweets) or "anti-Russia" (290 tweets). After training, we prompt our model to generate from both stances on a list of topics. In the second scenario, we select 5 pairs of news articles with contrastive political stances from Ground
\begin{table}
\begin{tabular}{c l c c c c c c c} \hline \hline
 & & \multicolumn{3}{c}{**Sentiment Positivity** / \%} & **Fluency** & \multicolumn{3}{c}{**Diversity\(\uparrow\)**} \\
**Target** & **Model** & Positive prompts & Neutral prompts & Negative prompts & Output ppl.\(\downarrow\) & Dist-1 & Dist-2 & Dist-3 \\ \hline
\multirow{7}{*}{**Positive\(\uparrow\)**} & LM-Switch & \multicolumn{2}{c}{90.70\(\pm\)2.51} & 41.23\(\pm\)6.33 & 41.20 & 0.46 & 0.78 & 0.83 \\ \cline{2-9}
 & DExperts & \multicolumn{2}{c}{94.46} & 36.42 & 45.83 & 0.56 & 0.83 & 0.83 \\
 & DExperts (pos) & \multicolumn{2}{c}{79.83} & 43.80 & 64.32 & 0.59 & 0.86 & 0.85 \\
 & GeDi & \multicolumn{2}{c}{86.01} & 26.80 & 58.41 & 0.57 & 0.80 & 0.79 \\
 & DAPT & \multicolumn{2}{c}{77.24} & 14.17 & 30.52 & 0.56 & 0.83 & 0.84 \\
 & PPLM (10\%) & \multicolumn{2}{c}{52.68} & 8.72 & 142.11 & 0.62 & 0.86 & 0.85 \\
 & PromptT5 & \multicolumn{2}{c}{68.12} & 15.41 & 362.30 & 0.58 & 0.78 & 0.72 \\ \hline
\multirow{8}{*}{**Negative\(\downarrow\)**} & GPT-2 (original) & 99.08 & 50.02 & 0.00 & 29.28 & 0.58 & 0.84 & 0.84 \\ \cline{2-9}
 & PromptT5 & \multicolumn{2}{c}{69.93} & 25.78 & 450.68 & 0.60 & 0.78 & 0.70 \\
 & PPLM (10\%) & \multicolumn{2}{c}{89.74} & 39.05 & 181.78 & 0.63 & 0.87 & 0.86 \\
 & DAPT & \multicolumn{2}{c}{87.43} & 33.28 & 32.86 & 0.58 & 0.85 & 0.84 \\
 & GeDi & \multicolumn{2}{c}{39.57} & 8.73 & 84.11 & 0.63 & 0.84 & 0.82 \\
 & DExperts (neg) & \multicolumn{2}{c}{61.67} & 24.32 & 65.11 & 0.60 & 0.86 & 0.85 \\
 & DExperts & \multicolumn{2}{c}{35.99} & 3.77 & 45.91 & 0.60 & 0.84 & 0.83 \\ \cline{2-9}
 & LM-Switch & \multicolumn{2}{c}{54.84\(\pm\)8.01} & 8.02\(\pm\)2.32 & 57.74 & 0.48 & 0.78 & 0.80 \\ \hline \hline
\end{tabular}
Table 2: Results on sentiment control task. The upper half displays positive control task and requires higher positivity score, and vise versa for the lower half. While no one model achieves best performance on all metrics, LM-Switch takes 2nd to 3rd place in all positivity metrics despite using a much simpler design and smaller parameter size. \(\pm\) denotes standard deviation on 3 random trials.
News6. In each pair, we train one LM-Switch to learn each article, and then generate on the topic to see their differences. Excerpted generation examples are shown in Table 3, with indicative text spans manually bolded. Appendix I describes the detailed setting and more results. We can observe differences in wording ("invasion" vs. "geopolitical earthquake") and agenda selection ("action against India" vs. "India's crackdown on foreigners").
Footnote 6: [https://ground.news](https://ground.news)
## 5 Analysis
### Data, Parameter and Computational Efficiency
Thanks to its simple design, LM-Switch enjoys efficiency from multiple perspectives. First, as demonstrated in Section 4.3, our model is capable of learning from only one article. As a more rigorous study, we vary the detoxification dataset size from 30 to 10k and measure LM-Switch's performance in Figure 3(b). We see that as few as 30 data points still enable LM-Switch to achieve strong detoxification scores (0.322), but also induce a high perplexity as LM-Switch overfits. When the dataset size exceeds 3k, LM-Switch acquires a good balance between detoxification and generation quality. We
Figure 3: Measuring the transferability and data efficiency of LM-Switch.
\begin{table}
\begin{tabular}{p{142.3pt} p{284.5pt}} \hline \hline
**Stance** & **Generations** \\ \hline \multirow{4}{*}{Anti-Russia} & Russia’s annexation of Crimea was an **invasion of Ukraine’s sovereign territory**, but Russia insists that Ukraine’s Crimea People’s Republic is legally Russian territory. \\ \cline{2-3} & **NATO expansion “has nothing to do” with Europe, but Putin wants war**. And while he might start war over Ukraine right away, his true motives for fighting **may not be limited to his ‘interest’ in Ukraine**. \\ \hline \multirow{4}{*}{Pro-Russia} & Russia’s annexation of Crimea was nothing short of a **geopolitical earthquake**: it has been the **biggest geopolitical event of the year.** \\ \cline{2-3} & NATO expansion under pressure. There is growing pressure on NATO and Washington to **halt the military buildup planned for Central Asia and for Russia,** which **almost certainly lead to a new military confrontation**. \\ \hline \hline \multirow{4}{*}{Times of India} & EU diplomat seeks **action against India** on at least 1 issue \\ \cline{2-3} & The European Union’s chief diplomat seeks **action against India** on at least 1 issue, ahead of its talks with the European Union. \\ \cline{2-3} & The EU diplomat said **the view that Europe should embrace India to address its growth is a “laundromat” by his description** and he said he will raise the issue with his counterparts in Delhi. \\ \hline \multirow{4}{*}{Reuters} & The EU diplomat said that the EU should have a sanctions policy but that the sanctions rules need to be tightened to be more rigorous. \\ \cline{2-3} & EU diplomat had his visa revoked after **he raised concerns he could be targeted by India’s crackdown on foreigners** following an attack on an Indian diplomatic post in Kabul, it emerged on Tuesday. \\ \hline \hline \end{tabular}
\end{table}
Table 3: The language model generations for two example scenarios conditioned on political stances.
would also like to point readers to other types of efficiency in Appendix H, where our model uses only 1% of the baselines' parameter size and incurs a low computational overhead during decoding.
### Transferring a Learned LM-Switch to Another Model
A much-desired property of LM-Switch, owing to its theoretical grounding, is its transferability to other language models. Details and derivations of LM-Switch transfer are in Appendix F; intuitively, we identify a linear mapping \(H\) from target LM word embeddings to source LM word embeddings. The matrix \(H^{\top}WH\) can then be inserted into the target LM as its LM-Switch. Figure 3(a) shows the performance after we transfer the LM-Switch learned on GPT2-large to LMs of other sizes, ranging from GPT2 (124M) to GPT-J-6B (6B). We see a uniform improvement across transferred LM-Switches, with GPT2 and GPT2-medium achieving scores (0.307 and 0.308) similar to the best baseline (DExperts).
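As a concrete illustration of this recipe, the sketch below fits \(H\) by least squares between the two embedding matrices, assuming their rows are aligned over a shared vocabulary; this is our reading of the procedure, not the verbatim implementation from Appendix F.

```python
import numpy as np

def transfer_switch(E_src, E_tgt, W_src):
    """E_src: (V, d_s) source embeddings; E_tgt: (V, d_t) target embeddings
    (rows aligned over a shared vocabulary); W_src: (d_s, d_s) learned switch."""
    # Fit H with H @ e_tgt ~= e_src, i.e. solve E_tgt @ H.T ~= E_src in least squares.
    Ht, *_ = np.linalg.lstsq(E_tgt, E_src, rcond=None)  # Ht has shape (d_t, d_s)
    H = Ht.T                                            # (d_s, d_t)
    return H.T @ W_src @ H                              # (d_t, d_t) transferred switch
```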
### Interpretability
Finally, we investigate the parameters of LM-Switch and how they correlate with LM word embeddings. This study provides a lens through which to examine the connection between conditions and word choices. In the detoxification experiment, we conduct an SVD decomposition of the learned \(W\). Among the factors \(S,V,D\), the \(D\) component can be interpreted as a ranked list of the most "magnified" row dimensions in the transformation \(W\). We then take its first 9 rows and list the most influenced words in Table 4. Dimensions 2, 4 and 6 are filtered out as they only match non-English tokens. Although offensive to read, this table helps us understand what kinds of words are most related to toxicity and are thus suppressed by LM-Switch during generation. More details are explained in Appendix G.
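A minimal sketch of this inspection, assuming access to the learned \(W\), the word-embedding matrix `E` and the tokenizer vocabulary; the exact scoring used in Appendix G may differ.

```python
import numpy as np

def most_influenced_words(W, E, vocab, n_dims=9, top_k=12):
    """W: (d, d) learned switch; E: (V, d) word embeddings; vocab: V token strings."""
    _, _, D = np.linalg.svd(W)           # rows of D: most "magnified" directions of W
    for i in range(n_dims):
        scores = E @ D[i]                # projection of each word onto direction i
        top = np.argsort(-np.abs(scores))[:top_k]
        print(i, [vocab[j] for j in top])
```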
## 6 Conclusions and Future Work
In this work, we show the promise and efficacy of LM-Switch, a theoretically grounded, simple and lightweight approach for the conditioning of generative language models. Leveraging insights from Hidden Markov Models and their relationship with language models, LM-Switch can model diverse tasks and achieve comparable or superior performance to baselines in language model detoxification and generation control. It is particularly notable that LM-Switch requires significantly fewer parameters and less decoding time, allows for continuous and compositional control, and can be transferred to other language models. We show that LM-Switch can also be used for interpreting the wording associated with conditions. For future research, it is worth examining the boundary of LM-Switch: what can and cannot be modelled, and why. Additionally, the theoretical relation between LM-Switch and other techniques such as prompt engineering and prompt tuning is worth further study.
#### Limitations
One limitation of LM-Switch is that it works on word embeddings and focuses on conditions related to wording. This restricts its capability to deal with more complex tasks, such as syntactic trees or persuasive techniques that involve logical reasoning. Additionally, our model is dependent on word embeddings, so the model cannot work with language model APIs that do not provide direct access to these embeddings.
\begin{table}
\begin{tabular}{c l} \hline \hline
**Dim.** & **Matched Words** \\ \hline
0 & mor, bigot, Stupid, retarded, coward, stupid, loser, clown, dumb, Dumb, losers, stupidity, garbage \\ \hline
1 & stupid, idiot, Stupid, idiots, jerk, pathetic, suck, buff, stupidity, mor, damn, ignorant, fools, dumb \\ \hline
3 & idiot, godd, damn \\ \hline
5 & Balk, lur, looms, hides, shadows, Whites, slippery, winds \\ \hline
7 & bullshit, fiat, shit, lies, injust, manipulation \\ \hline
8 & disabled, inactive, whip, emo, partisan, spew, bombed, disconnected, gun, failing, Republicans \\ \hline \hline \end{tabular}
\end{table}
Table 4: Detected tokens that are most influenced by LM-Switch on the detoxification task.
## Acknowledgements
This work was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004, AIDA Program No. FA8750-18-2-0014, SemaFor Program No. HR001120C0123, INCAS Program No. HR001121C0165 and MIPS Program No. HR00112290105. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
|
2302.01025 | Semantic Coherence Markers for the Early Diagnosis of the Alzheimer
Disease | In this work we explore how language models can be employed to analyze
language and discriminate between mentally impaired and healthy subjects
through the perplexity metric. Perplexity was originally conceived as an
information-theoretic measure to assess how much a given language model is
suited to predict a text sequence or, equivalently, how much a word sequence
fits into a specific language model. We carried out an extensive
experimentation with the publicly available data, and employed language models
as diverse as N-grams, from 2-grams to 5-grams, and GPT-2, a transformer-based
language model. We investigated whether perplexity scores may be used to
discriminate between the transcripts of healthy subjects and subjects suffering
from Alzheimer Disease (AD). Our best performing models achieved full accuracy
and F-score (1.00 in both precision/specificity and recall/sensitivity) in
categorizing subjects from both the AD class and control subjects. These
results suggest that perplexity can be a valuable analytical metric with
potential application to supporting early diagnosis of symptoms of mental
disorders. | Davide Colla, Matteo Delsanto, Marco Agosto, Benedetto Vitiello, Daniele Paolo Radicioni | 2023-02-02T11:40:16Z | http://arxiv.org/abs/2302.01025v1 | # Semantic Coherence Markers
###### Abstract
In this work we explore how language models can be employed to analyze language and discriminate between mentally impaired and healthy subjects through the perplexity metric. Perplexity was originally conceived as an information-theoretic measure to assess how much a given language model is suited to predict a text sequence or, equivalently, how much a word sequence fits into a specific language model. We carried out extensive experimentation with the publicly available data, and employed language models as diverse as N-grams (from 2-grams to 5-grams) and GPT-2, a transformer-based language model. We investigated whether perplexity scores may be used to discriminate between the transcripts of healthy subjects and subjects suffering from Alzheimer Disease (AD). Our best performing models achieved full accuracy and F-score (\(1.00\) in both precision/specificity and recall/sensitivity) in categorizing subjects from both the AD class and control subjects. These results suggest that perplexity can be a valuable analytical metric with potential application to supporting early diagnosis of symptoms of mental disorders.
diagnosis of dementia, perplexity, automatic language analysis, language models, early diagnosis, mental and cognitive disorders
## 1 Introduction
_This paper is the (significantly) abridged version of the article "Semantic coherence markers: The contribution of perplexity metrics" ([https://doi.org/10.1016/j.artmed.2022.102393](https://doi.org/10.1016/j.artmed.2022.102393), [1]), which also contains references to employed data and to the implementation of the described work._
In economically developed societies the burden of mental disturbances is becoming more evident, with negative impacts on people's daily lives and huge costs for health systems. Whereas for many psychotic disorders no cures have been found yet, the treatment of people at high risk of developing schizophrenia or related psychotic disorders is acknowledged to benefit from early detection and intervention [2]. To this end, a central role might be played by approaches aimed at analyzing thought and communication patterns in order to identify early symptoms of mental disorder [3].
The analysis of human language has recently emerged as a research field that may be helpful for diagnosing and treating mental illnesses. Recent advances in NLP technologies
allow accurate language models (LMs) to be developed. These can be thought of as probability distributions over text sequences, which can be used to estimate to what extent a text is coherent with (or, more precisely, predictable through) such language models. In order to measure the distance between an actual sequence of tokens and the probability distribution, we propose using _perplexity_, a metric that is well known in the literature for the intrinsic evaluation of LMs. In this work we report results on a simple experiment, aimed at assessing whether perplexity can help in discriminating healthy subjects from people suffering from mental disorders.
Although perplexity is not new in the literature as a tool to compare the language of healthy and diagnosed subjects, we report experimental results that compare favorably with those in the literature. Moreover, as far as we know, no previous work has compared perplexity scores computed through LMs as diverse as GPT-\(2\) and N-grams for the purpose of discriminating healthy subjects from subjects afflicted by Alzheimer Disease. This difference has practical consequences for applications, mostly due to the different computational effort required both to train and to employ such models, and to the descriptive power of the learned models.
## 2 Related Work
In the last decade, advances in NLP techniques have allowed the construction of approaches to automatically deal with tasks such as linguistic analysis and production, including also many of the aforementioned linguistic levels. These approaches have identified markers that can help differentiate patients with psychiatric disorders from healthy controls, and predict the onset of psychiatric disturbances in high risk groups at the level of the individual patient.
Although originally conceived to assess how well language models are able to model previously unseen data, perplexity can be used to compare (and discriminate between) text sequences produced by healthy subjects or by people suffering from language-related disturbances. As a first intuition, perplexity is a positive number that, given a language model and a word sequence, expresses how unlikely it is for the model to generate that given sequence. A richer description of perplexity is provided in Section 3. In [4], N-grams of part-of-speech (POS) tags were employed to identify patterns at the syntactic level. Then, two LMs were acquired (one from patients' data and the other from data from healthy controls): the categorization of a new, unseen (that is, not belonging to either set of training data) sample was then performed through the perplexity computed with the two LMs over the sample. The considered sample was then categorized as produced by a healthy subject (patient) if the LM acquired from healthy subjects' (patients') data attained a smaller perplexity than the other language model. Perplexity has recently been proposed as an indicator of cognitive deterioration [5]; more specifically, the content complexity in spoken language has been recorded in physiological aging and at the onset of Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the basis of interview transcripts. The LMs used in this research were built by exploiting 1-gram and 2-gram information; as illustrated in the next section (please refer to Equation 2), such models differ in the amount of surrounding information employed. Perplexity scores were computed on a ten-fold cross-validation basis, whereby participants' transcripts were partitioned into ten parts; a model was then built by using nine parts and was tested on the tenth. This procedure was repeated ten times so that each portion of text was used exactly once as the test set. Four examination
waves with an observation interval of more than 20 years were performed, and the perplexity scores of transcriptions dating to the beginning of the experiment were found to correlate with the scores from the dementia screening instrument in participants that later developed MCI/AD.
Perplexity has been employed as a predictor for Alzheimer Disease (AD) in the analysis of transcriptions from DementiaBank's Pitt Corpus, which contains data from both healthy controls and AD patients [6]. More precisely, in [7] two neural language models, based on LSTMs, were acquired: one built on the healthy controls and the other trained on patients belonging to the dementia group. A leave-one-speaker-out cross-validation was devised and, according to this setting, a language model \(\mathcal{M}_{-s}\) was created for each speaker \(s\) by using all transcripts from the speaker's group except those of \(s\). Data from speaker \(s\) was then tested both on \(\mathcal{M}_{-s}\), thus providing a perplexity score \(p_{own}\), and on the language model built upon the transcripts from the whole group to which the speaker did not belong, thus obtaining the perplexity score \(p_{other}\). The difference between the perplexity scores \(\Delta_{s}=p_{own}-p_{other}\) was computed as a description for the speaker \(s\). The classification of each speaker was then performed by setting a threshold ensuring that both groups obtained an equal error rate. The authors achieved \(85.6\%\) accuracy on \(499\) transcriptions, and showed that perplexity can also be exploited to predict a patient's Mini-Mental State Examination (MMSE) scores. The approach adopted in that work is the closest to ours that we could find in the literature; however, it also differs from ours in some respects. First, we investigated how reliable perplexity is in assessing the language of healthy subjects. That is, we analyzed how perplexity scores vary within the same individual, as an initial step toward assessing whether perplexity is suitable for examining text excerpts/transcripts that (as in the case of the Pitt Corpus) were collected through multiple interviews and tests spanning over years. Additionally, we were concerned with evaluating all excerpts from a single individual to predict the AD diagnosis at the subject level, rather than with predicting the class of each and every transcript. In order to assess perplexity as a tool to support the diagnosis, we analyzed only data from subjects for which at least two transcripts were available.
Following the approach presented in [7], perplexity has been further investigated for the categorization of healthy subjects and AD patients [8]. In particular, different LMs have been acquired on both control and AD subjects' transcriptions from the Pitt Corpus [6]. Such LMs have been employed to evaluate to what extent differences in perplexity scores reflect deficits in language use. Our approach differs from this one. Firstly, we explored two different sorts of LMs (N-grams and GPT-2 models, fine-tuned with \(5\), \(10\), \(20\) and \(30\) epochs) so as to collect experimental evidence on the level of accuracy attained by different LMs used to compute the perplexity scores. Secondly, four different decision rules were compared, based on average perplexity scores from control and impaired subjects, along with their respective standard deviations. Moreover, while in [8] the categorization is performed at the transcript level, our focus is on the categorization of subjects.
## 3 Background on Perplexity
Most approaches rely on a simple yet powerful descriptive (and predictive) theoretical framework which is known as _distributional hypothesis_. The distributional hypothesis states that words that
occur in similar contexts tend to convey similar meanings [9]. Several techniques may be devised to acquire the distributional profiles of terms, usually in the form of dense unit vectors of real numbers over a continuous, high-dimensional Euclidean space. In this setting each word can be described through a vector, and each such vector can be mapped onto a multidimensional space where distance (such as, e.g., the Euclidean distance between vectors) acts as a proxy for similarity, and similarity can be interpreted as a metric. As a result, words with similar semantic content are expected to be closer than semantically dissimilar words. Different metrics can be envisaged here to estimate the semantic proximity/distance of words and senses [10].
Language Models (LMs) are a statistical inference tool that allows estimating the probability of a word sequence \(W=\{w_{1},\ldots,w_{k}\}\)[11; 12]. Such probability can be computed as
\[p(W)=\prod_{i=1}^{k}p(w_{i}|w_{1},\ldots,w_{i-1}), \tag{1}\]
which is customarily approximated as
\[p(W)\approx\prod_{i=1}^{k}p(w_{i}|w_{i-N+1},w_{i-N+2},\ldots,w_{i-1}). \tag{2}\]
In the latter case only blocks of a few (exactly \(N\)) words are considered to predict the whole \(W\): we can thus predict the word sequence based on N-grams, that is, blocks of two, three or four preceding elements (bi-grams, tri-grams, four-grams, respectively). In general N-gram models tend to obtain better performance as \(N\) increases, with the drawback of making the estimation of \(P(w_{N}|W_{1,N-1})\) harder. Another issue with these models stems from the fact that, when increasing the context size, it becomes less likely to find sequences of the same length in the training corpus. In order to deal with N-grams not occurring in the training corpus, called out-of-vocabulary N-grams, language models have to add an additional regularization step to allow a non-zero probability to be assigned to previously unseen N-grams [13; 14]. The probabilities assigned by language models are the result of a learning process, in which the model is exposed to a particular kind of textual data. The goal of the learning process is to train the model to predict word sequences that closely resemble the sentences seen during training.
As mentioned, LMs are basically probability distributions over word sequences: perplexity was originally conceived as an intrinsic evaluation tool for LMs, in that it can be used to measure how likely a given input sequence is, given an LM [12]. This measure is defined as follows. Let us consider a word sequence of \(k\) elements, \(W=\{w_{1},\ldots,w_{k}\}\); since we are interested in evaluating the model on unseen data, the test sequence \(W\) must be new, and must not be part of the training set. Given the language model LM, we can compute the probability of the sentence \(W\), that is LM\((W)\). Such a probability would be a natural measure of the quality of the language model itself: the higher the probability, the better the model. The average log probability computed based on the model is defined as
\[\frac{1}{k}\log\prod_{i=1}^{k}\text{LM}(w_{i}|w_{1:i-1})=\frac{1}{k}\sum_{i=1}^{k}\log\text{LM}(w_{i}|w_{1:i-1}),\]
which amounts to the log probability of the whole test sequence \(W\), divided by the number of tokens in sequence. The perplexity of sequence \(W\) given the language model LM is computed as
\[\text{PPL}(\text{LM,W})=\text{exp}\{-\frac{1}{k}\sum_{i=1}^{k}\log\text{LM}(w_{i} |w_{1:i-1})\}. \tag{3}\]
It is now clear why low PPL values (corresponding to high probability values) indicate that the word sequence fits the model well or, equivalently, that the model is able to predict that sequence.
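Equation 3 translates directly into code. A minimal sketch, where `token_logprobs` holds the per-token log-probabilities \(\log\text{LM}(w_{i}|w_{1:i-1})\) of a test sequence:

```python
import math

def perplexity(token_logprobs):
    # PPL(LM, W) = exp(-(1/k) * sum_i log LM(w_i | w_{1:i-1}))
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# e.g., perplexity([-2.1, -0.4, -3.0]) ~= 6.26
```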
Neural language models are language models based on neural networks. Such models improve on the language modeling capabilities of N-grams by exploiting the ability of neural networks to deal with longer histories. Additionally, neural models do not need regularization steps for unseen N-grams, and they address the data sparsity curse of N-grams by employing distributed representations. The predictive power of neural language models is higher than that of N-gram language models given the same training set. Despite the great improvements brought by neural language models on NLP tasks, these models incur higher training times than N-gram language models.
## 4 Experiments
The experimentation presented in this section is concerned with answering one chief question: whether the language of a specific class of subjects, diagnosed as suffering from disorders impacting common linguistic abilities, can be automatically distinguished from that of healthy controls based solely on perplexity. In this experiment we have used the Pitt Corpus, from which we selected the transcripts of responses to the Cookie Theft stimulus picture [15], which includes transcripts from patients with a dementia diagnosis (n = \(194\)) and healthy controls (n = \(99\)).1
Footnote 1: The code for replicating the experiments is available at [https://github.com/davidecolla/semantic_coherence_markers](https://github.com/davidecolla/semantic_coherence_markers) [16].
### Compared LMs
Different experimental setups have been designed in order to compare perplexity as computed by language models acquired by training with two different sorts of architectures: N-grams, and GPT-2.
#### 4.1.1 N-grams
Since N-grams implement the simplest language model with context, where each word is conditioned on the preceding \(N\)-\(1\) tokens only, we adopted N-grams for the first experimental setup. For the sake of clarity we introduce the formalization for Bigrams; such formulation can be further generalized to any \(N\).
We define the probability of a sequence of words \(W_{1,n}=\{w_{1},w_{2},\ldots,w_{n}\}\) as:
\[P(W_{1,n})=\prod_{i=1}^{n}P(w_{i}|w_{i-1}),\]
where the probability of each Bigram is estimated by exploiting Maximum Likelihood Estimation (MLE) [17, Chap. 3].2 According to the MLE, we can estimate the probability of the Bigram \((w_{i-1},w_{i})\) as:
Footnote 2: In this setting, stopwords are customarily not filtered, as providing useful sequential information.
\[P(w_{i}|w_{i-1})=\frac{C(w_{i-1},w_{i})}{C(w_{i-1})} \tag{4}\]
where \(C(w_{i-1},w_{i})\) is the number of occurrences of the Bigram \((w_{i-1},w_{i})\) in the training set, while \(C(w_{i-1})\) counts the occurrences of the word \(w_{i-1}\) alone. It is worth mentioning that training Bigrams on a limited vocabulary may lead to cases of out-of-vocabulary words, i.e., words unseen during the training process. Out-of-vocabulary words pose a problem in calculating the probability of the sentence in which they are involved: in such cases we are not able to compute the probability of the Bigram involving the unknown word, thus undermining the probability of the whole sequence. We addressed the unseen N-grams issue through the interpolated Kneser-Ney smoothing technique, which belongs to the family of interpolation strategies and is based on the absolute discounting technique [14]. In the present setting we experimented with N-grams ranging from \(2\)- to \(5\)-grams; the Kneser-Ney discounting factor \(d\) was set to \(0.1\).3 The vocabulary was closed on each experiment: that is, the N-gram models employed in each experiment were acquired with the vocabulary obtained from the concatenation of the transcripts therein. Since perplexity is bounded by the vocabulary size, fixing the cardinality of the vocabulary allows obtaining comparable perplexity scores from N-gram models trained across different corpora.
Footnote 3: To compute N-grams we exploited the Language Modeling Module (_lm_) package from NLTK version 3.6.1, [https://www.nltk.org/api/nltk.lm.html](https://www.nltk.org/api/nltk.lm.html).
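A minimal sketch of this pipeline with the NLTK `lm` package referenced in the footnote (corpus reading and CHAT pre-processing are omitted; the toy sentences are placeholders for the tokenized transcripts):

```python
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import bigrams

n = 2  # Bigrams; the paper explores N from 2 to 5
train_sents = [["the", "boy", "takes", "a", "cookie"],
               ["the", "girl", "asks", "for", "a", "cookie"]]

train_ngrams, vocab = padded_everygram_pipeline(n, train_sents)
lm = KneserNeyInterpolated(n, discount=0.1)  # Kneser-Ney discounting factor d = 0.1
lm.fit(train_ngrams, vocab)

# The vocabulary is closed per experiment, so test tokens are in-vocabulary.
test = ["the", "boy", "asks", "for", "a", "cookie"]
print(lm.perplexity(bigrams(pad_both_ends(test, n=n))))
```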
#### 4.1.2 Gpt-\(2\)
The second experimental setup that we designed exploits the GPT-\(2\) neural model; in particular, we used the GPT-\(2\) pre-trained model available via the Hugging Face Transformers library.4 In this setting, the input text is preprocessed by the pre-trained tokenizer and grouped into blocks of \(1024\) tokens. The pre-trained model is specialized as a Causal Language Model (CLM) on the input texts, that is, it predicts a word given its left context. Since the average log-likelihood for each token is returned as the loss of the model, the perplexity of a text is computed according to Equation 3.
Footnote 4: [https://huggingface.co/gpt2](https://huggingface.co/gpt2)
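A minimal sketch of the corresponding perplexity computation with the Transformers library (fine-tuning and the grouping into 1024-token blocks are omitted):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # or a fine-tuned CLM checkpoint
model.eval()

def gpt2_perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # average per-token negative log-likelihood
    return torch.exp(loss).item()           # Equation 3
```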
### Evaluation of the PPL-Based Categorization
While the reliability of PPL has been extensively investigated in [1], we presently investigate whether perplexity scores on the speech transcripts allow discriminating patients from healthy controls. Publicly available data from the Pitt Corpus were used.5 These data were
gathered as part of a larger protocol administered by the Alzheimer and Related Dementias Study at the University of Pittsburgh School of Medicine [6]. In particular, we selected the descriptions provided to the Cookie Theft picture, which is a popular test used by speech-language pathologists to assess expository discourse in subjects with disorders such as dementia.
#### 4.2.1 Materials
The dataset is composed of \(552\) files arranged into Control (\(243\) items) and Dementia (\(309\) items) directories. These correspond to multiple interviews with \(99\) control subjects and \(219\) subjects with a dementia diagnosis. The text documents herein were transcribed according to the CHAT format,6 so we pre-processed such documents to extract plain text. In so doing, the original text was to some extent simplified: e.g., pauses were disregarded, as were hesitation phenomena, which were not consistently annotated [18, 19].
Footnote 6: [https://talkbank.org/manuals/CHAT.pdf](https://talkbank.org/manuals/CHAT.pdf).
In order to collect enough text to be analyzed, we dropped the interviews of subjects who participated in only one interview. We ended up with material relative to \(74\) control subjects (for which \(218\) transcripts were collected overall), and to \(77\) subjects with a dementia diagnosis (\(192\) transcripts overall).
The statistics describing the number of tokens, the number of unique tokens and the type-token ratio for the transcripts employed in Experiment 3 are presented in Table 1.
#### 4.2.2 Procedure
This experiment is aimed at testing the discriminative power of perplexity scores: more specifically, we tested a simple categorization algorithm to discriminate between mentally impaired and healthy subjects. We adopted the experimental setup from the work in [7]: two language models \(LM_{C}\) and \(LM_{AD}\) were acquired by employing all transcripts from the Control and Alzheimer's disease groups, respectively. Such models are supposed to grasp the main linguistic traits of both groups' speech, thus representing the typical language adopted by subjects belonging to the Control and AD classes. For both groups we adopted a leave-one-subject-out setting, whereby language models were refined with files from all subjects within the same group except for one, which was used for testing. For each subject \(s\) we acquired the model \(LM_{s}\) on the transcripts from the same group as \(s\), except for those of the subject \(s\). Each transcript in the corpus was then characterized by two perplexity scores, \(P_{C}\) and \(P_{AD}\), expressing the scores obtained through the language models acquired on the Control and AD groups, respectively. More
\begin{table}
\begin{tabular}{l||c|c|c|c|c} Class & AVG Tokens & AVG Unique Tokens & Participants & Transcripts & TTR \\ \hline Control & \(437\) & \(26\) & \(74\) & \(218\) & \(0.07\) \\ Alzheimer’s Disease & \(409\) & \(25\) & \(77\) & \(192\) & \(0.08\) \\ \end{tabular}
\end{table}
Table 1: Statistics describing the transcripts employed in Experiment 3. For each class we report the average number of tokens per interview, the average number of unique tokens per interview, the number of participants, the overall number of transcripts and the type-token ratio (TTR).
precisely, if a subject \(s\) was a member of the AD class, the scores \(P_{C}\) for her/his transcripts were obtained through \(LM_{C}\), while the scores \(P_{AD}\) were computed by exploiting \(LM_{s}\). Vice versa, if the subject \(s\) was from the Control group, the scores \(P_{C}\) for her/his transcripts were obtained through \(LM_{s}\), while the scores \(P_{AD}\) were computed by exploiting \(LM_{AD}\). Additionally, since we were interested in studying the scores featuring each subject, we synthesized the perplexity scores \(P_{C}\) and \(P_{AD}\) of each subject through the average of her/his transcripts' scores, thus obtaining \(\overline{P}_{C}\) and \(\overline{P}_{AD}\).
In order to discriminate AD patients from healthy subjects, we adopted a threshold-based classification strategy. Three different approaches were explored to estimate such threshold:
1. in the first setting we used the average perplexity scores characterizing all control subjects employed in the training process;
2. in the second setting we computed the threshold as the average perplexity score of all the subjects belonging to the AD class;
3. in the third setting we estimated two different thresholds by exploiting the difference \(\overline{P}_{AD}-\overline{P}_{C}\), by initially following the approach reported in [7] and [8].
For each subject, the threshold estimation process was carried out in a leave-one-subject-out setting, and repeated for the three approaches (i) to (iii). In the first setting, the threshold was estimated on all the subjects from the control group except for the test subject \(s\): if \(s\) was from the healthy controls group, we computed the threshold as the average of the \(\overline{P}_{C}\) scores of all subjects in the control group except for \(s\). If the perplexity score \(\overline{P}_{C}\) for the subject \(s\) was higher than the healthy controls threshold, we marked the subject as suffering from AD, and as healthy otherwise. Similarly, in the second setting we computed the threshold as the average of the \(\overline{P}_{AD}\) scores of all subjects in the AD group except for \(s\). If the perplexity score \(\overline{P}_{AD}\) for the subject \(s\) was higher than the AD-class threshold, we marked the subject as healthy, and as suffering from AD otherwise. The rationale underlying the first two settings is that each subject may be characterized more accurately by LMs acquired on transcripts from the same group: in other words, we expected control (AD) subjects to obtain lower perplexity scores than subjects belonging to the other class with LMs trained or fine-tuned on transcripts from control (AD) subjects.
Following the literature, in the third setting we characterized each subject with the difference \(D=\overline{P}_{AD}-\overline{P}_{C}\). We defined two thresholds: \(\overline{D}_{AD}\), computed as the average of all the difference scores from patients in the AD group, and \(\overline{D}_{C}\), defined as the average of all the difference scores from healthy controls. In both cases we considered all the patients belonging to the group except for the test subject \(s\) (\(s\) was held out with the sole purpose of ruling out her/his contribution from \(\overline{D}_{AD}\) or \(\overline{D}_{C}\)). Differently from the literature, where an equal error rate is used, we employ \(\overline{D}_{AD}\) and \(\overline{D}_{C}\) as compact descriptors for the classes \(AD\) and \(C\), respectively. The rationale underlying this categorization schema is that a subject is associated with the class that exhibits the perplexity score most similar to her/his own. We categorize a subject \(s\) by choosing the class associated with the threshold (either \(\overline{D}_{AD}\) or \(\overline{D}_{C}\)) featuring the smallest margin with respect to the \(D\) value associated with the subject \(s\), according to the following formula:
\[\operatorname{class}(s)=\operatorname*{argmin}_{x\in\{C,AD\}}\big{|}\,D- \overline{D}_{x}\,\big{|}\,. \tag{5}\]
This setting (involving \(\overline{D}_{AD}\) and \(\overline{D}_{C}\)) will be referred to as \(\overline{D}\).
Furthermore, we refined the decision rule \(\overline{D}\) to account for standard deviation information. Together with the averages \(\overline{D}_{AD}\) and \(\overline{D}_{C}\), we also computed \(\sigma_{AD}\) and \(\sigma_{C}\) as the standard deviations of the difference scores \(D\) for the impaired and control groups. We explored the \(3\sigma\) rule, a popular heuristic in the empirical sciences: it states that in populations assumed to be described by a normally distributed random variable, over \(99.7\%\) of the values lie within three standard deviations of the mean, \(95.5\%\) within two standard deviations, and \(68.3\%\) within one standard deviation [20]. On this basis we explored the three options of adding \(1\), \(2\) and \(3\) standard deviations to the average scores: the best results were obtained by employing \(2\) standard deviations. Our thresholds were then refined as follows:
\[\overline{D}_{AD}^{*}=\overline{D}_{AD}+2\cdot\sigma_{AD},\qquad\overline{D}_{C}^{*}=\overline{D}_{C}-2\cdot\sigma_{C}.\]
The updated decision rule for categorization was then reshaped as
\[\operatorname{class}(s)=\operatorname*{argmin}_{x\in\{C,AD\}}\Big{|}\,D- \overline{D}_{x}^{*}\,\Big{|}\,. \tag{6}\]
This setting, involving \(\overline{D}_{AD}^{*}\) and \(\overline{D}_{C}^{*}\), will be referred to as \(\overline{D}^{*}\).
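A minimal sketch of the resulting decision rules (Eqs. 5 and 6), assuming the leave-one-subject-out difference scores have already been computed:

```python
import numpy as np

def classify_subject(D_s, D_scores_AD, D_scores_C, refined=True):
    """D_s: difference score of the test subject; D_scores_AD / D_scores_C:
    D scores of the remaining AD / control subjects (test subject held out)."""
    t_AD, t_C = np.mean(D_scores_AD), np.mean(D_scores_C)
    if refined:  # the D* rule: shift each threshold by 2 standard deviations
        t_AD += 2 * np.std(D_scores_AD)
        t_C -= 2 * np.std(D_scores_C)
    return "AD" if abs(D_s - t_AD) < abs(D_s - t_C) else "C"
```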
A twofold experimental setting was devised, including experiments with N-grams and GPT-\(2\), adopting a window size of \(20\) in order to handle shorter text samples (the shortest text in the training data contains only \(23\) tokens). In the case of N-grams, the models were acquired for \(2\)-grams to \(5\)-grams; the GPT-\(2\) model was fine-tuned for \(5\), \(10\), \(20\) and \(30\) epochs.
#### 4.2.3 Evaluation Metrics
To evaluate the results we adopted the Precision and Recall metrics (specificity and sensitivity), along with their harmonic mean, the F1 score, and accuracy. Precision (specificity) is defined as \(P=\frac{TP}{TP+FP}\), while Recall (sensitivity) is defined as \(R=\frac{TP}{TP+FN}\). While precision provides an estimation of how precise a categorization system is, recall indicates how many results were identified out of all the possible ones. The \(F_{1}\) measure is then used to provide a synthetic value of Precision and Recall, whereby the two measures are evenly weighted through their harmonic mean: \(F_{1}=2\cdot\frac{P\cdot R}{P+R}\).
Accuracy was computed as \(ACC=\frac{TP+TN}{P+N}\), that is as the fraction of correct predictions (the sum of TP and TN) over the total number of records examined (the sum of positives and negatives, P and N).
Finally, in order to record a synthetic index to assess accuracy and F1 scores on the two groups at stake, we used the harmonic mean among these three values. It was computed as
\[\operatorname{HM}(\operatorname{Acc.},\operatorname{F1}_{AD},\operatorname{ F1}_{C})=\frac{n}{\sum\limits_{i=1}^{n}\frac{1}{x_{i}}}=\left(\frac{\sum \limits_{i=1}^{n}x_{i}^{-1}}{n}\right)^{-1}.\]
where \(n\) was set to the number of \(x_{i}\) values being averaged.
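In code, this synthetic index amounts to:

```python
def harmonic_mean(*xs):
    # aggregates accuracy, F1 on the AD class and F1 on the control class
    return len(xs) / sum(1.0 / x for x in xs)

# e.g., harmonic_mean(0.93, 0.92, 0.93) ~= 0.93
```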
#### 4.2.4 Results
The overall accuracy scores are presented in Figure 1, while detailed figures across different experimental conditions are presented in Table 3, in A.
Let us start by reporting the results from the N-gram models. The overall most effective strategy is \(\overline{D}^{*}\) (Eq. 6), based on a threshold using the difference between AD patients and healthy controls, extended with the \(3\sigma\) rule. The best performing model is based on Bigrams, and obtained \(.93\) accuracy, \(.92\) F1 score on the AD class, and \(.93\) F1 score on the C class. The models employing PPL scores from the control group (indicated as \(\overline{P}_{C}\) in Figure 1 and in Table 3) obtained the lowest accuracy scores in all conditions, well below random guessing, while the accuracy yielded by the \(\overline{P}_{AD}\) strategy is always above \(.5\). In general we observe that increasing the length of the Markovian assumption reduces the accuracy of N-gram models for all decision rules (employing more context seems to be slightly detrimental for such models), with the
Figure 1: Plot of the accuracy scores for the third experiment on the categorization of AD/control subjects. The histograms in the top sub-figure show the accuracy on N-grams, while the histograms at the bottom report results obtained through GPT-2 models. Different colors correspond to N-gram of differing order and to different fine-tuning epochs, respectively. The histograms illustrate the scores obtained through \(\overline{D}^{*}\), \(\overline{D}\), \(\overline{P}_{C}\) and \(\overline{P}_{AD}\) decision rules, respectively.
exception of the \(\overline{D}\) strategy.
The results obtained by the GPT-2 models reveal overall higher accuracy, ranging from \(.71\) for the best model acquired with \(5\) epochs of fine-tuning to \(1.00\) for all further fine-tuning steps. The same profile describes the F1 scores recorded on the sub-tasks focused on AD and control subjects, respectively, varying from around \(0.69\) for the best model acquired with \(5\) epochs of fine-tuning (\(\overline{D}\) strategy on the AD class) to \(1.00\) for all other models and sub-tasks. If we consider the efficacy of the thresholding strategies and associated decision rules, the refined difference rule \(\overline{D}\) is the best performing strategy for GPT-2-based models, as witnessed by the rightmost column in Table 3. Such scores report the harmonic mean among accuracy, the F1 score on the categorization of AD subjects and that on the categorization of control subjects. A compact view on data from the same column is provided in Table 2, illustrating the best strategy for each model at stake.
To frame our results with respect to the literature, let us start from the accuracy of the baseline clinical diagnosis obtained in the first version of the study by Becker and colleagues [6]: it was \(86\%\), and after considering follow-up clinical data this figure rose to \(91.4\%\), with \(0.988\) sensitivity and \(0.983\) specificity. This is what subsequent literature considered the gold standard against which to compare experimental outputs. We recall that such data are particularly relevant, as the human evaluation included various analytical steps, such as medical and neurologic history and examination, a semistructured psychiatric interview, and neuropsychological assessments. Experimental results provided in subsequent work approach those ratings by employing solely transcripts of descriptions of a rather simple picture. A relevant work attained \(85.6\%\) accuracy through LSTM-based models [7] in the categorization of individual transcripts. Such results were then replicated and improved in the work by [8], where the best reported model experimentally obtained a \(0.872\) accuracy.
### General Discussion
Given that our experimental results seem to outperform the accuracy scores reported in the literature, we realized that a short, controlled elicitation task can potentially outperform natural linguistic data obtained from speakers. The quality of our results needs to be checked in different
\begin{table}
\begin{tabular}{c|c|c} N-gram models & categorization strategy & mean HM score \\ \hline
2-grams & \(\overline{D}^{*}\) & \(0.93\) \\
3-grams & \(\overline{D}^{*}\) & \(0.91\) \\
4-grams & \(\overline{D}^{*}\) & \(0.89\) \\
5-grams & \(\overline{D}^{*}\) & \(0.89\) \\ \hline GPT-2 models: epochs & categorization strategy & mean HM score \\ \hline
5 epochs & \(\overline{D}\) & \(0.71\) \\
10 epochs & \(\overline{D},\overline{D}^{*}\) & \(1.00\) \\
20 epochs & \(\overline{D},\overline{D}^{*}\) & \(1.00\) \\
30 epochs & \(\overline{D},\overline{D}^{*}\) & \(1.00\) \\ \end{tabular}
\end{table}
Table 2: Study to compare the effectiveness of the thresholding and categorization strategies for each LM. The top scoring strategy is reported for each model.
settings (further languages, varied experimental conditions: much experimental work thus still needs to be done), but this fact provides evidence that specialists may be effectively assisted by systems employing a technology based on language models and perplexity scores. Also, by comparing language models as different as N-grams and models based on the more recent GPT-\(2\), we observed that Bigrams outperform a GPT-\(2\) model fine-tuned for \(5\) epochs. This fact may provide insights into the possible trade-off between accuracy of the results and computation time and costs.
While perplexity proved to be overall a viable tool to investigate human language, we found consistent differences in the outputs of the models at stake, mostly stemming from intrinsic properties of the LMs, from the amount of context considered by the models, from the size of available training data, and from the amount of training employed to refine models themselves. One first datum is that even though N-grams can be hardly compared to GPT-\(2\)-based models, nonetheless it may be helpful trying to discern the scenarios in which such models provide better results. It was somehow surprising that in our experiment the accuracy level attained by the best-performing N-gram model (\(2\)-grams) achieved a \(0.93\) harmonic mean improving on the best GPT-\(2\)-based model (HM\(=\)\(0.73\); please refer to Table \(3\)), fine tuned for \(5\) epochs and employing the \(\overline{D}\) decision rule.
This result may be understood in the light of the rather regular language used for the descriptions of the Cookie Theft picture, which thereby turned out to be less demanding for the N-gram LMs. In this respect, a lesson learned is that N-grams can be employed in scenarios where the task is less difficult on lexical and linguistic accounts: in some instances of such problems, adopting N-gram models may be convenient (considering both training and testing efforts) with respect to the more complete and computationally expensive Transformer models. A few figures may be useful to complete this note on the trade-off between accuracy and computational effort. Our experiments were performed on machinery provided by the Competence Centre for Scientific Computing [21]. In particular, we exploited nodes with 2x Intel Xeon Processor E5-2680 v3 and 128GB memory. The reported experiments took around \(8\) hours for each GPT-\(2\) setting and about \(12\) minutes for all the N-gram models.
## 5 Conclusions
The study reported in this work explored how suited perplexity is to support automatic linguistic analysis for clinical diagnoses. The diagnosis of dementia is a complex process that is long and labor intensive, involving a neuropsychiatric evaluation that includes medical and neurologic history and examination, a semistructured psychiatric interview, and neuropsychological assessments [22, 23]. Being able to define a linguistic marker to detect symptoms of mental disorders would thus provide clinicians with automatic procedures for language analysis that can contribute to the early diagnosis and treatment of mental illnesses in an efficient and noninvasive fashion. We thus addressed one basic research issue: whether and to what extent perplexity scores allow categorizing transcripts of healthy subjects and subjects suffering from Alzheimer Disease (AD). In this experiment we used a publicly available dataset, the Pitt Corpus. A widely varied experimental setting was designed to investigate the predictive and discriminative power of perplexity scores, and to assess how the resulting categorization accuracy varies as a function
of the amount of training/fine-tuning employed to acquire the LMs. We compared N-gram models (with \(N\) equal to 2, 3, 4 and 5), GPT-2 models fine-tuned for \(0\) to \(30\) epochs, and four different thresholding strategies as well. Novel thresholds were proposed and compared to those reported in the literature: the newly proposed categorization strategies ensure consistent improvement over state-of-the-art results.
A final remark relates to an outlook on future work. Different language models can attain results of possibly analogous accuracy with a fraction of the training/fine-tuning effort: e.g., we conducted preliminary tests, not reported here for brevity, also on LSTMs, which revealed poor performance paired with a computational load higher than that of the GPT-\(2\) architecture.7 Also, different categorization algorithms may be adopted to discriminate patients from control subjects; refinements to both the employed LMs and the overall categorization strategy may result in substantial improvements. Yet, further experiments are needed to assess perplexity on larger samples, and on different sorts of spoken language: as mentioned, the language required to comment on the Cookie Theft picture is quite regular. A richer, fuller characterization of the discriminative power of perplexity scores will also involve experimenting on different languages, with the associated language models.
Footnote 7: More specifically, perplexity scores computed through LSTMs were highly volatile (with standard deviation values often exceeding mean perplexity values), even when increasing the number of training epochs, which required almost twice the time necessary to train the GPT-\(2\) base model.
However, the findings from this proof-of-concept study have several implications: when predicting whether the author of a transcript was afflicted by dementia or was a healthy subject, we obtained valuable results, especially considering that our predictions were based solely on perplexity scores, i.e., on a substantial reduction of the information with respect to the clinical evidence collected throughout the diagnostic steps employed by human experts facing the same categorization task [6].
|
2310.09668 | Beyond Testers' Biases: Guiding Model Testing with Knowledge Bases using
LLMs | Current model testing work has mostly focused on creating test cases.
Identifying what to test is a step that is largely ignored and poorly
supported. We propose Weaver, an interactive tool that supports requirements
elicitation for guiding model testing. Weaver uses large language models to
generate knowledge bases and recommends concepts from them interactively,
allowing testers to elicit requirements for further testing. Weaver provides
rich external knowledge to testers and encourages testers to systematically
explore diverse concepts beyond their own biases. In a user study, we show that
both NLP experts and non-experts identified more, as well as more diverse
concepts worth testing when using Weaver. Collectively, they found more than
200 failing test cases for stance detection with zero-shot ChatGPT. Our case
studies further show that Weaver can help practitioners test models in
real-world settings, where developers define more nuanced application scenarios
(e.g., code understanding and transcript summarization) using LLMs. | Chenyang Yang, Rishabh Rustogi, Rachel Brower-Sinning, Grace A. Lewis, Christian Kästner, Tongshuang Wu | 2023-10-14T21:24:03Z | http://arxiv.org/abs/2310.09668v1 | # Beyond Testers' Biases:
###### Abstract
Current model testing work has mostly focused on creating test cases. Identifying what to test is a step that is largely ignored and poorly supported. We propose Weaver, an interactive tool that supports requirements elicitation for guiding model testing.1 Weaver uses large language models to generate knowledge bases and recommends concepts from them interactively, allowing testers to elicit requirements for further testing. Weaver provides rich external knowledge to testers, and encourages testers to systematically explore diverse concepts beyond their own biases. In a user study, we show that both NLP experts and non-experts identified more, as well as more diverse concepts worth testing when using Weaver. Collectively, they found more than 200 failing test cases for stance detection with zero-shot ChatGPT. Our case studies further show that Weaver can help practitioners test models in real-world settings, where developers define more nuanced application scenarios (e.g., code understanding and transcript summarization) using LLMs.
Footnote 1: Weaver is available open-source at [https://github.com/malusamayo/Weaver](https://github.com/malusamayo/Weaver).
## 1 Introduction
Despite being increasingly deployed in real-world products, ML models still suffer from falsehoods Maynez et al. (2020), biases Shah et al. (2020), and shortcuts Geirhos et al. (2020), leading to usability, fairness, and safety issues in those products Liang et al. (2023); Nahar et al. (2023). For example, toxicity detection models are used by social media platforms to flag or remove harmful content, but their biases amplify harm against minority groups Sap et al. (2019). As standard benchmarks are often too coarse to expose these issues, recent work has proposed to test nuanced behaviors of ML models Ribeiro et al. (2020); Goel et al. (2021); Ribeiro and Lundberg (2022).
Such model testing of nuanced behaviors usually requires _translating behavior expectations into test cases (input-output pairs)_. To enable such test case creation, prior work has taken inspiration from the long-established _software testing_ approaches: For example, in their CheckList framework, Ribeiro et al. (2020) used _templates_ to form minimal functionality test cases, which was inspired by _unit tests_. Morris et al. (2020)'s work on editing inputs (e.g., synonym swap) for testing model invariances is akin to _metamorphic testing_. To enable testing generative models, Ribeiro (2023) have also explored specifying properties that any correct output should follow, similar to _property-based testing_.
However, prior work has focused on _how to write test cases_, not _what tests to write_. In software engineering, tests are fundamentally grounded in requirements and design work, as commonly expressed in the V-model (Sommerville, 2015; Figure 1). Ideally, each test can be traced back to a requirement and all requirements are tested. Software engineering research has long established the importance of requirements for development _and_ testing, and studied many approaches for _requirements elicitation_ Van Lamsweerde (2009).
In comparison, little work has explicitly supported identifying _what to test_ in model testing. Researchers and practitioners seem to rely mostly
Figure 1: The V-model Sommerville (2015), a widely used development process in many fields of engineering, adapted for testing models within a system. This layering and planning is also compatible with more agile development approaches, where the different activities may be iterated in different ways but are still linked.
on intuition and generic domain knowledge McCoy et al. (2019); Dhole et al. (2021), or debug a small set of issues initially identified through _error analysis_Naik et al. (2018); Wu et al. (2019). Such approaches are often shaped heavily by individual knowledge and biases Rastogi et al. (2023); Lam et al. (2023), driving practitioners to focus on local areas of a few related concepts where they find problems, while neglecting the larger space Ribeiro and Lundberg (2022), as exemplified in Figure 2. For example, to test toxicity detection models, practitioners may identify and test (1) _racism_ with handcrafted test cases and (2) _robustness to paraphrasing_ with CheckList. However, they are likely to miss many concepts in the space (e.g., _spreading misinformation_ as a way to spread toxicity) if they have never seen or worked on similar aspects, as often observed in the fairness literature Holstein et al. (2019).
In this work, we contribute **the concept and method of requirements engineering** (_"what to test"_) for **improving model testing**. We connect testing to requirements with Weaver, _an interactive tool that supports requirements elicitation for guiding model testing_. Our goal is to provide comprehensive external knowledge, encouraging testers to systematically explore diverse concepts beyond their biases (Figure 2), while balancing the completeness of requirements collection and the effort of practitioners exploring these requirements. Weaver systematically generates knowledge bases (KB) by querying large language models (LLMs) with well-defined relations from ConceptNet, allowing testers to elicit requirements _for any use cases_ (as opposed to being limited by pre-built KBs). For example, in Figure 3, from a _seed concept_ "online toxicity", the LLM generates a KB showing "spreading misinformation" with relation "done via." To not overwhelm users and encourage iterative exploration, Weaver shows only a subset of the concepts from the KBs, balancing relevance and diversity, while allowing users to still explore more concepts on demand.
We demonstrate the usefulness and generality of Weaver through a user study and case studies. In the user study (§4), users identified more, as well as more diverse concepts worth testing when using Weaver. Collectively, they found more than 200 failing test cases for stance detection with zero-shot ChatGPT (gpt-turbo-3.5). Our case studies (§5) further show that Weaver can help practitioners test models in real-world settings, from transcript summarization to code understanding.
## 2 Weaver
Our key idea is to provide a knowledge base to support _systematic_ exploration of domain concepts to guide model testing. This allows testers to consider requirements broadly, mitigating their biases as testers otherwise tend to opportunistically explore concepts in local areas (Figure 2). Our tool, Weaver, has three primary building blocks: (1) An LLM-generated knowledge base for a given testing task, for encouraging more _systematic_ and diverse requirement elicitation beyond individual biases; (2) a graph-inspired recommendation paradigm that prioritizes _diverse yet relevant_ concepts for user inspection; and (3) an intuitive interface that can be easily paired with any test case creation methods, for supporting users to _interactively navigate_ the knowledge base(s) for their own purposes. Below, we walk through the design choices and corresponding rationales.
### LLM-generated Knowledge Base
To support testers to explore different concepts relevant to the problem, Weaver needs to **provide comprehensive knowledge beyond individual testers' biases.** As such, we choose to power Weaver using _knowledge bases (KBs) generated by LLMs_. As LLMs store diverse world knowledge that can be easily extracted through prompting Cohen et al. (2023), they empower Weaver to support a wide range of domains, tasks, and topics.
We start knowledge base construction with a _seed concept_, i.e., a user-provided high-level term that can represent their tasks well. These seeds can be as simple as the task name and description, or can be more customized depending on user needs. Using the seed, we will then automatically query an
Figure 2: Local exploitation (left) vs. global exploration (right). Most model testing is opportunistic in terms of _what to test_, often “hill-climbing” by searching near existing problems, with the risk of getting stuck in local areas. In contrast, Weaver supports exploring concepts globally across the entire space.
LLM2 to build a partial knowledge base of related concepts Wang et al. (2020); Cohen et al. (2023). Specifically, we iteratively prompt LLMs for entities or concepts that have different relations to the queried concept, using fluent zero-shot prompts paraphrased from well-established relations used in knowledge bases. For example, prompting LLMs with "_List some types of online toxicity._" can help us extract specific _TypeOf_ online toxicity. By default, we use 25 relations from ConceptNet3 Speer et al. (2017), e.g., _MotivatedBy_ and _LocatedAt_, and manually curated corresponding zero-shot templates. These relations prove to be reusable across many different domains in our studies, but users can also specify custom relations they want to explore (a complete list of prompts in Appendix A).
Footnote 2: We used OpenAI's text-davinci-003 Ouyang et al. (2022), but LLMs with similar instruction-following capabilities should all be useful (see additional experiments on llama-2-13b-chat in Appendix F).
Footnote 3: We used ConceptNet relations because they represent generic semantic relations, which we expect to be more or less generalizable—an assumption that is validated by our user study and case studies. In contrast, alternative KBs (e.g., WikiData, DBpedia) tend to focus on more specific types of semantic relations that are biased towards certain domains.
As the KB is used to support exploration (explained more below), we initially pre-generate two layers of the KB, and iteratively expand the knowledge base based on user interactions.
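To make this construction loop concrete, the sketch below shows how relation templates could drive the iterative expansion. The `RELATION_TEMPLATES` entries and the `query_llm` helper are illustrative assumptions (the paper uses 25 ConceptNet relations and OpenAI's text-davinci-003, with the actual prompts in Appendix A); this is a minimal sketch of the idea, not Weaver's implementation.

```python
# Hypothetical relation templates in the style described above; the "DoneVia"
# entry mirrors the "done via" relation mentioned for "online toxicity".
RELATION_TEMPLATES = {
    "TypeOf": "List some types of {concept}.",
    "MotivatedBy": "List some motivations behind {concept}.",
    "DoneVia": "List some ways in which {concept} is done.",
}

def query_llm(prompt: str) -> list[str]:
    """Placeholder for an instruction-following LLM call that parses the
    model's reply into a list of short concept strings."""
    raise NotImplementedError

def expand_kb(seed: str, layers: int = 2) -> list[tuple[str, str, str]]:
    """Grow a partial KB of (parent, relation, child) edges around a seed
    concept, pre-generating two layers; deeper expansion happens on demand."""
    edges, frontier = [], [seed]
    for _ in range(layers):
        next_frontier = []
        for node in frontier:
            for relation, template in RELATION_TEMPLATES.items():
                for child in query_llm(template.format(concept=node)):
                    edges.append((node, relation, child))
                    next_frontier.append(child)
        frontier = next_frontier
    return edges
```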
### Recommend Diverse & Relevant Concepts
A comprehensive KB comes at a higher cost of exploration--a single tester can easily get overwhelmed if presented with the entire KB. To assist users to navigate through the KB more efficiently, we employ the "overview first, details-on-demand" taxonomy Shneiderman (1996): We provide initial recommendations as starting points to user explorations. In particular, we strive to **make recommendations that are diverse** (so as to bring diverse perspectives to users) **but still relevant** (so users do not feel disconnected from their goals).
The trade-off between _relevant_ and _diverse_ resonates with the common exploration-exploitation trade-off in information retrieval Athukorala et al. (2016), and can be naturally translated to a graph problem: If we have all candidate concepts to form a fully-connected graph, where edge weights are distances between different concepts and node weights are the concepts' relevance to the queried concept, then our goal becomes to recommend a _diverse_ subset where concepts have large distances between each other, while they are still _relevant_ in the sense that they occur frequently in the context of the queried concept. Essentially, we aim to find a subgraph \(G^{\prime}\) of size \(k\) that maximizes a weighted (\(\alpha>0\)) sum of edge weights \(w_{E}\) (diversity) and node weights \(w_{V}\) (relevance):
\[\arg\max_{G^{\prime}\subset G,\;|G^{\prime}|=k}\;w_{E}(G^{\prime})+\alpha\cdot w_{V}(G^{\prime})\]
To build the graph, we measure concept differences with cosine distance between concept embeddings using SentenceBERT Reimers and Gurevych (2019), and measure relevance with the perplexity of sentence "_[concept] often occurs in the context of [queried_concept]_", using GPT-2 Radford et al. (2019). Since finding the optimal subgraph is computationally expensive, we apply the classic greedy peeling algorithm Wormald (1995) to approximate it in linear time. That is, we greedily remove nodes with the smallest weights (sum of node and all edge weights) one at a time until the graph size reaches \(k\) (\(=10\) for initial recommendation, but grows with user expansion). We empirically show that the recommended concepts are of high quality and diverse in our evaluation.
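As an illustration, a minimal sketch of the peeling step follows. For clarity it recomputes node weights at every iteration (quadratic) rather than maintaining them incrementally as a linear-time implementation would; the `dist` and `relevance` inputs are assumed to be precomputed from the SentenceBERT distances and (negated) GPT-2 perplexities described above.

```python
def greedy_peel(concepts, dist, relevance, k=10, alpha=1.0):
    """Select k concepts that are mutually distant (diverse) yet relevant by
    greedily removing the node with the smallest total weight until k remain.
    `dist[a][b]` is an embedding distance; `relevance[c]` is a score where
    higher means more relevant (e.g., negative LM perplexity)."""
    remaining = set(concepts)
    while len(remaining) > k:
        def total_weight(c):
            # node weight (relevance) plus all incident edge weights (diversity)
            return alpha * relevance[c] + sum(dist[c][o] for o in remaining if o != c)
        remaining.remove(min(remaining, key=total_weight))
    return remaining
```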
### Interactive Interface for Exploration
Besides the recommended starting points, we allow users to **iteratively and interactively locate their concepts of interest.** Weaver visualizes the knowledge base in a tree structure (Figure 3), a representation that is familiar to most ML practitioners.
Figure 3: Weaver interface, where users can interactively explore concepts in the LLM-generated KB to elicit requirements for model testing.
The knowledge base starts with (1) the seed concept users specify, with each recommended concept represented as (2) a node in the tree, accompanied by its relation to the parent concept. The interface allows users to identify concepts by (3) diving deeper and (4) exploring broadly before (5) selecting a concept to test. Alternatively, users can also distill personal knowledge by (6) creating concepts manually.
To assist users in creating concrete test cases, Weaver incorporates AdaTest Ribeiro and Lundberg (2022) as the default test case creation method, which uses LLMs to suggest test cases. However, the design of Weaver is compatible with any other techniques to test models once requirements are identified (e.g., Zeno, Cabrera et al., 2023). The full interface including the AdaTest integration can be seen in Appendix B.
## 3 Intrinsic Evaluation
As the primary goal of Weaver is to provide external knowledge to guide testing, it is important that the knowledge provided is comprehensive in the first place. Here, we quantitatively evaluate:
1. _How comprehensive are the knowledge bases generated by_ Weaver?
**Tasks, data, and metrics.** We select four tasks for the evaluation: Hateful meme detection, Pedestrian detection, Stance detection for feminism, and Stance detection for climate change (task descriptions in Appendix C). These tasks cover diverse domains and modalities, and importantly, provide us with gold concepts that can be used to evaluate our LLM-generated KB. The first two tasks have been studied in prior work, and we directly use their ground-truth concepts collected from existing documents Barzamini et al. (2022) and user studies Lam et al. (2023). For the last two tasks, we aggregate all concepts identified by 20 participants without using Weaver as part of our user study (discussed later in §4), which we consider as our ground truth. Intuitively, such aggregation should help represent what concepts are generally deemed important. As shown in Table 1, the tasks have on average \(144\) ground-truth concepts.4
Footnote 4: All ground-truth concepts are shared at [https://figshare.com/s/481a69af1b36dd76088](https://figshare.com/s/481a69af1b36dd76088).
Independently, we generated a knowledge base for each task using Weaver with default relations. We derived the seed concepts directly from the task names: (1) _"hateful meme"_, (2) _"pedestrian"_, (3) _"feminism"_, and (4) _"climate change."_
We evaluate the comprehensiveness of the generated knowledge using _recall_, i.e., the fraction of existing concepts that also appear in the KB. Since there are many phrasing variations of the same concept, we decide that a concept is in the KB if it appears exactly the same in the KB, or our manual check decides that it matches one of the 10 most similar concepts from the KB, as measured by the cosine distance (cf. §2.1). We established that the manual process is reliable by evaluating inter-rater reliability where two authors independently labeled a random sample of 50 concepts, finding substantial agreement (\(\kappa=69.4\%\)).
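The retrieval step of this matching protocol could look like the sketch below; the specific checkpoint (`all-MiniLM-L6-v2`) is an assumption, since the paper only states that SentenceBERT embeddings and cosine distance are used.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SentenceBERT variant

def top_candidates(gold_concept: str, kb_concepts: list[str], top_k: int = 10):
    """Retrieve the KB concepts closest to a ground-truth concept by cosine
    similarity; a human rater then judges whether any of them is a true match."""
    gold_emb = model.encode([gold_concept], convert_to_tensor=True)
    kb_emb = model.encode(kb_concepts, convert_to_tensor=True)
    hits = util.semantic_search(gold_emb, kb_emb, top_k=top_k)[0]
    return [kb_concepts[h["corpus_id"]] for h in hits]
```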
We also evaluate the validity of the generated knowledge using _precision_, i.e., the fraction of KB edges that are valid. Note that because our ground truths are incomplete by nature (collected from dataset analysis and user study), KB edges that are not in the ground truths can still be valid. Following prior work Cohen et al. (2023), we performed manual validation on sampled KB edges. We sampled 50 edges from each of the four generated KBs.
**Results.** Overall, our KBs cover 91% of ground-truth concepts on average (Table 1), with 81% of sampled generated edges being valid. Qualitatively, we found that there are two distinct types of concepts the KB failed to cover: First, there are some very specific concepts (e.g., _old photo_ in hateful meme detection). Although the 2-layer KB does not cover them, it does often cover their hypernyms (e.g., _photo_). Therefore, these concepts can be discovered if users choose to explore deeper. Second, some concepts are interactions of two concepts (e.g., _fossil fuel uses in developing countries_ in climate change stance detection). These can be identified by users manually, as both of their components (_fossil fuel uses_ and _developing countries_) usually already exist in the KB.
## 4 User Study
Does Weaver support effective requirements elicitation? We conduct a user study to evaluate:
2. _To what degree does_ Weaver _help users explore concepts faster?_
3. _To what degree does_ Weaver _help users explore concepts broadly?_
4. _How much does_ Weaver _mitigate user biases when exploring concepts?_
We expect that our interaction design (§2.3) supports faster exploration (Q.2) and that the recommendations (§2.2) support broader and less biased
exploration (Q.3 and Q.4).
### Study Design
**Conditions.** We design an IRB-approved user study as a _within-subject controlled experiment,_ where participants test models in two conditions: A _treatment condition_, where users use Weaver to find concepts for testing, and a _control condition_, where users add the concepts manually while they explore test cases. In both conditions, users have access to AdaTest's LLM-based test case suggestions (cf. §2.3). In essence, the _control_ interface is a re-implementation of AdaTest with Weaver's interface and interaction experience.
**Tasks and models.** We select two tasks of similar difficulty for our user study: Stance detection for feminism, and stance detection for climate change. They are accessible to participants from different backgrounds. We had participants test the performance of zero-shot ChatGPT (OpenAI, 2022) for both tasks, as we observed that it easily outperformed any available fine-tuned models on Huggingface--the latter failed at simple test cases (full prompts in Appendix A).
**Procedure.** We recruited 20 participants (graduate students with varying ML/NLP experience, details in Appendix D.1) for a 90-minute experiment session. We started by walking through the study instructions and asked them to try Weaver in an interactive tutorial session. Then participants tested the two aforementioned stance detection models for 30 minutes each, one in the _treatment_ condition and one in the _control_ condition. To mitigate learning effects, we use a Latin square design (Box, 2009) with four groups, counterbalancing (1) which condition a participant encounters first, and (2) which model is tested first. Within each session, they were first asked to perform model testing for 25 minutes, and then identify (select or create) concepts worth future testing for 5 minutes. The first phase of model testing is designed to ground participants in what concepts are worth testing. The second phase of concept exploration is designed to approximate a longer time of model testing. This final design was derived from an earlier pilot study, where we observed that writing test cases for each concept took more time than identifying interesting concepts (Weaver's objective). In the end, participants filled out a post-study survey (details in Appendix D.2). Participants were compensated for their time.
**Metrics and analysis.** We use two measurements to approximate participants' exploration procedure: (1) the number of concepts they explore (representing exploration speed, Q.2), and (2) the number of _distinct_ concepts they explore (Q.3).
Specifically, for _distinctiveness_, we want to distinguish the local vs. global exploration patterns (cf. Figure 2), which requires us to locate _clusters_ of similar concepts, or concepts that only differ in granularity. Quantitatively, this is reflected through inter-relevance between concepts, e.g., _rising sea level_ should be considered close to _sea surface temperature increase_ but distinct from _waste management_. To find a set of distinct concept clusters, we again measure the concept distance using SentenceBERT, and run Hierarchical Clustering (Ward Jr, 1963) on all available concepts collectively selected or created by our 20 user study participants, which, as argued in §3, forms a representative set of what end users may care about for a given task. Note that we do not use all concepts from our KB for clustering as it would influence the ground truth. Hierarchical clustering allows us to choose concept clusters that have similar granularities using a single distance threshold. Empirically, we use the threshold of 0.7, which produces reasonably distinct clusters for both tasks (41 and 46 clusters for _feminism_ and _climate change_ respectively, with an average size of 6.1 concepts).
| **Task** | **Recall** | **Precision** | **# Concept** |
| --- | --- | --- | --- |
| Hateful meme detection | 93.1% | 88.0% | 101 |
| Pedestrian detection | 91.8% | 74.0% | 146 |
| Stance detection for feminism | 86.9% | 84.0% | 145 |
| Stance det. for climate change | 91.4% | 76.0% | 185 |
| **Average** | 90.6% | 80.5% | 144 |

Table 1: Knowledge bases generated by Weaver cover 90.6% of existing concepts on average.
As such, _distinctiveness_ is represented by the _number of hit clusters_ in each user's exploration.
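A sketch of how this metric could be computed is shown below. Because Ward linkage requires Euclidean distances, average linkage over precomputed cosine distances is used here as an assumption, as is the SentenceBERT checkpoint.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_distances

def hit_cluster_count(user_concepts, all_concepts, threshold=0.7):
    """Cluster the pooled participant concepts with a single distance
    threshold, then count how many clusters one user's concepts touch."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    dist = cosine_distances(model.encode(all_concepts))
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=threshold,
        metric="precomputed", linkage="average").fit_predict(dist)
    cluster_of = dict(zip(all_concepts, labels))
    return len({cluster_of[c] for c in user_concepts})
```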
We analyze both measurements with a repeated-measures ANOVA, which highlights the significance of our test condition (whether participants use Weaver) while considering the potential impact from other independent variables. In our analysis, we test to what degree our tool, the task, and the order (tool first or tool last) explain variance in the outcome variables. Detailed analysis results can be found in Appendix D.3.
### Results
**Weaver helps users identify more concepts (Q.2).** We first observe that with Weaver, participants identified 57.6% (\(p<0.01\)) more concepts (Figure 4). This is likely because users can more easily explore a wider range of concepts with the external knowledge base, as confirmed by participant P6: "_... (KB) gives ideas beyond what I had in mind so I explored a wider base of concepts."_
We also observe that bug-finding is relatively independent of concept exploration. On average, participants found around 11 failing test cases (0.44 per minute), regardless of whether they used Weaver or not. Since testing is orthogonal, participants who explore more concepts will find more failing test cases. We expect that if participants test longer, those with Weaver will find more bugs while others will run out of concepts to test.
**Weaver helps users cover more concept clusters (Q.3).** More interestingly, we observe that with Weaver, participants not only found more concepts but also covered 47.7% (\(p<0.01\)) more clusters in the space, i.e., they also explored more diverse concepts (Figure 4). This aligns with the survey responses, where 80% of participants agree that Weaver helps them test the model more holistically and 76% of participants agree that Weaver helps them find more diverse model bugs. We conjecture that this is because users with Weaver explore more concepts not only in quantity (Q.2) but also in diversity, which is confirmed by many participants, e.g., "_... (KB) encouraged me to explore more areas of the domain than I would have otherwise considered"_ (P10).
Looking at their exploration trajectory (Figure 5), we see evidence indicating that Weaver enables users to _continuously discover distinct concepts_. In contrast, participants in the _control_ condition converged to hitting previously identified concept clusters as they identified more concepts. We observed that without Weaver, participants tended to refine existing concepts later in the exploration, as exploring new distinct areas becomes increasingly difficult.
These contrasting trajectories eventually lead to different exploration results. As reflected in Figure 6, the participant without Weaver performed noticeably more local exploitation, finding highly related concepts in local areas, whereas Weaver helped the other participant explore more diverse
Figure 4: Participants with Weaver identified more concepts and hit more clusters.
Figure 5: Participants without Weaver converged to hitting previously identified concept clusters as they identified more concepts. The gap between the two groups widened over the process.
Figure 6: We project the SentenceBERT embeddings of concepts explored by two participants (P15 and P16) into a 2D space, using t-SNE (van der Maaten and Hinton, 2008). Without Weaver, participants explore less space, as they performed more local exploitation than global exploration.
concepts globally, without losing the ability to dive deeper into a few local areas.
**Weaver shifts users towards a different concept distribution (Q.4).** We also observe that participants collectively explored different concepts with Weaver, as shown in Figure 7. Some concepts (e.g., _violence_) are much more explored by participants with Weaver. This suggests that Weaver helps mitigate participants' own biases and explore new concepts, as supported by participant P13: "_... (KB) gives some inspiration in concepts I would not have otherwise thought of..."_
That said, Weaver also brings in its own biases, e.g. participants with Weaver rarely explored concepts like _STEM_ compared to those without, possibly because they were too heavily anchored by the suggested concepts. This indicates that humans and knowledge bases are complementary - future work can support humans to better exploit their prior knowledge while exploring diverse concepts.
## 5 Case Studies
Using two case studies, we demonstrate that Weaver can help practitioners test their own models and find various bugs in real-world settings, and has the potential to provide support beyond post-hoc model testing. In the studies, we provided extensive support to the practitioners, including integrating Weaver into their natural evaluation environment (Jupyter Notebook), accompanying each user for approximately three hours as they explored their models, and offering feedback and discussion whenever necessary (e.g., as practitioners brainstormed their seed concepts).
**Case selection.** We approached researchers in our contact network to identify projects that were actively developing and improving models to be integrated into software products, such that (1) model testing has real stakes for them, and (2) the model needs to meet real-world requirements of a product, not just succeed on a standard benchmark.
We ended up recruiting practitioners working on two projects that matched our scope to try Weaver. The first practitioner (C1) is building a pipeline for knowledge transfer, where they prompt an LLM to summarize content from transcripts into instructions. The second practitioner (C2) is building an IDE plugin to help developers understand unfamiliar code, developing LLM prompts to generate text summary for code segments. While these are two distinct scenarios, their shared challenge is that for practitioners working on novel tasks, it is often non-trivial to perform prompt engineering, especially because they do not have appropriate datasets for evaluating their prompts.
The case studies were IRB-approved and participants were compensated for their time.
**Weaver supports quick and effective iterations.** Both C1 and C2 started with a seed concept and then refined the seed concept at least once based on their observations. For example, C2 first tried seed concepts _"challenges for summarizing a code script"_ and _"reasons why people look for code summary"_, finding the recommended concepts generic and not their major concerns. Reflecting on this process, they identified the key scenario their plugin needs to support: (1) it targets novice programmers and (2) the most important application domain is data visualization. After this, they tried the seed concept _"specific challenges that novice programmers might have in comprehending data visualization code"_ and found recommended concepts much more helpful.
**Weaver helps practitioners find new model bugs by augmenting their existing analyses.** While we did not have participants rate each concept they explored, based on their think-aloud reflection, we note that they were able to find
Figure 7: Visualizing the identified concepts for _feminism_ by their corresponding clusters, we see participants in different conditions had different focuses in model testing. This shows that Weaver suggests concepts complementary to human intuitions. In each test, the red strike-through labels are the wrong model predictions and the green ones are user-specified ground-truths.
many helpful concepts in a short amount of time. Even though practitioners have been working on their (LLM-backed) models for a while, they both obtained new insights into their models. First, Weaver helped them observe issues they did not consider before. For example, among the seven concepts C1 tested, they found that the resulting instructions are always chronological even when there are detours in the input and step reordering is desired. Second, Weaver also helped them turn their prior, often fuzzy knowledge of problems or requirements into concrete testable concepts. For example, C1 turned their vague notion _"useful summaries should not take transcripts literal"_ into concrete theories, including _"behind the transcript, there is a hidden thought process important for identifying key action steps."_ Third, they were able to confirm model deficiencies they already suspected through systematic tests (e.g., _"transcript summaries are often too verbose"_). Similarly, C2 tested seven concepts and found _"different parameters for customization"_ and _"when to use different data visualization APIs"_ particularly novel and insightful.
Notably, while C1 used AdaTest for testing models on different concepts, C2 reused test cases from their existing datasets, showing Weaver's flexibility with different test case creation techniques. That C2 still discovered new insights within their own dataset demonstrates Weaver's capability for encouraging nuanced testing following requirements.
**Weaver is useful beyond testing models after the fact.** While we mostly position Weaver as a model testing tool, we find that its support for _requirements elicitation_ supports the entire model development cycle (cf. the V-model, Fig. 1).
Although practitioners sometimes found it initially challenging to define seed concepts, they found the process itself valuable. For example, C2 eventually settled on _"specific challenges that novice programmers might have in comprehending [domain] code"_; they reflected on how finding a good seed nudged them to state their goal explicitly _for the first time_. For them, this reflection happened too late to radically redesign their product, but it shows that Weaver has the potential to support early-stage requirements engineering both for products and models. Meanwhile, the concepts identified with Weaver inspired C1 to pursue model improvements. They experimented with different changes to prompts, encoding context for concepts they found challenging (e.g., step ordering).
## 6 Related Work
**Requirements elicitation.** Requirements engineering has been extensively studied Van Lamsweerde (2009). Despite many calls for the importance of requirements in ML (e.g., Rahimi et al., 2019; Vogelsang and Borg, 2019), requirements in ML projects are often poorly understood and documented Nahar et al. (2022), which means that testers can rarely rely on existing requirements to guide their testing. Requirements elicitation is usually a manual and laborious process (e.g., interviews, focus groups, document analysis, prototyping), but the community has long been interested in automating parts of the process Meth et al. (2013), e.g., by automatically extracting domain concepts from unstructured text Shen and Breaux (2022); Barzamini et al. (2022). We rely on the insight that LLMs contain knowledge for many domains that can be extracted as KBs Wang et al. (2020); Cohen et al. (2023), and apply this idea to requirements elicitation.
**Model evaluation, testing, and auditing.** Recent work on ML model evaluation (e.g., Ribeiro et al., 2020; Goel et al., 2021; Rottger et al., 2021; Yang et al., 2022) has pivoted from traditional i.i.d. accuracy evaluation to nuanced evaluation of model behaviors. As surveyed in our prior work Yang et al. (2023), this line of research uses various test creation techniques, including slicing, perturbations, and template-based generation. While providing many useful tools, these approaches often assume an existing list of requirements and rarely engage with the question of _what to test_. Existing research relied mostly on the knowledge of particular researchers, resulting in incomplete and biased requirements. For example, Ribeiro et al. (2020) explicitly state that their list of requirements in CheckList is not exhaustive and should be augmented by users with additional ones that are task-specific. Through LLM-assisted requirements elicitation, Weaver helps users identify _what to test_ systematically.
Various alternative methods have been proposed for identifying what to test. For example, error analysis (e.g., Naik et al., 2018; Wu et al., 2019) and slice discovery Eyuboglu et al. (2022) can help identify issues in existing datasets, but datasets are often incomplete and biased Rogers (2021), and can even be missing for emerging LLM applications where no dataset has been
pre-collected. Dataset-agnostic approaches like _adaptive testing_ Ribeiro and Lundberg (2022); Gao et al. (2022) help users iteratively ideate concepts abstracted from generated test cases, but, as we confirmed, users tend to explore only _local_ areas. These approaches engage in _bottom-up_ style elicitation, which is reactive and may fare poorly with distribution shift. In contrast, Weaver engages in _top-down_ style elicitation, a more proactive process grounded in an understanding of the problem and domain.
Furthermore, algorithmic auditing Metaxa et al. (2021) elicits concerns from people with different backgrounds, usually on fairness issues, to avoid being limited by the ideas of a single tester. However, it can be challenging to recruit, incentivize, and scaffold the auditors Deng et al. (2023). In a way, Weaver might complement such work by providing diverse requirements for individual testers or crowd auditors.
## 7 Discussion and Conclusion
In this work, we propose Weaver, a tool that uses knowledge bases to guide model testing, helping testers consider requirements broadly. Through user studies and case studies, we show that Weaver helps users identify more, as well as more diverse, concepts worth testing, successfully mitigates users' biases, and supports real-world applications. Beyond being a useful testing tool, the underlying concept of Weaver has interesting implications for ML model testing and development, which we detail below.
**Model testing in the era of LLMs.** Throughout our user studies and case studies, we focused on testing "models" achieved by prompting LLMs. Here, we would like to highlight the importance of _requirements_ in such cases. LLMs are increasingly deployed in different applications, and traditional model evaluations are becoming less indicative. With these models trained on massive web text, it is unclear what should be considered as "in-distribution evaluation data." Instead, the evaluation objectives heavily depend on what _practitioners_ need, which should be reflected through well-documented requirements.
Meanwhile, as most practitioners are not NLP experts, they face challenges articulating how and what they should test about their prompted models Zamfirescu-Pereira et al. (2023). As their use cases become more nuanced, it is also less likely for them to find pre-existing collections on important concepts. As such, enabling each individual to identify _what-to-test_ is essential. We hope Weaver can be used for democratizing rigorous testing, just as LLMs democratized access to powerful models. Still, Weaver currently relies purely on practitioners to identify requirements worth testing, which may result in mismatched requirement granularity (cf. §3). Future work can explore more complex structures that can represent knowledge (e.g., from KBs to knowledge graphs), and advanced recommendation mechanisms for practitioners to find the best requirements to explore first.
**Rethinking requirements for ML development.** Though we position Weaver to ground model testing in requirements, we expect it to be useful also in other development stages (cf. §5). For example, we expect that it can help developers think about high-level goals and success measures for their products Rahimi et al. (2019); Passi and Barocas (2019), to guide development early on. Building on the observation that requirement-based testing may help practitioners perform prompt engineering, we envision that future practitioners can use Weaver for rapid prototyping, where they identify unique requirements, pair them with corresponding test cases, and achieve better overall performance either through ensembled prompts Pitis et al. (2023) or prompt pipelines Wu et al. (2022). Moreover, elicited model requirements themselves can serve as descriptions and documentation, which can foster collaboration and coordination in interdisciplinary teamwork Nahar et al. (2022); Subramonyam et al. (2022). Notably, we believe Weaver can support such iterations because it is built to be lightweight. In prior research, requirements engineering has sometimes been criticized as too slow and bureaucratic, making developers less willing to dedicate time to this step. In contrast, Weaver allows developers to easily adjust their exploration directions (through seeds and interactions), which makes it feasible to integrate into more agile and iterative development of ML products where requirements evolve quickly.
### Limitations
**Availability of domain knowledge in LLMs.** LLMs encode a vast amount of knowledge, but may not include very domain-specific knowledge for specialized tasks, very new tasks, or tasks where
relevant information is confidential. Our technical implementation fundamentally relies on extracting knowledge from LLMs and will provide subpar guidance if the model has not captured relevant domain knowledge. Conceptually our approach to guide testing with domain knowledge would also work with other sources of the knowledge base, whether manually created, extracted from a text corpus Shen and Breaux (2022); Barzamini et al. (2022), or crowdsourced Metaxa et al. (2021).
**Impacts from biases in LLMs.** Weaver uses LLMs to build knowledge bases such that users can elicit diverse requirements. However, LLMs themselves are found to be biased, sometimes untruthful, and can cause harm Nadeem et al. (2021); Kumar et al. (2023). Therefore, users should carefully interpret results from Weaver in high-stakes applications.
**Threats to validity in human-subject evaluations.** Every study design has tradeoffs and limitations. In our evaluation, we intentionally combined multiple different kinds of user studies to triangulate results.
First, we conducted a user study as a controlled experiment. While the results are very specific and created in somewhat artificial settings and must be generalized with care (limited external validity), the study design can enact a high level of control to ensure high confidence in the reliability of the findings in the given context with statistical techniques (high internal validity). For example, regarding external validity, results may not generalize easily to other tasks that require different amounts of domain understanding or are differently supported by the chosen test case creation technique, and our participant population drawn from graduate students with a technical background may not equally generalize to all ML practitioners. There are also some threats to internal validity that remain, for example, despite careful control for ordering and learning effects with a Latin square design and ensuring that the four groups were balanced in experience ('years of ML experience' and 'NLP expertise' asked in the recruitment survey before assignment), we cannot control for all possible confounding factors such as prior domain knowledge, gender, and motivation. In addition, we rely on clustering and similarity measures among concepts for our dependent variables, which build on well-established concepts but may not always align with individual subjective judgment.
Second, we conducted case studies in real-world settings with practitioners (high external validity) but can naturally not control the setting or conduct repeated independent observations (limited internal validity). With only two case studies, generalizations must be made with care.
This tradeoff of external and internal validity is well understood Siegmund et al. (2015). Conducting both forms of studies allows us to perform some limited form of triangulation, increasing confidence as we see similar positive results regarding Weaver's usefulness for discovering diverse concepts.
**Subjectivity in human judgments.** All model testing requires judging whether a model's prediction for a test example is correct. We noticed that user study participants and sometimes also case study practitioners struggled with determining whether model output for a specific test example was a problem, and multiple raters may sometimes disagree. For our purposes, we assume it is the tester's responsibility to identify which model outputs they consider problematic, and we do not question any provided labels. This, however, reminds us that, like data annotation Santy et al. (2023), any model testing process will likely bring in testers' biases, as they get to decide what is right and what is wrong. In practice, a broader discussion among multiple stakeholders may be required to identify what model behavior is actually expected, and a decomposition of model testing using requirements might be helpful to foster such engagement.
### Ethics Statement
**Research Reproducibility.** While our experiments were mostly conducted on the closed API (text-davinci-003) provided by OpenAI, none of the conceptual contributions of our paper relate to specific models or APIs. The concrete evaluation results depend on how humans interact with specific models, but the approach can be used with other models. Indeed, our extra experiments with llama-2-13b-chat on the climate change task show that the generated concepts from open-source models achieve substantial levels of recall (83% vs. 91% originally). This supports that the idea behind Weaver is reproducible. While users may see somewhat different KBs through different runs of the same/different LLMs, they get similar chances of seeing useful concepts, receive a similar level of
support on requirement elicitation (our core contribution), and will be able to yield similar model testing effectiveness.
**Human-subject Experiments.** Our studies were approved by our IRB before they were conducted, as is standard practice for human-subject experiments. We recruited all participants through emails, and all of them are graduate students with varying ML/NLP experience (see details in Appendix D.1). The participants were compensated for their time ($20 per hour). As part of testing, they may write or review text that is abusive, dangerous, hateful, or offensive--they were made aware of this fact and could end participation at any time.
## Acknowledgements
This work was supported in part by the National Science Foundation (#2131477) and gift funds from Adobe, Oracle, and Google. Work by Brower-Sinning and Lewis was funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center (DM23-2019).
|
2301.08237 | LoCoNet: Long-Short Context Network for Active Speaker Detection | Active Speaker Detection (ASD) aims to identify who is speaking in each frame
of a video. ASD reasons from audio and visual information from two contexts:
long-term intra-speaker context and short-term inter-speaker context. Long-term
intra-speaker context models the temporal dependencies of the same speaker,
while short-term inter-speaker context models the interactions of speakers in
the same scene. These two contexts are complementary to each other and can help
infer the active speaker. Motivated by these observations, we propose LoCoNet,
a simple yet effective Long-Short Context Network that models the long-term
intra-speaker context and short-term inter-speaker context. We use
self-attention to model long-term intra-speaker context due to its
effectiveness in modeling long-range dependencies, and convolutional blocks
that capture local patterns to model short-term inter-speaker context.
Extensive experiments show that LoCoNet achieves state-of-the-art performance
on multiple datasets, achieving an mAP of 95.2%(+1.1%) on AVA-ActiveSpeaker,
68.1%(+22%) on Columbia dataset, 97.2%(+2.8%) on Talkies dataset and
59.7%(+8.0%) on Ego4D dataset. Moreover, in challenging cases where multiple
speakers are present, or face of active speaker is much smaller than other
faces in the same scene, LoCoNet outperforms previous state-of-the-art methods
by 3.4% on the AVA-ActiveSpeaker dataset. The code will be released at
https://github.com/SJTUwxz/LoCoNet_ASD. | Xizi Wang, Feng Cheng, Gedas Bertasius, David Crandall | 2023-01-19T18:54:43Z | http://arxiv.org/abs/2301.08237v2 | # LoCoNet: _L_ong-Short _Co_ntext _N_etwork for Active Speaker Detection
###### Abstract
Active Speaker Detection (ASD) aims to identify who is speaking in each frame of a video. ASD reasons from audio and visual information from two contexts: long-term intra-speaker context and short-term inter-speaker context. Long-term intra-speaker context models the temporal dependencies of the same speaker, while short-term inter-speaker context models the interactions of speakers in the same scene. These two contexts are complementary to each other and can help infer the active speaker. Motivated by these observations, we propose LoCoNet, a simple yet effective Long-Short Context Network that models the long-term intra-speaker context and short-term inter-speaker context. We use self-attention to model long-term intra-speaker context due to its effectiveness in modeling long-range dependencies, and convolutional blocks that capture local patterns to model short-term inter-speaker context. Extensive experiments show that LoCoNet achieves state-of-the-art performance on multiple datasets, achieving an mAP of 95.2%(+1.1%) on AVA-ActiveSpeaker, 68.1%(+22%) on Columbia dataset, 97.2%(+2.8%) on Talkies dataset and 59.7%(+8.0%) on Ego4D dataset. Moreover, in challenging cases where multiple speakers are present, or face of active speaker is much smaller than other faces in the same scene, LoCoNet outperforms previous state-of-the-art methods by 3.4% on the AVA-ActiveSpeaker dataset. The code will be released at: [https://github.com/SJTUwxz/LoCoNet_ASD](https://github.com/SJTUwxz/LoCoNet_ASD).
## 1 Introduction
Real-world, interactive computer vision systems will need to recognize not only the objects, people, and other physical properties of a scene, but also the social properties -- how people interact with one another. One fundamental task is to identify, at any given moment, who is speaking in a complex scene with multiple interacting people. This Active Speaker Detection (ASD) problem [3, 4, 10, 19, 26, 30, 32, 36, 38] is important for many real-world applications, such as human-robot interaction [24, 37, 39], speech diarization [14, 15, 16, 41, 44], video re-targeting [5, 33, 45] etc.
How can we tell whether someone is speaking? Visual cues including the movement of that person's mouth and eye direction, along with the correlation of these movements with the observed audio, are often the most direct evidence. This evidence is generally speaker-independent and can be learned by comparing the person's behavior during speaking and non-speaking times. This intra-person information must be observed for a sufficient amount of time to observe both speaking and non-speaking activities. On average, people speak about 10 words per sentence and 200 words per minute [46], so at least about 6 seconds of intra-person information is needed. However, most existing ASD methods consider only short video segments of around 200ms-500ms [3, 30]. The first row of Fig 1 illustrates long-term intra-speaker context where the model can learn to identify simple cues of speaking activity detection like mouth movement and eye direction.
However, in a complex video, a person's face is often occluded, facing away from the camera, off-frame, or very small, so there is not enough direct visual information to determine if they are speaking. Fortunately, the behaviors of other people in the scene also give evidence about whether a target person is speaking [37]. The second row of Fig 1 shows an example: from the n-frame video segment on the left, although we only partially see the face of the man on the right, we easily infer that he is speaking at \(T_{n}\) because the woman in the middle turns her head and neither of the two other people open their mouths. This inter-speaker context requires only a short-term temporal window, since speaking activities of speakers within a short time range are much more correlated than speakers separated farther away in time. In the second row of Fig 1, the fact that the woman at \(T_{n}\) is looking at the man on the right provides no information on whether the man is speaking at \(T_{N}\).
Existing ASD methods have not fully used these two sources of information (long-term intra-speaker and short-term inter-speaker context). ASCNet [3] models long-term intra- and inter-speaker context. Other methods [30, 32, 4] model long-term intra-speaker and single-frame inter-speaker context, but there is little dynamic inter-speaker contextual information within a single frame. TalkNet [38]
only models long-term intra-speaker context without considering other speakers. These issues limit the effectiveness of these methods in challenging scenarios. In comparison, our method considers both long-term intra-speaker context to capture speaker-independent indicators of speaking activity and short-term inter-speaker context to infer speaking activity from other speakers. These two contexts complement each other and provide more reliable information for ASD.
With the above issues in mind, we propose LoCoNet, a simple end-to-end Long-Short Context Network. To the best of our knowledge, our network is the first work that combines long-term intra-speaker context, short-term inter-speaker context, and end-to-end training. Long-term intra-speaker context is modeled using a self-attention mechanism [40] due to its known effectiveness in modeling long-range dependencies, and interactions in short-term inter-speaker context are modeled with convolutional blocks that capture local patterns. In addition, while most ASD methods use vision backbones for audio encoding [3, 4, 38] due to high temporal downsampling in existing audio backbones, we propose VGGFrame, which is modified from VGGish [21] and can fully leverage pretrained AudioSet [17] weights to extract per-frame audio features.
Our extensive experiments demonstrate the effectiveness of our approach. On the AVA-ActiveSpeaker dataset [36], LoCoNet achieves an mAP of 95.2%, outperforming the current state-of-the-art method EASEE [4] by 1.1% despite using a simpler visual encoder. Furthermore, LoCoNet achieves 68.1% (+22%) on the Columbia dataset [10], 97.2% (+2.8%) on the Talkies dataset [32] and 59.7% (+8%) on the Ego4D dataset [19]. Besides the overall performance, we test the robustness of LoCoNet in multiple challenging scenarios, such as where there are multiple speakers or where the face of the active speaker is small. Results also show that LoCoNet has higher performance than other methods in complex scenes.
## 2 Related Work
### Active Speaker Detection (ASD)
Most recent techniques for ASD can be characterized in terms of two salient dimensions: the type of encoder network and the training mechanism.
**Types of encoder network.** For the visual encoder, most ASD methods extract visual embeddings using 2D CNNs [20] since they require less GPU memory [3, 30, 32, 38], although some work [4, 30] use 3D CNNs [9]. For the audio encoder, existing audio backbones have high temporal downsampling which makes it hard to extract per-frame features [21, 29]. Thus most papers [3, 30, 32, 38] take Mel spectrograms [1] of the audio signal as input and use as the encoder a 2D CNN [3, 4, 32] pretrained on ImageNet [13] or trained from scratch. This could potentially harm the audio feature learning because visual pretrained weights might not be suitable for audio tasks. For our audio encoder, we propose VGGFrame which is modified from VGGish [21] to extract per-frame audio embeddings. With pretrained AudioSet [17] weights loaded, our audio encoder can make use of audio features already learned.
**Training mechanisms.** ASCNet [3], ASDNet [30], and MAAS [32] use a multi-stage training mechanism for the encoders and context modeling, which can make feature
Figure 1: **Long-term intra-speaker context and short-term inter-speaker context. In the long-term intra-speaker context modeling (upper), features of a single speaker across all temporal frames are used to model long-term relationships. In the short-term inter-speaker context modeling (lower), relations of speakers within a short-term temporal range of n frames are modeled to capture the conversation pattern. Red boxes show inactive speakers and green boxes show active speakers.**
learning difficult. TalkNet [38] and EASEE [4] show that end-to-end training can lead to better feature optimization. Our proposed model is also trained end-to-end to enable direct optimization of audio-visual embeddings and learning of context information.
### Context Modeling in ASD methods
Recent work develops various mechanisms for context modeling to refine the audio-visual features [3, 38, 4, 30, 32]. ASCNet [3] uses a non-local self-attention layer [42] to learn pairwise attention between features of all context speakers across all temporal frames, and an LSTM [22] network to refine temporal features. However, this context modeling includes many weakly-correlated or even uncorrelated speaker-temporal pairs, so that important relations may not be fully leveraged.
ASDNet [30] concatenates the aggregated features of context speakers to the audio-visual features of the target speaker in each frame, and feeds these features into an LSTM for temporal modeling. Both MAAS [32] and EASEE [4] use graph neural networks [43, 28] to model relations between the visual nodes and audio nodes of context speakers. But ASDNet, MAAS, and EASEE only consider context speakers within each timestamp, and such static information is often not useful to infer speaking activity.
Meanwhile, TalkNet [38] models long-term temporal self-attention, but only for each individual speaker. In sum, existing methods do not make good use of the two complementary contexts, i.e., long-term intra-speaker and short-term inter-speaker context. Our approach takes these two contexts as important sources of information for the ASD task, exploiting the long-range dependency modeling capability of self-attention and the local pattern learning ability of convolutional layers to model them.
## 3 Our Approach
Fig 2 provides an overview of our method, which we call LoCoNet. LoCoNet takes as input stacked consecutive face crops of the target speaker and context speakers in a video clip of length \(T\), and corresponding audio signals.
### Encoders
**Visual encoder.** Given \(T\) consecutive face crops of the target speaker in a video, we first sample \(S-1\) speakers from all candidate speakers that appear in the same scene and stack the face crops of all sampled speakers together. The target speaker is always the first element along the \(S\) axis for the convenience of evaluation. All the face tracks are stacked and converted to grayscale, denoted as \(V\in\mathbb{R}^{S\times T\times H\times W\times 1}\), where \(S\), \(T\), \(H\), \(W\) are the number of speakers, temporal length, and height and width of the face crops, respectively.
Given face crops of each speaker \(V_{i}\in\mathbb{R}^{T\times H\times W\times 1}\) as input, the visual encoder yields a time sequence of visual embeddings \(e_{v_{i}}\in\mathbb{R}^{T\times C_{v}}\) for the given speaker, where \(C_{v}\) is the visual feature dimension. The stacked visual embeddings of all the sampled speakers \(e_{v}\in\mathbb{R}^{S\times T\times C_{v}}\) represent temporal context of each speaker independently.
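For concreteness, a small bookkeeping sketch of assembling \(V\) follows; sampling of the \(S-1\) context speakers is simplified to truncation here, which is an assumption.

```python
import torch

def build_visual_input(target_track, context_tracks, s=3):
    """Stack grayscale face tracks into V of shape (S, T, H, W, 1), keeping
    the target speaker first along the S axis as described above."""
    tracks = [target_track] + context_tracks[: s - 1]   # each: (T, H, W, 1)
    return torch.stack(tracks, dim=0)                   # V: (S, T, H, W, 1)
```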
**Audio Encoder.** We need both per-frame audio and video features to do per-frame classification. However, most pre-trained audio encoders [11, 18, 21, 35] are for audio classification and thus have a high degree of temporal downsampling,
Figure 2: An overview of LoCoNet. Given a sequence of face tracks of a target person and corresponding audio, we sample \(S-1\) speakers from all other people appearing in the scene and stack their face crops together as visual input. The Audio-Visual Attention Block is used to align visual and audio features generated by visual and audio encoders. Then the audio and visual features are concatenated and fed into \(N\) blocks of the Long-Short Context Modeling module, where each block consists of a convolutional network for short-term inter-speaker modeling and an attention-based layer for long-term intra-speaker modeling. The final output is used to classify speaking activity of the target person in all frames.
which makes them difficult to apply in our case.
To solve this problem, we propose VGGFrame as our audio encoder, which can fully utilize the pretrained VGGish [21]. The architecture of VGGFrame is illustrated in Fig 3. We remove the temporal downsampling layer after block-4 and instead add a deconvolutional layer to upsample the temporal dimension. We concatenate the intermediate features with the upsampled features to extract a hierarchy of representations. VGGFrame takes as input the Mel-spectrograms \(A\in\mathbb{R}^{4T\times M}\) of the raw audio signal and outputs the audio embeddings \(h_{a}\in\mathbb{R}^{T\times C_{a}}\). To align with the \(S\) speakers, we repeat \(h_{a}\) \(S\) times to produce the audio embedding \(e_{a}\in\mathbb{R}^{S\times T\times C_{a}}\).
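A minimal PyTorch sketch of the VGGFrame idea is given below. It is not the exact Fig 3 architecture: all channel counts, kernel sizes, and the placement of the removed pooling layer are assumptions; the sketch only shows the deconvolution back to per-frame resolution and the concatenation with intermediate features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGGFrame(nn.Module):
    """VGGish-style conv stack over the (4T x M) Mel-spectrogram, a
    deconvolution restoring per-frame temporal resolution, and a concatenation
    with intermediate features to form a hierarchy of representations."""
    def __init__(self, out_dim=128):
        super().__init__()
        def block(cin, cout):  # conv + 2x downsampling in time and frequency
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.early = nn.Sequential(block(1, 64), block(64, 128))   # 4T -> T
        self.late = block(128, 256)                                # T -> T/2
        self.up = nn.ConvTranspose2d(256, 128, kernel_size=(2, 1),
                                     stride=(2, 1))                # T/2 -> T
        self.head = nn.Linear(256, out_dim)

    def forward(self, mel):                 # mel: (B, 1, 4T, M)
        mid = self.early(mel)               # (B, 128, T, M/4), per-frame in time
        deep = self.up(self.late(mid))      # (B, 128, T, M/8), upsampled to T
        deep = F.interpolate(deep, size=mid.shape[2:])  # align frequency axes
        x = torch.cat([mid, deep], dim=1)   # hierarchy of representations
        x = x.mean(dim=3).transpose(1, 2)   # pool frequency -> (B, T, 256)
        return self.head(x)                 # per-frame embeddings (B, T, C_a)
```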
### Audio-Visual Attention Block
We propose an Audio-Visual Attention Block to incorporate relevant audio cues into visual representations and vice versa. Due to the strong representation ability of attention mechanisms [40], we build this block with a dual-attention scheme,
\[h_{v}=\text{MLP}(\text{MHA}(e_{v},e_{a},e_{a})),\tag{1}\]
\[h_{a}=\text{MLP}(\text{MHA}(e_{a},e_{v},e_{v})),\tag{2}\]
\[u=\text{Concat}(h_{v},h_{a}),\tag{3}\]
where MHA\((q,k,v)\) is the multi-head attention [40] with query \(q\), key \(k\), value \(v\), MLP is a multi-layer perceptron, and Concat\((x,y)\) concatenates \(x\) and \(y\) along the channel dimension. Residual connections are added to the MHA and MLP layers.
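A minimal PyTorch sketch of this dual-attention scheme is given below. The embedding width and head count are illustrative assumptions, as is the simplification \(C_{v}=C_{a}=\texttt{dim}\); the speaker axis is folded into the batch axis.

```python
import torch
import torch.nn as nn

class AVAttentionBlock(nn.Module):
    """Sketch of Eqs. (1)-(3): cross-modal attention with residual MHA and MLP."""
    def __init__(self, dim=128, heads=8):
        super().__init__()
        self.mha_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mha_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp_v = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mlp_a = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, e_v, e_a):
        # e_v, e_a: (B*S, T, dim) visual/audio embeddings, speakers folded into batch
        h_v, _ = self.mha_v(e_v, e_a, e_a)    # visual queries attend to audio
        h_v = e_v + h_v                       # residual connection around MHA
        h_v = h_v + self.mlp_v(h_v)           # residual connection around MLP
        h_a, _ = self.mha_a(e_a, e_v, e_v)    # audio queries attend to visual
        h_a = e_a + h_a
        h_a = h_a + self.mlp_a(h_a)
        return torch.cat([h_v, h_a], dim=-1)  # u = Concat(h_v, h_a)
```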
### Long-Short Context Modeling
The long-term intra-speaker and short-term inter-speaker context are important, complementary cues to infer the active speaker. We propose Long-Short Context Modeling (LSCM) to gradually model these two contexts. LSCM contains \(N\) blocks, each of which consists of a Short-term Inter-speaker Modeling (SIM) module and a Long-term Intra-speaker modeling (LIM) module sequentially. LSCM outputs context-aware audio-visual embedding \(u^{N}\in\mathbb{R}^{S\times T\times C}\).
**Short-term Inter-speaker Modeling (SIM).** For a given moment in the video, the speaking activity of a target person can be inferred from the behavior of other people in nearby frames, but the behaviors of other people in distant frames are not useful. This means that the model needs to capture local temporal information instead of long-term dependencies. To do this, we employ a small convolutional network,
\[u^{l}_{s}=\text{LayerNorm}(\text{Conv}_{s\times k}(u^{l})),\tag{4}\]
\[u^{l}_{s}=\text{MLP}(u^{l}_{s})+u^{l},\tag{5}\]
where \(u^{l}\in\mathbb{R}^{S\times T\times C}\) is the output of the previous layer \(l\) and \(u^{l}_{s}\in\mathbb{R}^{S\times T\times C}\) is the output of this layer.
Compared with previous work [30, 4, 32] that either tries to capture speaker interaction in each individual frame or fuses it with long-term modeling, our SIM module with a short temporal receptive field can better learn dynamic patterns in interactions.
**Long-term Intra-speaker Modeling (LIM).** Long-term intra-speaker context models an individual person's behavior over a longer time period. To learn from this long-term information, the model needs to have large capacity and the ability to learn long-term dependencies. We use an attention mechanism to do this, with a Transformer layer,
\[h^{l}=\text{LayerNorm}(\text{SelfAttention}(u^{l}_{s})+u^{l}_{s}),\tag{6}\]
\[u^{l+1}=\text{LayerNorm}(\text{MLP}(h^{l})+h^{l}),\tag{7}\]
where \(u^{l+1}\) is the output, which is passed to the next block.
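Putting SIM and LIM together, one block of the LSCM stack could be sketched in PyTorch as follows. The channel width and head count are assumptions; \(s=3\) and \(k=7\) follow the implementation details given later.

```python
import torch
import torch.nn as nn

class LSCMBlock(nn.Module):
    """Sketch of one SIM (Eqs. 4-5) + LIM (Eqs. 6-7) block."""
    def __init__(self, dim=128, s=3, k=7, heads=8):
        super().__init__()
        # SIM: convolution over the (speaker, time) grid with a short receptive field.
        self.conv = nn.Conv2d(dim, dim, kernel_size=(s, k), padding=(s // 2, k // 2))
        self.norm_s = nn.LayerNorm(dim)
        self.mlp_s = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # LIM: temporal self-attention applied to each speaker independently.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp_t = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, u):                          # u: (B, S, T, C)
        B, S, T, C = u.shape
        x = self.conv(u.permute(0, 3, 1, 2))       # (B, C, S, T)
        x = self.norm_s(x.permute(0, 2, 3, 1))     # Eq. (4), back to (B, S, T, C)
        u_s = self.mlp_s(x) + u                    # Eq. (5)
        h = u_s.reshape(B * S, T, C)               # fold speakers into the batch
        h = self.norm1(self.attn(h, h, h)[0] + h)  # Eq. (6)
        out = self.norm2(self.mlp_t(h) + h)        # Eq. (7)
        return out.reshape(B, S, T, C)
```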
Compared with previous methods [30, 32, 4, 3] that jointly model inter-speaker and intra-speaker context, our disjoint modeling first focuses on short-term inter-speaker context and then focuses on speaker-dependent pattern learning from long-term intra-speaker context, which is easier to optimize and can learn more discriminative features. This process is repeated \(N\) times to further refine and distill more informative features.
### Loss Function
During training, we predict the active speaker utilizing three embeddings: (i) the context-aware audio-visual embedding \(u^{N}\in\mathbb{R}^{S\times T\times C}\), (ii) visual-enhanced audio embedding \(h_{a}\in\mathbb{R}^{S\times T\times C_{a}}\), and (iii) audio-enhanced visual embedding \(h_{v}\in\mathbb{R}^{S\times T\times C_{v}}\). We apply a fully-connected layer followed by a softmax operation to each of these three embeddings respectively. The outputs are the frame-level predictions for each embedding.
Following the training strategy of TalkNet [38] and ASCNet [3], context-aware audio-visual embeddings serve as the major supervision source, and the other two are auxiliary supervision sources. The final loss function is \(L=L_{c}+w_{a}\times L_{a}+w_{v}\times L_{v}\) where \(L_{c},L_{a},L_{v}\) correspond
Figure 3: An illustration of VGGFrame. We apply a deconvolutional layer to upsample output feature of block-4. The output features of block-3 (before max pooling) and deconvolutional layer are concatenated and transformed to per-frame output features of shape \(T\times C_{a}\).
to the standard cross-entropy loss of context-aware audio-visual, audio, and visual embeddings, and \(w_{a}\) and \(w_{v}\) are weights of audio and visual losses.
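A sketch of this weighted loss, assuming binary frame-level labels and pre-computed logits from the three classification heads, with \(w_{a}=w_{v}=0.4\) as stated in the implementation details:

```python
import torch
import torch.nn.functional as F

def asd_loss(logits_c, logits_a, logits_v, labels, w_a=0.4, w_v=0.4):
    """L = L_c + w_a * L_a + w_v * L_v over frame-level predictions."""
    l_c = F.cross_entropy(logits_c, labels)  # context-aware audio-visual head
    l_a = F.cross_entropy(logits_a, labels)  # auxiliary audio head
    l_v = F.cross_entropy(logits_v, labels)  # auxiliary visual head
    return l_c + w_a * l_a + w_v * l_v

# Frame-level logits of shape (num_frames, 2) and binary speaking labels:
n = 16
loss = asd_loss(torch.randn(n, 2), torch.randn(n, 2), torch.randn(n, 2),
                torch.randint(0, 2, (n,)))
```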
### Implementation details
**Speaker sampling.** We set \(S=3\) when sampling context speakers, since most videos contain 3 or fewer speakers. The appearance of the sampled speakers should overlap at least half of the temporal length with the target speaker. If fewer than \(S-1\) candidate speakers are found, the face crops of the target speaker are repeatedly sampled.
**Encoders.** The visual temporal encoder consists of a 3D convolutional layer, a ResNet-18 [20], and a visual temporal convolution network (V-TCN) [31]. V-TCN is composed of 5 blocks of depth-wise separable convolutional layers [12], rectified linear units (ReLU) [2], and batch normalization (BN) layers [23]. Our visual encoder is trained from scratch while the audio encoder VGGFrame is initialized with VGGish [21] weights pretrained on AudioSet [17].
**Training.** We train LoCoNet for 25 epochs on 4 Quadro RTX 6000 GPUs with batch size of 4 using PyTorch [32]. The learning rate is initialized as \(5\times 10^{-5}\) and reduced by 5% for every epoch. The face crops are resized to \(112\times 112\). Visual augmentation includes randomly resized cropping, horizontal flipping, and image rotation. For audio augmentation, another random audio signal is chosen from the rest of the training set, and added as noise to the target audio. We train all our models using Adam optimizer [27]. In the short-term inter-speaker context modeling stage, \(s\) is set to 3, which is the same as the number of context speakers, and \(k\) is set to 7 frames. We set \(w_{a}\) and \(w_{v}\) to 0.4 in the loss function.
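The stated schedule (initial learning rate \(5\times 10^{-5}\), reduced by 5% each epoch, Adam, 25 epochs) can be reproduced with a standard exponential decay; the model below is a stand-in placeholder, not LoCoNet itself.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 2)  # placeholder for the actual LoCoNet network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
# Multiplying the learning rate by 0.95 after each epoch realizes the 5% decay.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

for epoch in range(25):
    # ... one pass over the training set would go here ...
    scheduler.step()
```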
**Inference.** During inference, for each target speaker, we sample \(S-1\) overlapping context speakers and take the prediction from the context-aware embedding \(u^{N}\) as the prediction for the target speaker. We ignore all the other predictions during inference.
## 4 Experimental Setup
### Datasets
**AVA-ActiveSpeaker**[36] is the first large-scale, carefully-labelled audio-visual active speaker detection dataset, created by annotating 262 fifteen-minute videos from Hollywood movies. Of all the videos, 120 are used for training, 33 for validation, and 109 for testing. Videos are recorded at 25-30 fps, which results in a total of 3.65 million frames and 5.3 million face crops. However, since it is largely composed of movies, active speakers in the video clips usually have larger faces with static backgrounds, and there are only 1.6 speakers on average per frame. This makes the dataset less challenging and less realistic.
**Columbia ActiveSpeaker**[10] comes from an 87-minute-long academic panel discussion and has videos of 5 speakers, with 2-3 speakers visible at any time and a total of 150,000 face crops. It is a small-scale and less diverse ASD dataset because its videos have non-overlapping and clear voices, large face sizes, relatively static motion, simple backgrounds, and a small set of speakers. Since Columbia is usually used only as test set, we use the model trained on AVA-ActiveSpeaker [36] to evaluate on Columbia using the F1 metric, following common protocol.
**Talkies [32]** is the first in-the-wild ASD dataset. It contains 23,507 face tracks extracted from 421,997 labeled frames. Compared to the above datasets, Talkies focuses on challenging scenarios with more speakers per frame, more diverse actors and scenes, and more appearances of off-screen speech.
**Ego4D [19]**'s Audio-Visual benchmark is the first egocentric video dataset for ASD. It has 3,670 hours of egocentric videos of unscripted daily activities spanning hundreds of environments, which offers a unique perspective on solving ASD in dynamic scenes. Unlike AVA-ActiveSpeaker and Talkies where faces of speakers are often large and centered, and cameras are static, the wearable camera videos in Ego4D have dynamic backgrounds and the active speakers are often off-center. Ego4D is thus not only massive in scale but also highly challenging.
### Evaluation Metric
Following [3, 38, 30, 4, 32], we use the official ActivityNet [8] evaluation tool to compute mean average precision (mAP) and evaluate on the AVA-ActiveSpeaker validation set [36]. We also compute AUC [7] as another evaluation metric using Sklearn [34]. We use F1 scores to evaluate on the Columbia ASD dataset [10], and mAP to evaluate on Talkies [32] and Ego4D dataset [19].
## 5 Results and Analysis
### Context Modeling
**Long-term Intra-speaker Context.** To verify the importance of long-term modeling, we train LoCoNet by keeping the number of sampled context speakers at 1, and varying the temporal length of input frames from 20 to 200. Based on the results for different temporal lengths in Table 1a, we observe that the network yields the worst result when trained with the shortest video segments of 20 frames (800ms). The performance increases consistently as video segments become longer. When the duration of video clips increases to 100 frames (4s), mAP improves by 5.03%, and it further improves by 0.41% when the number of input frames becomes 200. This shows that sufficient temporal information in long-term video segments is necessary for ASD. Larger numbers
of frames, such as 400, consume too much GPU memory; we found 200 to be a good balance between performance and memory cost.
**Short-term Inter-speaker Context.** We keep the temporal length at 200 frames and vary the number of speakers from 1 to 3 to investigate the importance of short-term inter-speaker modeling. We did not try even larger numbers of speakers because \(>99\%\) of videos in the AVA-ActiveSpeaker dataset [36] have 3 or fewer speakers present in the same frame. From Table 1b, we observe that with more speakers included in training, mAP improves consistently. Since speakers in the same scene are correlated, modeling multiple context speakers in short-term inter-speaker context can contribute to the ASD task.
### Comparison with State-of-the-Art
In this section, we validate our approach on four ASD datasets: the AVA-ActiveSpeaker dataset [36], Columbia dataset [10], Talkies dataset [32] and Ego4D dataset [19], and compare with state-of-the-art methods.
**AVA-ActiveSpeaker dataset.** As shown in Table 2, LoCoNet achieves 95.2% mAP, which outperforms the previous state-of-the-art EASEE [4] by 1.1%. Note that EASEE uses an R3D architecture pretrained on Kinetics-400 [25] as the visual encoder, while our method only uses a 2D ResNet trained from scratch.
**Columbia ASD dataset.** On this dataset, both TalkNet [38] and LoCoNet are trained on AVA-ActiveSpeaker and tested on Columbia dataset. As shown in Table 3, LoCoNet outperforms previous state-of-the-art TalkNet by 22% in average F1 score, which indicates our Long-Short Context Modeling is more robust and generalizable compared with long-term-only context modeling in TalkNet.
**Talkies dataset.** We evaluate LoCoNet, MAAS-TAN [32], TalkNet [38] and EASEE [4] on Talkies under different settings, including (i) trained on AVA-ActiveSpeaker, (ii) trained on Talkies, and (iii) first trained on AVA-ActiveSpeaker and then fine-tuned on Talkies. As shown in Table 4, LoCoNet outperforms the previous best methods by 1.7%, 2.5%, and 2.7% in these three settings, respectively. Besides the long-term temporal information that all three methods use, EASEE models per-frame speaker interaction, and LoCoNet models short-term inter-speaker context. This shows the efficacy of our proposed disjoint Long-Short Context Modeling.
**Ego4D dataset.** We further evaluate our method on the largest egocentric dataset, the Ego4D Audio-Visual benchmark. As shown in Table 5, LoCoNet outperforms the previous state-of-the-art by 0.56% under zero-shot settings. We believe this marginal improvement is because the large domain gap between egocentric and exocentric datasets makes both methods perform poorly. When training on Ego4D, LoCoNet achieves an mAP of 59.69%, outperforming TalkNet by 8.03%. In egocentric videos, visual cues of the target speaker normally have jittery backgrounds and are less clear than in exocentric videos. This big improvement indicates that inter-speaker context modeling, which TalkNet misses, is quite useful for this kind of video.
### Challenging scenario evaluation
**Quantitative analysis.** We further validate the efficacy of our proposed Long-Short Context Modeling by analyzing the difficult cases: (i) small faces in the video, and (ii) many people in the same scene.
In the first case, following the separation rules used by some ASD methods [3, 4, 38], we divide face tracks into three categories. **(i) Small**: faces with a width less than 64
| # Frames | 20 | 100 | 200 |
| :-- | :--: | :--: | :--: |
| mAP (%) | 87.79 | 92.82 | 93.23 |

Table 1: Performance comparison by temporal lengths and number of speakers in Long-Short Context Modeling on AVA-ActiveSpeaker dataset.
| **Method** | Encoder (V) | Encoder (A) | **E2E** | **mAP** (%) | **AUC** (%) |
| :-- | :-- | :-- | :--: | :--: | :--: |
| ASCNet [3] | R-18 [20] | R-18 | ✗ | 87.1 | - |
| MAAS [32] | R-18 | R-18 | ✗ | 88.8 | - |
| Unicon [47] | R-18 | R-18 | ✗ | 92.2 | 97.0 |
| TalkNet [38] | R-18+VTCN | R-34 | ✓ | 92.3 | 96.8 |
| ASDNet [30] | ResNext-101 | R-18 | ✗ | 93.5 | - |
| EASEE [4] | 3D R-50 | R-18 | ✓ | 94.1 | - |
| **LoCoNet** | R-18+VTCN | VGGFrame | ✓ | **95.2** | **98.0** |

Table 2: Comparison with the state-of-the-art on the AVA-ActiveSpeaker validation set with mAP and Area Under the Curve (AUC). E2E is short for end-to-end.
| **Method** | Bell | Boll | Lieb | Long | Sick | **Avg** |
| :-- | :--: | :--: | :--: | :--: | :--: | :--: |
| TalkNet [38] | 32.14 | **59.25** | 56.43 | 30.53 | 52.14 | 46.10 |
| **LoCoNet** | **54.01** | 49.08 | **80.22** | **80.35** | **76.77** | **68.08** |

Table 3: Performance comparison on Columbia Active Speaker Detection Dataset with F1 metric. Both TalkNet and LoCoNet are trained on AVA-ActiveSpeaker dataset.
pixels; **(ii) Medium**: faces with a width between 64 pixels and 128 pixels; **(iii) Large**: faces with width larger than 128 pixels. In the second case, we divide the face tracks by 1, 2 or 3 speakers present in the same frame.
The results are shown in Table 6 and Table 7 respectively. The proposed LoCoNet performs consistently the best across all scenarios. The improvement of LoCoNet over other methods becomes more obvious when the number of faces is 3 or face size is small and our approach outperforms the second best by about 3.5% in both cases. This further verifies the importance of short-term inter-speaker modeling: even when the active speaker is not salient in the frame or there are many speakers, our method still effectively models the interactions of context speakers and thus infers speaking activity of the target person.
**Qualitative analysis.** Fig 4 shows some results of LoCoNet on AVA-ActiveSpeaker [36] along with TalkNet [38]'s results and groundtruth labels. The left three columns show four visible speakers talking. This is challenging because it is hard to distinguish the active speaker when more than one conversation is happening. LoCoNet successfully locates the active speaker in these complicated conversations while TalkNet fails to do so. The right three columns present a video clip where, in the first two frames, the active speaker is actually the woman with a very small visible face in the background, and the man with a large visible face is not speaking. TalkNet does not locate the active speaker in this case while LoCoNet does. By utilizing both long-term intra-speaker context for speaking pattern recognition and short-term inter-speaker context for conversation pattern recognition, our approach better overcomes challenging speaking scenarios. However, in the last frame, both methods fail to recognize the active speaker at the back with a very small visible face. This scenario is especially challenging since the two active speakers are in separate conversations, so it is hard to infer the speaking activity of the speaker at the back from context speakers.
### Design Choices
**Audio Encoder.** We evaluate the performance of different audio backbones that take Mel-spectrogram of the audio signal as input. 2D ResNet [20] is widely used by ASD methods [3, 4, 30, 32, 38, 47]. We compare ResNet-18 [20]
| Method | Small (18%) | Medium (30%) | Large (52%) |
| :-- | :--: | :--: | :--: |
| ASCNet [3] | 56.2 | 79.0 | 92.2 |
| MAAS [32] | 55.2 | 79.4 | 93.0 |
| TalkNet [38] | 63.7 | 85.9 | 95.3 |
| ASDNet [30] | 74.3 | 89.8 | 96.3 |
| EASEE [4]\({}^{a}\) | 79.3 | 93.2 | 97.7 |
| **LoCoNet** | **77.8** | **93.0** | **97.3** |

\({}^{a}\) Results of EASEE on face size should be erroneous, considering the average mAP of EASEE is actually 1.1% lower than the proposed LoCoNet in Table 2.

Table 6: Performance comparison with mAP regarding different face sizes. The percentage of each scenario is shown in the header.
| **Method** | AVA | Talkies | **mAP** (%) |
| :-- | :--: | :--: | :--: |
| MAAS-TAN [32] | ✓ | ✗ | 79.1 |
| TalkNet [38] | ✓ | ✗ | 86.5 |
| EASEE [4] | ✓ | ✗ | 86.7 |
| **LoCoNet** | ✓ | ✗ | **88.4** |
| TalkNet [38] | ✗ | ✓ | 93.2 |
| EASEE [4] | ✗ | ✓ | 93.6 |
| **LoCoNet** | ✗ | ✓ | **96.1** |
| TalkNet [38] | ✓ | ✓ | 94.4 |
| EASEE [4] | ✓ | ✓ | 94.5 |
| **LoCoNet** | ✓ | ✓ | **97.2** |

Table 4: Performance comparison on Talkies dataset under different training settings: train on AVA-ActiveSpeaker dataset alone, train on Talkies dataset alone, or train on AVA-ActiveSpeaker dataset and finetune on Talkies dataset.
| Method | 1 face (45%) | 2 faces (33%) | 3 faces (11%) |
| :-- | :--: | :--: | :--: |
| ASCNet [3] | 91.8 | 83.8 | 67.6 |
| MAAS [32] | 93.3 | 85.8 | 68.2 |
| TalkNet [38] | 95.4 | 89.6 | 80.3 |
| ASDNet [30] | 95.7 | 92.4 | 83.7 |
| EASEE [4]\({}^{a}\) | 96.5 | 92.4 | 83.9 |
| **LoCoNet** | **97.0** | **94.6** | **87.3** |

Table 7: Performance comparison with mAP regarding different numbers of visible faces. The percentage of each scenario is shown in the header.
Table 5: Performance comparison on Ego4D dataset between TalkNet and LoCoNet with mAP. Two different training settings are tried: train on AVA-ActiveSpeaker dataset, or train on Ego4D dataset.
pretrained on ImageNet [13] with our VGGFrame pretrained on AudioSet [17]. The results are shown in Table 8. Changing the audio backbone from 2D ResNet to VGGFrame brings a performance increase of about 1% mAP. This shows that using an audio encoder pretrained on audio datasets can be more effective than a common vision encoder.
**Context Modeling.** We investigate various designs to model long-term and short-term speaker-temporal context: 1) **temporal modeling only**[38] where a temporal self-attention module is applied to learn the long-term temporal relations. 2) **Non-local global speaker-temporal modeling**[42] captures pairwise relationships between all the speakers across all the temporal frames. 3) **Divided speaker-temporal modeling**[6] contains temporal modeling of each individual speaker, and captures all speakers within each single frame. 4) **Proposed LSCM** where long-term intra-speaker modeling leverages a Transformer layer to learn the common cues that indicate speaking activity, and a small convolutional network for short-term inter-speaker modeling to learn the interaction of context speakers.
The evaluation results are shown in Table 9. If only temporal modeling is applied, the network achieves 94.23% mAP, which is lower than almost all variants with speaker-temporal context. LoCoNet with non-local global speaker-temporal modeling applied performs slightly worse. Non-local global attention models the pairwise relations of all temporal frames and all speakers, but most of these pairs are weakly or not correlated, hindering performance while increasing memory use. Divided speaker-temporal modeling improves mAP by about 0.52%. Speakers in the same frame are correlated but since each frame is considered separately, it cannot learn dynamic information about conversation. Our proposed context modeling gets the best mAP of 95.2%, and contributes an additional 1% in mAP compared to the network that only uses temporal modeling. We also tried replacing the convolutional network with local self-attention in modeling short-term inter-speaker context, and the result is similar to that of LSCM. This further shows the effectiveness of the combination of long-term intra-speaker modeling and short-term inter-speaker modeling.
## 6 Conclusion
In this work, we observe that speaking activity can be more effectively inferred from long-term intra-speaker context and short-term inter-speaker context. We thus design an end-to-end long-short context ASD framework that utilizes an attention mechanism to model long-term intra-speaker context and a convolutional network to model short-term inter-speaker context. With a simpler backbone network, our method achieves state-of-the-art performance on 4 mainstream ASD benchmarks and significantly outperforms the previous state-of-the-art by 22% on the Columbia dataset and 8.0% on the Ego4D dataset. We also show that in challenging scenarios, such as when multiple speakers are present in the same scene or speakers have small faces, our proposed method outperforms previous methods significantly. All of these show the
Figure 4: Results comparison of LoCoNet and TalkNet on challenging scenarios of the AVA-ActiveSpeaker validation dataset. Red boxes denote inactive speakers, and green boxes denote active speakers. The left three columns show a multi-person scene that includes a conversation of four people and a separate conversation of two. The right three columns are scenes where the active speaker has a small face size.
| Audio Backbone | mAP (%) |
| :-- | :--: |
| 2D ResNet [20] | 94.2 |
| VGGFrame | **95.2** |

Table 8: Comparison of LoCoNet with different audio encoders on the AVA-ActiveSpeaker validation set using mAP.
| Design | mAP (%) |
| :-- | :--: |
| Temporal-only | 94.23 |
| Non-local global speaker-temporal modeling | 94.20 |
| Divided speaker-temporal modeling | 94.75 |
| **LSCM** | **95.18** |

Table 9: Performance evaluation of various context modeling designs.
robustness and effectiveness of our method. Future explorations include generalizing our model to both egocentric and exocentric videos to make it more practical for real-world applications.
|
2302.11236 | Multi-objective optimization of energy consumption and execution time in
a single level cache memory for embedded systems | Current embedded systems are specifically designed to run multimedia
applications. These applications have a big impact on both performance and
energy consumption. Both metrics can be optimized selecting the best cache
configuration for a target set of applications. Multi-objective optimization
may help to minimize both conflicting metrics in an independent manner. In this
work, we propose an optimization method that based on Multi-Objective
Evolutionary Algorithms, is able to find the best cache configuration for a
given set of applications. To evaluate the goodness of candidate solutions, the
execution of the optimization algorithm is combined with a static profiling
methodology using several well-known simulation tools. Results show that our
optimization framework is able to obtain an optimized cache for Mediabench
applications. Compared to a baseline cache memory, our design method reaches an
average improvement of 64.43\% and 91.69\% in execution time and energy
consumption, respectively. | Josefa Díaz Álvarez, José L. Risco-Martín, J. Manuel Colmenar | 2023-02-22T09:35:03Z | http://arxiv.org/abs/2302.11236v1 | Multi-objective Optimization of Energy Consumption and Execution Time in a Single Level Cache Memory for Embedded Systems
###### Abstract
Current embedded systems are specifically designed to run multimedia applications. These applications have a big impact on both performance and energy consumption. Both metrics can be optimized selecting the best cache configuration for a target set of applications. Multi-objective optimization may help to minimize both conflicting metrics in an independent manner. In this work, we propose an optimization method that based on Multi-Objective Evolutionary Algorithms, is able to find the best cache configuration for a given set of applications. To evaluate the goodness of candidate solutions, the execution of the optimization algorithm is combined with a static profiling methodology using several well-known simulation tools. Results show that our optimization framework is able to obtain an optimized cache for Mediabench applications. Compared to a baseline cache memory, our design method reaches an average improvement of 64.43% and 91.69% in execution time and energy consumption, respectively.
keywords: Cache memory, Energy, Performance, Multi-objective optimization, Evolutionary Computation
## 1 Introduction
Multimedia embedded systems like digital cameras, audio and video players, smartphones, etc., are one of the major driving forces in technology. Currently, they have less powerful resources than desktop systems, but these systems must run multimedia software (video, audio, gaming, etc.). These applications require high performance and consume much energy, which reduces the battery lifetime. The battery in embedded systems is limited in capacity and size because of design constraints. Hence, embedded systems
designers must be very concerned with both increasing performance and reducing energy consumption, which in turn also affects the lifetime of the device.
In recent years, a number of scientific papers have been published indicating that the memory subsystem acts as an energy bottleneck of the system [1]. In fact, cache memory behavior affects both performance and energy consumption. The best cache configuration gives us the minimum execution time and the lowest energy consumption. Total cache and block sizes, associativity, and algorithms for search, prefetch and replacement, or write policies are some of the parameters that form a cache configuration. Finding optimal values for these parameters will guide us to the best performance and energy consumption. A cache configuration that is optimal for one single application may be a bad choice for other applications with different memory access patterns [2]. Thus, we tackle the problem of finding the optimal cache configuration for all the applications executed in an embedded device, which will improve performance and energy consumption.
Energy optimization directly affects aging of transistors, which is a limiting factor for long term reliability of devices. In a common context where the lifetime of a device is determined by the earliest failing component, the aging impact is more serious on memory arrays, where failure of a single SRAM cell would cause the failure of the whole system. Previous works have shown that saving energy in the memory subsystem can effectively control aging effects and can extend lifetime of the cache significantly [3], [4]. Our approach, which optimizes performance and energy, is also indirectly improving the long term reliability of the target device.
A first brute-force approach to obtain the best cache configuration would require the execution and evaluation of time and energy for all available cache configurations and target applications, which is an unmanageable task given currently reduced time-to-market windows. In addition, execution time and energy consumption are conflicting objectives in practice. For example, if associativity is increased, the number of misses is reduced, as well as the execution time. However, a high associativity increases the hardware complexity and thus the energy consumed by the cache memory [2]. Therefore, we present in this paper a new methodology to evaluate cache configurations in order to customize cache designs with the aim of reducing both the execution time and the energy consumption by means of multi-objective optimization [5]. In particular, our optimization framework is built around the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [6]. In order to evaluate our approach, we have automatically designed caches optimized for a set of multimedia applications taken from the Mediabench benchmarks [7], since they are representative of image, audio and video processing. Our hardware architecture is based on the ARM920T processor [8], broadly used in multimedia embedded devices.
The rest of the paper is organized as follows. Next Section summarizes the related work on the topic. Section 3 describes the design of the search space for our multi-objective optimization. Section 4 shows details of our multi-objective function, describing both performance and energy models. Our optimization framework is integrated and explained in Section 5. Then, Section 6 analyzes our experimental results. In Section 7, we present our conclusions based on the results obtained, and explain the main lines of our future work.
## 2 Related work
The optimization of performance and energy consumption in the memory subsystem have received a lot of attention in the last decade. Regarding performance, multiple research works have been developed with the aim of improving performance through changing architectural parameters. With respect to energy, previous studies have demonstrated that half of the energy consumption in embedded systems is due to the cache memory [1]. The optimization of these parameters has been conducted mainly using two different techniques: dynamic reconfiguration and static profiling.
Regarding dynamic reconfiguration, Givargis [9] improved cache performance by choosing a variable set of bits used as index into the cache. Zhang minimized the energy consumption by introducing a new cache design method called way concatenation to reconfigure the cache by software [10]. However, this approach provided a limited number of cache configurations, allowing the system engineer to optimize associativity (one-way, two-way or four-way), cache size and line size. Chen and Zou [11] proposed an efficient reconfiguration management algorithm to optimize three parameters: cache size, line size and associativity. Similarly, Gordon-Ross and Frank Vahid [12] presented a dynamic tuning methodology to optimize cache sizes (2, 4, or 8 KBytes), line sizes (16, 32, or 64 bytes), and associativity (1-way, 2-way or 4-way). Lopez et al. [13] proposed an on-line algorithm on _Simultaneous Multithreading (SMT)_ to decide the cache configuration after a fixed set of instructions, a technique based on a cache working-set adaptation [14]. Dynamic reconfiguration in soft real-time embedded systems on a single-level cache hierarchy was proposed by Wang et al. in [15], and on a multi-level cache hierarchy in [16]. More recently, Wang et al. [17] minimized energy consumption in real-time embedded systems by performing dynamic analysis during runtime. All these approaches optimize cache size (1, 2 or 4KB), line size (16, 32 or 64 bytes) and associativity (1-way, 2-way or 4-way). The main drawback of dynamic reconfiguration is the addition of extra complexity in the design of the memory subsystem. We also see that these approaches only optimize a small number of cache parameters, minimizing either execution time or energy consumption. In addition, it is shown in this work that an offline multi-objective optimization may find optimal cache parameter values without adding hardware complexity to the standard memory subsystem design.
With respect to the use of static profiling, Reddy [18] studied the effect of multiprogramming workloads on the data cache in a preemptive multitasking environment, and proposed a technique that, by mapping tasks to different cache partitions, significantly reduced both dynamic and leakage power. Our approach is different, since we try to obtain the behavior of a target set of applications, obtaining their full static profile and the best memory cache configuration (i.e., size, associativity, and replacement and prefetching algorithms for both data and instruction caches) for the whole set. Andrade et al. presented in [19] an extension of a systematic analytical modeling technique based on probabilistic miss equations, allowing the automated analysis of the cache behavior for codes with irregular access patterns resulting from indirections. Nevertheless, these models can only optimize cache size and associativity. Feng et al. [20] applied a new cache replacement policy to perform the replacement decision based on the reuse information of the cache lines and the requested data, developing two reuse information predictors: a profile-based static predictor and a runtime predictor. Similarly, Xingyan and Hongyan [21], based on a profiling scheme of the OPT cache replacement, presented a method to generate best static cache hints. However, these approaches only improve the replacement algorithm. Gordon-Ross et al. [22] studied the interaction of code reordering and cache configuration, obtaining excellent results. However, this technique is applied to the instruction cache, and our systematic optimization method is applied to the full configuration of both the instruction and data caches.
Additionally, all the aforementioned approaches minimized either execution time or energy consumption. We propose the use of multi-objective optimization to simultaneously minimize both objectives. To this end, we use the concept of multi-objective optimization, which can be easily applied in evolutionary computation. Evolutionary computation and multi-objective optimization are being widely used in _Computer Aided Design (CAD)_ problems. Close to cache optimization, Risco et al. [23] applied a novel parallel multi-objective evolutionary algorithm to optimize desktop applications for their use in multimedia embedded systems, improving performance, memory usage and energy consumption of the memory subsystem. In [24] a simple online _Genetic Algorithm (GA)_ was used to obtain the best cache associativity to improve the performance of SMT processors. In this line, Bui et al. [25] proposed a solution for the cache interference problem applying cache partitioning techniques using a simple GA whose solution sets the size of each cache partition and assigns tasks to partitions such that system worst-case utilization is minimized thus increasing real-time schedulability. An approach based on NSGA-II algorithm was used in [26] to evaluate cache configurations
on a second cache-level in order to improve energy consumption and performance, optimizing cache size, line size and associativity. However, none of these approaches is able to simultaneously optimize cache performance and energy consumption for a target set of applications as our methodology performs.
To the best of our knowledge none of the previous works tackle the optimization of all the parameters that we propose in this research work. Most of the cited papers focus their space exploration on cache size, line size and associativity, even though the set of possible values for each configurable parameter is quite small. In this work, we optimize the following cache parameters: cache size, line size, associativity, replacement policy, prefetch policy and write policy. We also consider first-level (L1) instruction/data caches, although the methodology proposed can be applied to other cache types. All the aforementioned configurable parameters form the chromosome in the _Multi-Objective Evolutionary Algorithm (MOEA)_ proposed. The aim is to find the best cache configurations that minimize memory access time (performance) and energy consumption. As we try to minimize two conflicting objectives, multi-objective optimization is suitable to address this problem. Our approach is valid on embedded systems, where the small number of applications allows the engineer to select one cache design among all the optimizations performed, as we show in this work.
## 3 Design of the search space representation
Designing a cache memory implies the configuration of the set of parameters that define it: cache size, line size, replacement algorithm, associativity and prefetch algorithm for both the instruction and data caches, and also write policy for data cache. Figure 1 shows these parameters and the possible candidate values that we consider in our research for the instruction cache (labeled as I-Cache) and data cache (labeled as D-Cache). These possible values, most of them illustrated in Figure 1, are described below:
* Cache size: memory cache capacity in bytes. We consider a fixed cache size of 16 KB, which is the default size of the ARM920T cache [8], our target system.
* Line size: cache memory is divided into lines (blocks). When a miss takes place, a whole line is moved from main memory to cache memory. Possible values for this parameter are 8, 16, 32 and 64 bytes.
* Cache replacement algorithm: set of techniques designed to replace blocks. Algorithms selected to evaluate are: _Last Recently Used (LRU)_, _First Input First Output (FIFO)_ and RANDOM algorithms.
* Associativity: the degree of associativity refers to the number of places in a cache where a block can be located. It is defined by the number of ways. In this work we deal with 4, 8, 16, 32 and 64 ways.
* Prefetch algorithm: determines the policy to carry blocks to the cache memory. We consider three of them: (1) when a cache miss occurs (MISS-PREFETCH), (2) only when the data is required (ON-DEMAND), and (3) when data from a block is referenced, the following block is also prefetched (ALWAYS-PREFETCH).
* Write policy: it is designed to keep consistency between cache and main memory when data is modified in cache memory. Data stored in the L1 cache can be written to main memory only when absolutely needed, with a COPY-BACK policy, or may be written to cache and main memory simultaneously, with a WRITE-THROUGH policy.
Hence, the size of the search space is 64800 cache configurations for each cache size.
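This count follows directly from the candidate values above: four line sizes, three replacement algorithms, five associativity degrees and three prefetch policies per cache, plus two write policies for the data cache:

\[\underbrace{(4\times 3\times 5\times 3)}_{\text{I-cache}}\times\underbrace{(4\times 3\times 5\times 3\times 2)}_{\text{D-cache}}=180\times 360=64800.\]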
Figure 1: Taxonomy for a cache configuration. Both instruction and data caches, labeled as I-Cache and D-Cache, must be customized with available values.
Thereby, a deterministic technique can take a huge amount of time to find an optimal solution (more than four months, as shown in Section 6), since each configuration must be evaluated with a program trace. In this regard, heuristic techniques fit well in solving the multi-objective optimization problem, especially when a set of conflicting design objectives must be minimized. MOEAs usually provide good results in a multi-objective environment. In this context, a set of candidate solutions called individuals evolves, improving a multi-objective fitness function (an individual is formed by a chromosome plus the associated value of the multi-objective function).
According to this choice, the encoding of different parameters values is necessary for the suitable development of the selected technique. An appropriate coding of the chromosome is essential to achieve the optimal solution and this will depend on the kind of problem to solve. Our approach works with the first-level cache and both instruction and data cache can be customized with eligible values for each parameter. Thus, a possible solution (individual) is defined as a specific cache configuration for I-cache and D-cache. Individual genes are then related to possible values of cache configuration parameters. Therefore, a chromosome is defined by the sequence of parameters for a specific cache configuration, coded as integer values. A chromosome, in our approach, looks like the one depicted in Figure 2.
Thus, the chromosome applies an encoding scheme where each gene is an integer value that is mapped to the alphanumeric symbols (e.g. 8, 16, 32 for Line Size and LRU, FIFO, RANDOM for Replacement Policy) defined in Figure 1, considering values from left to right, and starting with 0.
As an example, let us consider the chromosome in Figure 3. The first and fifth genes "1" and "0" are the line sizes for the I-cache and D-cache, 16 and 8 bytes, respectively (following Figure 1). Next, the second and sixth genes "0" and "2" correspond to the degree of associativity, 4 and 16 ways. The third and seventh genes "1" and "0" correspond to the replacement algorithm, and are mapped to the FIFO and LRU algorithms. The fourth and eighth genes "2" and "0" are the MISS-PREFETCH and ON-DEMAND prefetch algorithms. Finally, the ninth gene "0" is mapped to the copy-back policy. The fully decoded genome is shown in Figure 4.
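A minimal Python sketch of this decoding is shown below. The value tables mirror the candidate lists of Section 3; the index order of the prefetch values is an assumption chosen so that the sketch reproduces the worked example above.

```python
# Candidate values per gene, following the lists in Section 3.
LINE_SIZE   = [8, 16, 32, 64]                                    # bytes
ASSOC       = [4, 8, 16, 32, 64]                                 # ways
REPLACEMENT = ["LRU", "FIFO", "RANDOM"]
PREFETCH    = ["ON-DEMAND", "ALWAYS-PREFETCH", "MISS-PREFETCH"]  # assumed order
WRITE       = ["COPY-BACK", "WRITE-THROUGH"]

def decode(genome):
    """Map a 9-gene integer chromosome to (I-cache, D-cache) settings."""
    icache = {"line": LINE_SIZE[genome[0]], "ways": ASSOC[genome[1]],
              "repl": REPLACEMENT[genome[2]], "prefetch": PREFETCH[genome[3]]}
    dcache = {"line": LINE_SIZE[genome[4]], "ways": ASSOC[genome[5]],
              "repl": REPLACEMENT[genome[6]], "prefetch": PREFETCH[genome[7]],
              "write": WRITE[genome[8]]}
    return icache, dcache

icache, dcache = decode([1, 0, 1, 2, 0, 2, 0, 0, 0])  # the genome of Figure 3
```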
Figure 2: Chromosome Specification.
## 4 Multi-objective function
Our approach defines a decision variable \(\mathbf{x}\in\mathbf{X}\) in a multi-objective optimization context (see Appendix A). Variable \(\mathbf{x}\) defines a set of cache parameters values that represent a cache configuration to evaluate. The evaluation process consists of calculating the multi-objective function \(\mathbf{f}\) as the execution time and the energy consumption, both of them related to cache memory operations. Therefore, the best cache configuration will correspond with low values of execution time and energy consumption. As stated above, both are conflicting objectives.
In order to evaluate different cache configurations we have applied energy and performance models based on [27]. So, the design of the embedded system architecture consists of a processor with one cache level with an instruction cache, a data cache and embedded DRAM as main memory. Both instruction and data caches are 16 Kbytes in size according to the default characteristics of the ARM920T processor [8], our target platform. The instruction cache is read-only. Main memory is 64 MB in size according to datasheets of devices like Car GPS HS-3502, for example.
Figure 4: Decoded individual’s genome.
Figure 3: A chromosome or individual’s genome.
### First objective: performance model
The equation used to calculate execution time is described below. Execution time is computed according to the time needed to serve accesses and misses in the cache memory system.
\[\begin{split}execTime&=I_{access}\times I_{access\_time}\\&+I_{miss}\times DRAM_{access\_time}\\&+I_{miss}\times I_{line\_size}\times\frac{1}{DRAM\_bw}\\&+D_{access}\times D_{access\_time}\\&+D_{miss}\times DRAM_{access\_time}\\&+D_{miss}\times D_{line\_size}\times\frac{1}{DRAM\_bw}\end{split}\tag{1}\]
* \(I_{access}\) and \(D_{access}\) are the number of cache memory accesses to the instruction and data cache, respectively.
* \(I_{miss}\) and \(D_{miss}\) are the number of cache misses (when the data searched is not found in the cache memory and must be copied from the main memory).
* \(I_{access\_time}\) and \(D_{access\_time}\) represent the access time to the instruction and data cache respectively per access.
* \(DRAM_{access\_time}\) is the main memory latency time.
* \(I_{line\_size}\) and \(D_{line\_size}\) correspond to the line size (or block size) of the instruction and data caches, respectively.
* \(DRAM\_bw\) is the bandwidth of the DRAM (transfer capacity).
This equation has six well-defined parts, as detailed in [27]. Therefore, \(I_{access}\times I_{access\_time}\) is the total time due to instruction cache accesses. \(I_{miss}\times DRAM_{access\_time}\) is the total time spent by main memory accesses in response to instruction cache misses. \(I_{miss}\times I_{line\_size}\times\frac{1}{DRAM\_bw}\) represents the total time needed to fill a cache line for each cache miss on the instruction cache. \(D_{access}\times D_{access\_time}\) is the total time due to data cache accesses. \(D_{miss}\times DRAM_{access\_time}\) is the total time spent by main memory accesses in response to data cache misses and \(D_{miss}\times D_{line\_size}\times\frac{1}{DRAM\_bw}\) represents the total time needed to fill a cache line for each cache miss on the data cache.
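For reference, Equation (1) translates directly into code. The dictionaries below are assumed containers: `stats` holds the trace-derived access and miss counters, `hw` holds the characterization constants, and `cfg` holds the line sizes of the evaluated configuration.

```python
def exec_time(cfg, stats, hw):
    """Equation (1): memory-related execution time in seconds."""
    return (stats["i_access"] * hw["i_access_time"]                  # I-cache accesses
            + stats["i_miss"] * hw["dram_access_time"]               # DRAM latency, I-misses
            + stats["i_miss"] * cfg["i_line_size"] / hw["dram_bw"]   # I-cache line fills
            + stats["d_access"] * hw["d_access_time"]                # D-cache accesses
            + stats["d_miss"] * hw["dram_access_time"]               # DRAM latency, D-misses
            + stats["d_miss"] * cfg["d_line_size"] / hw["dram_bw"])  # D-cache line fills
```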
### Second objective: energy model
The energy model is given by the following equation:
\[\begin{split}Energy&=execTime\times CPU_{power}\\&+I_{access}\times I_{access\_energy}\\&+D_{access}\times D_{access\_energy}\\&+I_{miss}\times I_{access\_energy}\times I_{line\_size}\\&+D_{miss}\times D_{access\_energy}\times D_{line\_size}\\&+I_{miss}\times DRAM_{access\_power}\times\left(DRAM_{access\_time}+I_{line\_size}\times\frac{1}{DRAM\_bw}\right)\\&+D_{miss}\times DRAM_{access\_power}\times\left(DRAM_{access\_time}+D_{line\_size}\times\frac{1}{DRAM\_bw}\right)\end{split}\tag{2}\]
where the variables not described in Section 4.1 are:
* \(DRAM_{access\_power}\) is the power consumption for each DRAM access.
* \(I_{access\_energy}\) and \(D_{access\_energy}\) correspond to energy consumption in each instruction and data cache access, respectively.
The \(I_{access}\times I_{access\_energy}\) and \(D_{access}\times D_{access\_energy}\) terms calculate the energy consumption due to instruction and data cache accesses, respectively. \(I_{miss}\times I_{access\_energy}\times I_{line\_size}\) and \(D_{miss}\times D_{access\_energy}\times D_{line\_size}\) are the energy cost of filling information into the instruction and data caches, respectively, from main memory when a miss occurs. The last two terms calculate the energy cost of the DRAM to respond to cache misses.
In our approach we remove the first term of the Energy equation, \(execTime\times CPU_{power}\), for three reasons: (1) the term \(CPU_{power}\) is constant and the term \(execTime\) is already being minimized in the first objective, (2) it represents the amount of energy consumed by the CPU and we are optimizing just the performance and energy consumed by the memory subsystem, and (3) in a multi-objective optimization all the objectives must be as orthogonal as possible, i.e., the term \(execTime\) is redundant. Thus, our second objective is reduced to:
\[\begin{split}Energy&=I_{access}\times I_{access\_energy}\\&+D_{access}\times D_{access\_energy}\\&+I_{miss}\times I_{access\_energy}\times I_{line\_size}\\&+D_{miss}\times D_{access\_energy}\times D_{line\_size}\\&+I_{miss}\times DRAM_{access\_power}\times\left(DRAM_{access\_time}+I_{line\_size}\times\frac{1}{DRAM\_bw}\right)\\&+D_{miss}\times DRAM_{access\_power}\times\left(DRAM_{access\_time}+D_{line\_size}\times\frac{1}{DRAM\_bw}\right)\end{split}\tag{3}\]
All the equations use seconds for time, watts for power, Joules for energy, bytes for cache line size and bytes/sec for bandwidth.
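Using the same assumed containers as in the execution-time sketch above, Equation (3) becomes:

```python
def energy(cfg, stats, hw):
    """Equation (3): memory-subsystem energy in Joules (CPU term removed)."""
    i_dram = hw["dram_access_time"] + cfg["i_line_size"] / hw["dram_bw"]
    d_dram = hw["dram_access_time"] + cfg["d_line_size"] / hw["dram_bw"]
    return (stats["i_access"] * hw["i_access_energy"]                       # I accesses
            + stats["d_access"] * hw["d_access_energy"]                     # D accesses
            + stats["i_miss"] * hw["i_access_energy"] * cfg["i_line_size"]  # I line fills
            + stats["d_miss"] * hw["d_access_energy"] * cfg["d_line_size"]  # D line fills
            + stats["i_miss"] * hw["dram_access_power"] * i_dram            # DRAM, I-misses
            + stats["d_miss"] * hw["dram_access_power"] * d_dram)           # DRAM, D-misses
```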
Our algorithm evolves to minimize execution time and/or energy consumption. After a given number of generations the algorithm returns a Pareto front (an approximation to the Pareto-optimal front) that represents the best set of configurations to apply to the cache memory. The higher the number of generations, the better the quality of the resulting cache configurations.
## 5 Optimization framework
In this section we describe the framework used to optimize cache memories for multimedia embedded systems. As mentioned above, this work proposes an approach to determine the best cache configurations for a given set of applications. The best cache configurations are those which take less execution time and less energy consumption. Figure 5 depicts all the steps needed to carry out the optimization process.
We have divided our optimization process into three different phases, labeled in Figure 5. Firstly, two processes are executed just once before the optimization (labeled as 1 and 2). Next, the optimization is performed (labeled as 3), using as input the results of the previous two phases. We have extracted the first two off-line phases from the optimization phase to save execution time. The first phase is performed in one hour, whereas the second phase can be completed in four hours. Using this pre-characterization policy saves months of computation in the optimization process (more details on execution time are provided in Section 6). In the following, we describe these three phases in more depth.
The first phase is _cache characterization_. The characterization of the DRAM and cache memory is performed using Cacti [28] to compute access times and energy. Cacti is a widely used analytical model to estimate energy and power consumption, performance and area of caches. The characterization is performed off-line. Basic inputs required by Cacti are cache size, line size and associativity. Since the cache size is fixed, and line size and degree of associativity have 4 possible values, the number of possible cache characterizations is 16. After this phase, all the parameters needed in the objective function (equations 1 and 3) are available.
The second phase is _application profiling_. All the target applications are simulated with Trimaran and all cache memory accesses are compiled and saved in program traces. Trimaran is an integrated compilation and performance monitoring infrastructure which provides enough resources to obtain application traces with accuracy. Trimaran customizes ARM processors through simpleScalar [29], an architectural simulator that can model a large set of different architectures. The processing time required to perform this phase depends on the number of target applications and the number of instructions to simulate.
The third phase is _cache optimization_. This phase must be repeated for each target application and is carried out by the NSGA-II algorithm implemented in the JECO library [30]. NSGA-II evaluates every candidate solution calling Dinero IV, which is a trace-driven cache simulator [31]. Dinero IV receives a cache configuration from NSGA-II and returns the number of cache hits and misses for the corresponding target application trace. These data, and the parameters obtained in the first phase are then included in the
Figure 5: Three processes are involved in the cache configuration optimization: (1) cache characterization, (2) application profiling and (3) cache optimization.
multi-objective function to compute both the execution time and energy consumed.
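Conceptually, the fitness evaluation inside NSGA-II reduces to the following sketch; `simulate` is a hypothetical wrapper around the Dinero IV invocation (the actual flags must be taken from the simulator's manual), and `decode`, `exec_time` and `energy` are the sketches from the previous sections.

```python
def evaluate(genome, trace_file, hw, simulate):
    """Bi-objective fitness: (execution time, energy), both to be minimized."""
    icache, dcache = decode(genome)
    cfg = {"i_line_size": icache["line"], "d_line_size": dcache["line"]}
    # `simulate` runs Dinero IV on the application trace and returns the
    # access/miss counters for the decoded configuration (hypothetical API).
    stats = simulate(icache, dcache, trace_file)
    return exec_time(cfg, stats, hw), energy(cfg, stats, hw)
```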
We have selected NSGA-II as the multi-objective optimization algorithm because, according to a recent survey published in [32], the current de facto standard evolutionary algorithm for multi-objective optimization is NSGA-II. This survey states that NSGA-II was used as a single algorithm in 53% of the examined papers, positioning the algorithm as one of the most widely used MOEAs, and obtaining very competitive results. Since our aim is to provide a technique to automatically design optimized cache memories, and not to find the best optimization algorithm, we propose the use of this one.
For NSGA-II, we have used single point crossover and integer flip mutation operators. The single point crossover is illustrated in Figure 6, where a random point is selected in the chromosome and used to generate two children. Similarly, the integer flip mutation is depicted in Figure 7. A random integer is generated for all those genes that must be mutated (according to the mutation probability), always constrained to the limits of the corresponding gene. Following the example given in Figure 7, the third gene is mutated, modifying its value from "1" to "0", which in the phenotype is translated into a change from FIFO to LRU replacement
Figure 6: Single point crossover.
algorithm, respectively.
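Both operators have straightforward implementations. In the sketch below, the gene ranges follow Figure 1, while the mutation probability is a placeholder, since the actual probabilities are those recommended in [6].

```python
import random

# Number of candidate values per gene: I-cache (line, ways, repl, prefetch),
# D-cache (line, ways, repl, prefetch, write), following Figure 1.
GENE_SIZES = [4, 5, 3, 3, 4, 5, 3, 3, 2]

def single_point_crossover(p1, p2):
    """Swap the tails of two parents at a random cut point (Figure 6)."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def integer_flip_mutation(genome, p_mut=0.1):
    """Redraw each gene within its valid range with probability p_mut (Figure 7)."""
    return [random.randrange(GENE_SIZES[i]) if random.random() < p_mut else g
            for i, g in enumerate(genome)]
```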
## 6 Experiments
Our simulation environment consists of an Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz with 16 GB RAM memory, with a GNU/Linux Debian 7 Operating System running a master-worker parallel version of NSGA-II with 8 workers. Experimental results are based on the ARM architecture. ARM processors are widespread on multimedia embedded devices. ARM920T [8] is a typical embedded processor used in tablets, smartphones, and set-top boxes like Motorola Q9m Verizon Mobile Phone, Car GPS HS-3502, etc. The ARM920T processor is a member of the ARM9TDMI family of general-purpose microprocessors, which have a standalone processor core on a Harvard architecture device. By default, the ARM920T processor implements a separate 16 KB instruction and data cache.
### Setup
To evaluate the effectiveness of our approach we have selected a subset of the Mediabench [7] applications suite as our target applications. Although our methodology can be used with any kind of applications, the Mediabench benchmark has been selected because of the high variability in block size, which provides heterogeneity to the exploration space. Using our methodology, we design an optimal first level cache memory with fixed-size for instructions and data, similar to some devices that have an ARM920T processor. After that, we have validated our optimization framework with two additional hardware platforms. We have simulated twelve Mediabench benchmarks: cjpeg, djpeg, mpegdec, mpegenc, gsmdec, gsmenc, epic, unepic,
Figure 7: Integer flip mutation.
pegwitdec, pegwitenc, rawcaudio and rawdaudio, all of them with their standard input. As stated above, we have generated their traces using Trimaran tools [33]. Trimaran works with SimpleScalar [29]. Thus, we have modified both SimpleScalar and Trimaran tools to obtain application traces according to the Dinero IV cache simulator, which is continuously called by our parallel NSGA-II implementation to evaluate each candidate solution.
Every application has been simulated for \(7.5\times 10^{7}\) instructions to reach a balance between the simulation time, the size of the program traces generated and a proper number of instructions. NSGA-II has been executed 30 times for each target application.
Table 1 shows the NSGA-II configuration. As crossover and mutation probabilities, we have used the values recommended in [6]. The number of generations and individuals has been fixed after several tests.
### Optimization Results
In the following we show and analyze all the results obtained in this research work. Figures 8 and 9 show the Pareto fronts obtained with our optimization framework. Each point in the graph represents a cache configuration and the corresponding execution time and energy, driven by equations (1) and (3), respectively.
Table 1: NSGA-II configuration.
In this regard, Table 2 shows that there are two cache configurations found in nine out of the twelve applications under study. One of these two configurations is shown in Figure 10.
Mpegdec, pegwitdec and pegwitenc are the only applications that do not share this cache configuration. The best configurations found save 63.45% and 91.68% for mpegdec, 60.09% and 92.46% for pegwitdec and 59.74% and 92.55% for pegwitenc in execution time and energy, respectively. However, we have detected that using the cache configuration of Figure 10, we save almost the same quantities in execution time and energy (61.39% and 90.98% for mpegdec, 60.05% and 91.83% for pegwitdec and 58.89% and 92.14% for pegwitenc). These are very good results, since they allow unifying the selection of a single optimized cache configuration for a target set of applications.
However, the number of points in the final Pareto front is small compared to the size of the search space (more than 64000 alternatives, as computed in Section 3). This might occur because NSGA-II has found the global optimum, or because the algorithm usually falls into a strong local optimum. To clarify this point, we have computed the hypervolume indicator (\(I_{H}^{-}\)) for each single run (see Appendix A).
Table 3 shows the hypervolumes averaged over the 30 different runs for 7 of the applications. We did not compute the hypervolume indicator for all the target applications because in some of them we just obtained the same single solution in each of the 30 runs, and the hypervolume cannot be computed for a single point. In the remaining cases, it is worth noting that the standard deviation is almost 0 for all seven applications, i.e., NSGA-II finds the same Pareto front on each simulation. Given that the algorithm always started from a different random initial population, this strongly suggests that NSGA-II reached the _Pareto-Optimal_ front.

Figure 8: Pareto front representation for epic, unepic, gsmdec, gsmenc, pegwitdec and pegwitenc.
### Comparison with a baseline cache
To analyze the level of improvement using our optimization framework, we compare our results with those obtained by a baseline cache configuration. The selected baseline configuration appears in devices mentioned above (Motorola Q9m Verizon Mobile Phone, Car GPS HS-3502, among others). This cache has the following configuration values:
Figure 10: Cache Configuration shared by all the Pareto sets.
Figure 9: Pareto front representation for cjpeg, djpeg, mpegdec, mpegenc, rawcaudio and rawdaudio.
* Icache: Cache size: 16 KB; Block size: 16; Associativity: 64; Replacement algorithm: LRU; Prefetch policy: ON-DEMAND;
* Dcache: Cache size: 16 KB; Block size: 16; Associativity: 64; Replacement algorithm: LRU; Prefetch policy: ON-DEMAND; Write policy: COPY-BACK;
We have computed execution time and energy for this baseline cache following our model developed in Section 4. Next, we compare each point in the Pareto fronts depicted in Figures 8 and 9 with the baseline metrics using the following equations:
\[\text{Improvement}_{\text{execTime}} = 100\times\frac{T_{\text{baseline}}-T_{\text{optimized}}}{T_{\text{baseline}}} \tag{4}\] \[\text{Improvement}_{\text{Energy}} = 100\times\frac{E_{\text{baseline}}-E_{\text{optimized}}}{E_{\text{baseline}}} \tag{5}\]
where \(T_{\text{baseline}}\) and \(T_{\text{optimized}}\) are the execution times of the baseline and optimized caches, respectively. In the same manner, \(E_{\text{baseline}}\) and \(E_{\text{optimized}}\) are the energies computed for the baseline and optimized caches, respectively.
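These two measures reduce to a one-line function; a minimal sketch with illustrative numbers:

```python
def improvement(baseline: float, optimized: float) -> float:
    """Percentage improvement of Eqs. (4)-(5)."""
    return 100.0 * (baseline - optimized) / baseline

# Illustrative, normalized values only:
print(improvement(baseline=1.00, optimized=0.3557))  # ~64.43 (exec. time)
print(improvement(baseline=1.00, optimized=0.0831))  # ~91.69 (energy)
```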
Figures 11 and 12 show the level of improvement computed for each point in the Pareto fronts obtained for all 12 target applications. Figure 11 depicts the percentage of improvement in execution time, whereas Figure 12 depicts the level of improvement in energy consumption. As can be seen, our approach achieves a significant improvement in both objectives. In this regard, Table 4 shows these improvements averaged over each Pareto front, with the overall averages in the last row. Our optimization method is able to reach cache configurations which are, on average, 64.43% and 91.69% better in execution time and energy, respectively. To better understand this large improvement, we must compare the baseline configuration with, for example, the optimized cache configuration shown in Figure 10. Firstly, the baseline configuration has a 16-byte block size, whereas the optimized configuration has an 8-byte block size; moving 16 bytes from main memory to cache memory consumes more energy than moving 8 bytes. Secondly, the baseline configuration has 64 ways versus the 4 ways of the optimized version. This means that the baseline configuration is much more associative, and thus finding the desired block takes much more time and energy, since each tag must be compared 64 times instead of 4. Finally, the prefetch policy of the baseline instruction cache configuration is "ON-DEMAND", whereas in the optimized cache it is "ALWAYS". Instructions are usually loaded from consecutive memory addresses (discarding branch instructions), and thus the "ON-DEMAND" prefetch policy will consume more time and energy than the optimized "ALWAYS" prefetch policy [2].

\begin{table}
\begin{tabular}{|c c c|} \hline
**Application** & **Mean** & **STD** \\ \hline epic & \(-0.61\) & \(0\) \\ unepic & \(-0.69\) & \(0\) \\ cjpeg & \(-0.61\) & \(0\) \\ djpeg & \(-0.47\) & \(0\) \\ gsmdec & \(-0.64\) & \(0\) \\ gsmenc & \(-0.62\) & \(1.18\times 10^{-16}\) \\ mpegenc & \(-0.21\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 3: Hypervolume Metric (S-metric).
Regarding the convergence of the optimization process, Table 5 shows a summary of the evolution of both objectives. The column labeled INI represents the level of improvement for each objective averaged over the initial random population. The column labeled END represents the same values averaged over the final population. The column AVG shows the improvements for each objective averaged over all generations and individuals, from INI to END. As Table 5 shows, NSGA-II easily improves the performance of the baseline cache even after the first generation. The same does not happen for energy, where after the first generation only in the case of cjpeg is NSGA-II able to improve the energy consumption of the baseline cache. Fortunately, after 3-4 generations, NSGA-II quickly finds cache configurations that improve on the baseline cache in both performance and energy. Surprisingly, the improvement in energy, which started from worse values, quickly grows and reaches much better values than the improvement in execution time (up to 97% in the case of cjpeg). In summary, Table 5 demonstrates that our optimization methodology, even when starting from bad initial solutions, is able to reach high levels of improvement with respect to a baseline configuration.

Figure 11: Pareto front execution time with respect to the baseline configuration (labels represent the percentage of improvement in execution time). Each color represents one point in the non-dominated front. For example, unepic has 4 points, whereas pegwitdec has one single point.
### Validation with two additional baseline caches
To validate our optimization framework, we have optimized the cache memory of two additional hardware platforms included in some Apple devices of the SoC Apple AX family, like the iPhone 5, iPhone 5s, iPad 2, iPod touch or Apple TV. The Apple AX series integrates processors of the ARM family, for instance the Cortex-A9 (iPad 2, iPod touch or Apple TV) or the Cortex-A15 (iPhone 5, iPhone 5s). Accordingly, these two new cache configurations are:
* Baseline 2: Cache size: 32 KB; Block size: 64; Associativity: 4; Replacement algorithm: RANDOM; Prefetch policy: ALWAYS; Write policy (DCache): COPY-BACK;
Figure 12: Pareto front energy consumption with respect to the baseline configuration (labels represent the percentage of improvement in energy consumption). Each color represents one point in the non-dominated front, as in Figure 11.
\begin{table}
\begin{tabular}{|c c c|} \hline
**Application** & **Execution Time** & **Energy Consumption** \\ \hline epic & 62.79 & 90.85 \\ unepic & 60.74 & 89.75 \\ gsmdec & 63.12 & 91.48 \\ gsmenc & 63.21 & 91.52 \\ pegwitdec & 60.09 & 92.43 \\ pegwitenc & 59.74 & 92.55 \\ cjpeg & 87.23 & 96.68 \\ djpeg & 62.37 & 89.44 \\ mpegdec & 63.45 & 91.68 \\ mpegenc & 63.28 & 90.56 \\ rawcaudio & 63.59 & 91.68 \\ rawdaudio & 63.58 & 91.68 \\ \hline
**Average** & **64.43** & **91.69** \\ \hline \end{tabular}
\end{table}
Table 4: Percentage of improvement, averaged for each Pareto front and single objective vs. baseline cache configuration.
\begin{table}
\begin{tabular}{|c c c c|c c c|} \hline
**Application** & \multicolumn{3}{c|}{**Execution Time**} & \multicolumn{3}{c|}{**Energy Consumption**} \\ \cline{2-7} & **INI** & **AVG** & **END** & **INI** & **AVG** & **END** \\ \hline epic & 11.40 & 57.04 & 63.11 & -71.42 & 77.56 & 91.71 \\ unepic & 12.55 & 54.97 & 61.80 & -95.27 & 75.79 & 91.74 \\ cjpeg & 66.13 & 85.01 & 87.51 & 48.03 & 91.77 & 97.25 \\ djpeg & 12.88 & 55.26 & 62.77 & -69.11 & 74.77 & 91.91 \\ gsmdec & 1.25 & 53.37 & 63.52 & -97.14 & 77.22 & 93.19 \\ gsmenc & 4.46 & 55.01 & 63.65 & -71.97 & 77.58 & 93.20 \\ rawcaudio & 12.63 & 57.27 & 63.59 & -80.69 & 76.90 & 91.68 \\ rawdaudio & 10.27 & 57.50 & 63.58 & -60.05 & 77.95 & 91.68 \\ mpegdec & 2.18 & 57.17 & 63.45 & -105.70 & 76.63 & 91.68 \\ mpegenc & 4.02 & 57.08 & 63.37 & -66.60 & 76.97 & 91.74 \\ pegwitdec & -1.02 & 51.56 & 60.09 & -146.23 & 73.00 & 92.46 \\ pegwitenc & 12.07 & 51.03 & 59.74 & -161.10 & 71.76 & 92.55 \\ \hline \end{tabular}
\end{table}
Table 5: Improvement percentages: initial, average and final improvements per application vs. the baseline configuration.
* Baseline 3: Cache size: 32 KB; Block size: 64; Associativity: 2; Replacement algorithm: LRU; Prefetch policy: ALWAYS; Write policy (DCache): COPY-BACK;
We have repeated the process for the two additional baselines, computing the percentage of improvement obtained by comparing each baseline cache memory with the best cache configuration obtained by the optimization framework. Table 6 shows the percentages obtained, in execution time and energy consumption. It is worth noting that whereas the energy savings remain similar to Baseline 1 (close to 90%), the improvement in execution time decreases (from 64% to 24% and 17%, respectively). These differences are due to the nature of the MediaBench benchmark and each specific baseline architecture. The first baseline, a car GPS device, is oriented to a very specific navigation application, completely different in nature from MediaBench. This explains the high level of optimization in both execution time and energy. On the other hand, Baselines 2 and 3 are general-purpose devices, and their cache memories are oriented to a wide range of different block sizes. This translates into low associativity but a large size, or in other words, better execution time but more energy, which explains the low level of improvement in execution time versus the high level of improvement in energy.
### On the performance of the optimization framework
Finally, with respect to the execution time of the optimization process, our master-worker architecture computed optimized cache configurations in an average wall-clock time of 10 hours (0.42 days) per application. The optimization of the set of 12 applications was performed in 5 days. Taking into account that the average time used by our simulation framework to evaluate one single cache configuration is 13 seconds, an exhaustive optimization algorithm would take almost 10 days to find the Pareto-optimal front for one single application, and 117 days to reach the set of 12 Pareto-optimal fronts. As a result, the parallel master-worker NSGA-II algorithm obtains excellent solutions (64.43% and 91.69% better in execution time and energy, respectively) more than three months sooner, achieving a speed-up of 23.4 with respect to the exhaustive algorithm.

\begin{table}
\begin{tabular}{|c|c c c c|} \hline \multirow{2}{*}{**Application**} & \multicolumn{2}{c|}{**Baseline 2**} & \multicolumn{2}{c|}{**Baseline 3**} \\ \cline{2-5} & **ExTime\%** & **Energy\%** & **ExTime\%** & **Energy\%** \\ \hline epic & 19.80 & 87.60 & 11.77 & 88.66 \\ unepic & 9.73 & 87.28 & 1.05 & 88.34 \\ gsmdec & 23.97 & 89.35 & 16.98 & 90.27 \\ gsmenc & 24.18 & 89.51 & 16.98 & 90.37 \\ pegwitdec & 34.26 & 92.25 & 28.12 & 92.72 \\ pegwitenc & 35.42 & 92.61 & 29.49 & 93.05 \\ cjpeg & 21.11 & 87.98 & 13.11 & 88.98 \\ djpeg & 27.38 & 88.74 & 21.01 & 89.93 \\ mpegdec & 24.80 & 88.01 & 18.61 & 89.42 \\ mpegenc & 22.20 & 87.65 & 14.32 & 88.70 \\ rawcaudio & 22.80 & 87.97 & 14.68 & 88.92 \\ rawdaudio & 24.13 & 87.81 & 16.14 & 88.77 \\ \hline
**Average** & 24.15 & 88.90 & 16.85 & 89.84 \\ \hline \end{tabular}
\end{table}
Table 6: Percentage of improvement for the best cache configuration obtained vs. the new baselines chosen.
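The timing figures quoted above can be cross-checked with a few lines of arithmetic (a sketch; the small residual gap to the reported 117 days and 23.4x speed-up presumably comes from the search space being slightly larger than 64000):

```python
eval_time_s = 13          # average simulation time per cache configuration
search_space = 64_000     # design-space alternatives (Section 3)
apps = 12

exhaustive_days = search_space * eval_time_s / 86_400   # per application
nsga2_days = 10 * 3600 / 86_400                         # measured: ~10 h/app
print(f"exhaustive: {exhaustive_days:.1f} days/app, "
      f"{exhaustive_days * apps:.0f} days for all 12 apps")   # ~9.6, ~116
print(f"speed-up: {exhaustive_days / nsga2_days:.1f}x")       # ~23x
```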
## 7 Conclusion and future work
Current multimedia embedded devices like smartphones, video players, etc. are highly constrained by battery lifetime and performance. Cache memories are added to these devices in order to improve performance. However, the selection of the best cache configuration for each embedded system is a hard task because of the large space of possible cache configurations. Several design techniques have been proposed over the years in order to facilitate the search for the best cache configuration for different applications.
In this paper, we have presented a novel technique based on static profiling and multi-objective optimization to find the best cache configuration for a given target embedded system and a target set of applications. The process has been divided into two phases: the first one is responsible for obtaining the program traces and parameters needed to characterize the set of candidate cache configurations. The second phase applies a multi-objective evolutionary algorithm, using NSGA-II and Dinero IV, to evaluate each application under the candidate set of cache configurations.
The result of the optimization is a set of cache configurations that minimize execution time and energy consumption for each application, thus improving performance and extending the lifetime of both batteries and devices. Taking a cache configuration commonly used in current multimedia systems as a baseline, experimental results show an average improvement of 64.43% and 91.69% in execution time and energy consumption, respectively.
Our methodology still needs human decisions to select the final cache memory, the best possible one for the whole set of applications. We have seen that this is not a difficult task. However, as future work, we are already extending this methodology to allow the automatic optimization of all the target applications at once. This will require a greater degree of parallelization of the evaluation process and the design of a new, accurate multi-objective function, incorporating for instance fuzzy decisions to reduce the number of objectives from 12 applications \(\times\) 2 objectives to two or three objectives.
## Acknowledgment
This work has been partly funded by the Spanish Ministry of Economy and Competitiveness under research grants TIN2014-54806-R and TIN2014-56494-C4-2-P.
|
2308.04106 | Parallel Learning by Multitasking Neural Networks | A modern challenge of Artificial Intelligence is learning multiple patterns
at once (i.e. parallel learning). While this cannot be accomplished by standard
Hebbian associative neural networks, in this paper we show how the Multitasking
Hebbian Network (a variation on theme of the Hopfield model working on sparse
data-sets) is naturally able to perform this complex task. We focus on systems
processing in parallel a finite (up to logarithmic growth in the size of the
network) amount of patterns, mirroring the low-storage level of standard
associative neural networks at work with pattern recognition. For mild dilution
in the patterns, the network handles them hierarchically, distributing the
amplitudes of their signals as power-laws w.r.t. their information content
(hierarchical regime), while, for strong dilution, all the signals pertaining
to all the patterns are raised with the same strength (parallel regime).
Further, confined to the low-storage setting (i.e., far from the spin glass
limit), the presence of a teacher neither alters the multitasking performances
nor changes the thresholds for learning: the latter are the same whatever the
training protocol is supervised or unsupervised. Results obtained through
statistical mechanics, signal-to-noise technique and Monte Carlo simulations
are overall in perfect agreement and carry interesting insights on multiple
learning at once: for instance, whenever the cost-function of the model is
minimized in parallel on several patterns (in its description via Statistical
Mechanics), the same happens to the standard sum-squared error Loss function
(typically used in Machine Learning). | Elena Agliari, Andrea Alessandrelli, Adriano Barra, Federico Ricci-Tersenghi | 2023-08-08T07:43:31Z | http://arxiv.org/abs/2308.04106v1 | # Parallel Learning by Multitasking Neural Networks
###### Abstract
A modern challenge of Artificial Intelligence is learning multiple patterns at once (i.e. _parallel learning_). While this can not be accomplished by standard Hebbian associative neural networks, in this paper we show how the Multitasking Hebbian Network (a variation on theme of the Hopfield model working on sparse data-sets) is naturally able to perform this complex task. We focus on systems processing in parallel a finite (up to logarithmic growth in the size of the network) amount of patterns, mirroring the low-storage level of standard associative neural networks at work with pattern recognition. For mild dilution in the patterns, the network handles them hierarchically, distributing the amplitudes of their signals as power-laws w.r.t. their information content (hierarchical regime), while, for strong dilution, all the signals pertaining to all the patterns are raised with the same strength (parallel regime).
Further, confined to the low-storage setting (i.e., far from the spin glass limit), the presence of a teacher neither alters the multitasking performances nor changes the thresholds for learning: the latter are the same whatever the training protocol is supervised or unsupervised. Results obtained through statistical mechanics, signal-to-noise technique and Monte Carlo simulations are overall in perfect agreement and carry interesting insights on _multiple learning at once_: for instance, whenever the cost-function of the model is minimized _in parallel on several patterns_ (in its description via Statistical Mechanics), the same happens to the standard sum-squared error Loss function (typically used in Machine Learning).
###### Contents
* 1 Introduction
* 2 Parallel learning in multitasking Hebbian neural networks
* 2.1 A preliminary glance at the emergent parallel retrieval capabilities
* 2.2 From parallel storing to parallel learning
* 3 Parallel Learning: the picture by statistical mechanics
* 3.1 Study of the Cost function and its related Statistical Pressure
* 3.1.1 Low-entropy data-sets: the Big-Data limit
* 3.1.2 Ergodicity breaking: the critical phase transition
* 3.2 Stability analysis via standard Hessian: the phase diagram
* 3.2.1 Ergodic state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}(0,\ldots,0)\)
* 3.2.2 Pure state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}(1,0,\ldots,0)\)
* 3.2.3 Parallel state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}(1,\ldots,1)\)
* 3.2.4 Hierarchical state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}((1-d),d(1-d),d^{2}(1-d),...)\)
* 3.3 From the Cost function to the Loss function
* 4 Conclusions
* A A more general sampling scenario
* B On the data-set entropy \(\rho\)
* B.1 I: multitasking Hebbian network equipped with not-affecting-dilution noise
* B.2 II: multitasking Hebbian network equipped with not-preserving-dilution noise
* C Stability analysis: an alternative approach
* C.1 Stability analysis via signal-to-noise technique
* C.2 Evaluation of momenta of the effective post-synaptic potential
* D Explicit Calculations and Figures for the cases \(K=2\) and \(K=3\)
* D.1 \(K=2\)
* D.2 \(K=3\)
* E Proofs
* E.1 Proof of Theorem 1
* E.2 Proof of Proposition 1
## 1 Introduction
Typically, Artificial Intelligence has to deal with several inputs occurring at the same time: for instance, think about automatic driving, where the system has to distinguish and react to different objects (e.g., pedestrians, traffic lights, riders, crosswalks) that may appear simultaneously. Likewise, when a biological neural network learns, it is rare that it has to deal with one single input at a time1: for instance, while trained at school to learn each single letter, we are also learning about the composition of our alphabets. In this perspective, when stating that neural networks operate _in parallel_, some caution should be paid to a potential ambiguity. To fix ideas, let us focus on the Hopfield model [34], the _harmonic oscillator_ of associative neural networks accomplishing pattern recognition [8; 25]: its neurons indeed operate synergistically in parallel, but with the purpose of retrieving one single pattern at a time, not several simultaneously [8; 10; 36]. A parallel processing where multiple patterns are simultaneously retrieved is not accessible to standard Hopfield networks as long as each pattern is fully informative, namely as long as its vectorial binary representation is devoid of blank entries. On the other hand, when a fraction of entries can be blank [14], multiple-pattern retrieval is potentially achievable by the network. Intuitively, this can be explained by noticing that the overall number of neurons making up the network - and thus available for information processing - equals the length of the binary vectors codifying the patterns to be retrieved; hence, as long as these vectors contain information in all their entries, there is no free room for dealing with multiple patterns. Conversely, the multitasking neural networks introduced in [2] are able to overcome this limitation and have been shown to succeed in retrieving multiple patterns simultaneously, just by leveraging the presence of lacunae in the patterns stored by the network. The emerging pattern-recognition properties have been extensively investigated at medium storage (i.e., on random graphs above the percolation threshold) [23], at high storage (i.e., on random graphs below the percolation threshold) [24], as well as on scale-free [44] and hierarchical [3] topologies.
Footnote 1: It is enough to note that, should serial learning take place rather than parallel learning, Pavlov’s Classical Conditioning would not be possible [14].
However, while the study of the parallel retrieval capabilities of these multitasking networks is by now complete, the understanding of their parallel learning capabilities has only just begun, and it is the main focus of the present paper.
In this regard it is important to stress that the Hebbian prescription has recently been revised to turn it from a storing rule (built on a set of already-definite patterns, as in the original Amit-Gutfreund-Sompolinsky (AGS) theory) into a genuine learning rule (where unknown patterns have to be inferred by experiencing solely a sample of their corrupted copies), see e.g., [5; 13; 27]2.
Footnote 2: While Statistical Learning theories appeared in the Literature a long time ago, see e.g. [1; 29; 43] for the original works and [6; 20; 26; 41] for updated references, the statistical mechanics of Hebbian learning was not explored in depth in these studies.
In this work we merge these extensions of the bare AGS theory and use definite patterns (equipped with blank entries) to generate a sparse data-set of corrupted examples, which is the sole information experienced by the network: we aim to highlight the role of the lacunae density and of the data-set size and quality on the network performance, in particular examining the way the network learns simultaneously the patterns hidden behind the supplied examples. In this investigation we focus on the low-storage scenario (where the number of definite patterns grows sub-linearly with the volume of the network), addressing both the _supervised_ and the _unsupervised_ setting.
The paper is structured as follows: the main text has three Sections. Beyond this Introduction provided in Section 1, in Section 2 we revise the multitasking associative network; once briefly summarized its parallel retrieval capabilities (Sec. 2.1), we introduce a simple data-set the network has to cope with in order to move from the simpler storing of patterns to their learning from examples (Sec. 2.2). Next, in Section 3 we provide an exhaustive statistical mechanical picture of the network's emergent information-processing capabilities by taking advantage of Guerra's interpolation techniques: in particular, focusing on the Cost function (Sec. 3.1), we face the _big-data_ limit (Sec. 3.1.1) and we deepen the nature of the phase transition the network undergoes as ergodicity breaking spontaneously takes place (Sec. 3.1.2). Sec. 3.2 is entirely dedicated to providing phase diagrams (namely plots in the space of the control parameters where different regions depict different global computational capabilities). Further, before reaching the conclusions and outlooks reported in Sec. 4, in Sec. 3.3 we show how the network's Cost function (typically used in Statistical Mechanics) can be sharply related to standard Loss functions (typically used in Machine Learning) to appreciate how parallel learning effectively lowers several Loss functions at once.
In the Appendices we fix a number of subtleties: in Appendix A we provide a more general setting for the sparse data-sets considered in this research3, in Appendix B we inspect the relative entropies of these data-sets, and in Appendix C we provide a revised version of the Signal-to-Noise technique (which, beyond providing an alternative route to obtain the phase diagrams, allows evaluating computational shortcuts). Appendices D and E give details on calculations, plots and proofs of the main theorems.
Footnote 3: In the main text we face the simplest kind of pattern dilution: the same fraction of entries is forced to be blank, and the positions of the blanks are preserved in the generation of the data-sets (hence, wherever the pattern has a zero, all the examples it gives rise to keep that zero), while in the appendix we relax this assumption (and blank entries can move across the examples while preserving their amount). As in the thermodynamic limit the theory is robust w.r.t. these structural details, we present the simplest setting as the main theme and the more cumbersome one in Appendix A.
## 2 Parallel learning in multitasking Hebbian neural networks
### A preliminary glance at the emergent parallel retrieval capabilities
Hereafter, for the sake of completeness, we briefly review the retrieval properties of the multitasking Hebbian network in the low-storage regime, while we refer to [2; 4] for an extensive treatment.
**Definition 1**.: _Given \(N\) Ising neurons \(\sigma_{i}=\pm 1\) (\(i=1,...,N\)), and \(K\) random patterns \(\mathbf{\xi}^{\mu}\) (\(\mu=1,...,K\)), each of length \(N\), whose entries are i.i.d. from_
\[\mathbb{P}(\xi_{i}^{\mu})=\frac{(1-d)}{2}\delta_{\xi_{i}^{\mu},-1}+\frac{(1-d) }{2}\delta_{\xi_{i}^{\mu},+1}+d\delta_{\xi_{i}^{\mu},0}, \tag{1}\]
_where \(\delta_{i,j}\) is the Kronecker delta and \(d\in[0,1]\), the Hamiltonian (or cost function) of the system reads as_
\[\mathcal{H}_{N}(\mathbf{\sigma}|\mathbf{\xi}):=-\frac{1}{2N}\sum_{\begin{subarray}{c}i,j\\ i\neq j\end{subarray}}^{N,N}\left(\sum_{\mu=1}^{K}\xi_{i}^{\mu}\xi_{j}^{\mu} \right)\sigma_{i}\sigma_{j}. \tag{2}\]
The parameter \(d\) tunes the "dilution" in pattern entries: if \(d=0\) the standard Rademacher setting of AGS theory is recovered, while for \(d=1\) no information is retained in these patterns: otherwise stated, these vectors display, on average, a fraction \(d\) of blank entries.
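As a concrete illustration, sampling such diluted patterns takes a few lines (a sketch in Python; the function name is ours):

```python
import numpy as np

def sample_patterns(K: int, N: int, d: float, seed: int = 0) -> np.ndarray:
    """Draw K diluted patterns with i.i.d. entries from Eq. (1)."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1, 0, 1], size=(K, N), p=[(1 - d) / 2, d, (1 - d) / 2])

xi = sample_patterns(K=3, N=10_000, d=0.25)
print((xi == 0).mean())   # empirical fraction of blanks, close to d = 0.25
```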
**Definition 2**.: _In order to assess the network retrieval performance we introduce the \(K\) Mattis magnetizations_
\[m_{\mu}:=\frac{1}{N}\sum_{i}^{N}\xi_{i}^{\mu}\sigma_{i},\ \mu=1,...,K, \tag{3}\]
_which quantify the overlap between the generic neural configuration \(\mathbf{\sigma}\) and the \(\mu^{th}\) pattern._
Note that the cost function (2) can be recast as a quadratic form in \(m_{\mu}\), namely
\[\mathcal{H}_{N}(\mathbf{\sigma}|\mathbf{\xi})=-\frac{N}{2}\sum_{\mu}m_{\mu}^{2}+\frac{ K}{2}, \tag{4}\]
where the term \(K/2\) in the r.h.s. stems from diagonal terms (\(i=j\)) in the sum at the r.h.s. of eq. 2 and in the low-load scenario (i.e., \(K\) grows sub-linearly with \(N\)) can be neglected in the thermodynamic limit (\(N\to\infty\)).
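For instance, a direct numerical check of Eqs. (3)-(4) can be sketched as follows (our own code, with illustrative sizes):

```python
import numpy as np

def mattis(xi: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Mattis magnetizations of Eq. (3), one per pattern."""
    return xi @ sigma / sigma.size

def energy(xi: np.ndarray, sigma: np.ndarray) -> float:
    """Cost function in the form of Eq. (4), dropping the K/2 term."""
    return -0.5 * sigma.size * np.sum(mattis(xi, sigma) ** 2)

rng = np.random.default_rng(0)
xi = rng.choice([-1, 0, 1], size=(2, 2000), p=[0.375, 0.25, 0.375])  # d = 0.25
sigma = np.where(xi[0] != 0, xi[0], rng.choice([-1, 1], size=2000))
print(mattis(xi, sigma))   # ~ (1 - d, ~0): retrieval of pattern 1
print(energy(xi, sigma))
```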
As we are going to explain, the dilution ruled by \(d\) is pivotal for the network in order to perform parallel processing. It is instructive to first consider a toy model handling just \(K=2\) patterns: let us assume, for simplicity, that the first pattern \(\mathbf{\xi}^{1}\) contains information (i.e., no blank entries) solely in the first half of its entries and the second pattern \(\mathbf{\xi}^{2}\) contains information solely in the second half of its entries, that is
\[\mathbf{\xi}^{1}=\underbrace{(\xi_{1}^{1},...,\xi_{N/2}^{1},0,...,0)}_{\in\{-1,+1\}^{\frac{N}{2}}},\quad\mathbf{\xi}^{2}=(\underbrace{0,...,0}_{\in\{0\}^{\frac{N}{2}}},\underbrace{\xi_{N/2+1}^{2},...,\xi_{N}^{2}}_{\in\{-1,+1\}^{\frac{N}{2}}}) \tag{5}\]
Unlike the standard Hopfield reference (\(d=0\)), where the retrieval of one pattern employs all the resources and there is no chance to retrieve any other pattern, not even partially (i.e., as \(m_{1}\to 1\) then \(m_{2}\approx 0\), because patterns are orthogonal for large \(N\) in the standard random setting), here neither \(m_{1}\) nor \(m_{2}\) can reach the value \(1\), and therefore the complete retrieval of one of the two still leaves resources for the retrieval of the other. In this particular case, the minimization of the cost function \(\mathcal{H}_{N}(\mathbf{\sigma}|\mathbf{\xi})=-\frac{N}{2}\left(m_{1}^{2}+m_{2}^{2}\right)\) is optimal when _both_ the magnetizations are equal to one-half, that is, when they both saturate their upper bound. In general, for an arbitrary dilution level \(d\), the minimization of the cost function requires the network to be in one of the following regimes
* _hierarchical scenario_: for values of dilution not too high (i.e., \(d<d_{c}\), _vide infra_), one of the two patterns is fully retrieved (say \(m_{1}\approx 1-d\)) and the other is retrieved to the largest extent given the available resources, these being constituted by, approximately, the \(Nd\) neurons corresponding to the blank entries in \(\mathbf{\xi}^{1}\) (thus, \(m_{2}\approx d(1-d)\)), and so on if further patterns are considered.
* _parallel scenario_: for large values of dilution (i.e., above a critical threshold \(d_{c}\)), the magnetizations related to all the patterns raise and the signals they convey share the same amplitude.
In general, in this type of neural network, the _pure state ansatz4_ \(\mathbf{m}=(1,0,0,...,0)\), that is \(\sigma_{i}=\xi_{i}^{1}\) for \(i=1,...,N\), barely works and parallel retrieval is often favored. In fact, for \(K\geq 2\), at relatively low values of pattern dilution \(d\) and in the zero-noise limit \(\beta\to\infty\), one can prove the validity of the so-called _hierarchical ansatz_ [2], as we briefly discuss: one pattern, say \(\mathbf{\xi}^{1}\), is perfectly retrieved and displays a Mattis magnetization \(m_{1}\approx(1-d)\); a fraction \(d\) of neurons is not involved and is therefore available for the further retrieval of any remaining pattern, say \(\mathbf{\xi}^{2}\), which yields \(m_{2}\sim(1-d)d\); proceeding iteratively, one finds \(m_{\ell}=d^{\ell-1}(1-d)\) for \(\ell=1,...,\hat{K}\), and the overall number \(\hat{K}\) of patterns
simultaneously retrieved corresponds to the employment of all the resources. Specifically, \(\hat{K}\) can be estimated by setting \(\sum_{\ell=0}^{\hat{K}-1}(1-d)d^{\ell}=1\), with the cut-off at finite \(N\) as \((1-d)d^{\hat{K}-1}\geq N^{-1}\), due to discreteness: for any fixed and finite \(d\), this implies \(\hat{K}\lesssim\log N\), which can be thought of as a "parallel low-storage" regime of neural networks. It is worth stressing that, in the above-mentioned regime of low dilution, the configuration leading to \(m_{\ell}=d^{\ell-1}(1-d)\) for \(\ell=1,...,\hat{K}\) is the one which minimizes the cost function. The hierarchical retrieval state \(\mathbf{m}=(1-d)\left(1,d,d^{2},d^{3},\cdots\right)\) can also be specified in terms of the neural configuration as [2]
\[\sigma_{i}^{*}=\xi_{i}^{1}+\sum_{\nu=2}^{\hat{K}}\xi_{i}^{\nu}\prod_{\rho=1}^{ \nu-1}\delta_{\xi_{i}^{\rho},0}\,. \tag{6}\]
This organization is stable until a critical dilution level \(d_{c}\) is reached, where \(m_{1}\sim\sum_{k>1}m_{k}\) [2]; beyond that level the network undergoes a rearrangement and a new organization, called the _parallel ansatz_, supplants the previous one. Indeed, for high values of dilution (i.e., \(d\to 1\)) it is immediate to check that the ratio among the intensities of consecutive magnetizations stabilizes to the value one, i.e., \((m_{k}/m_{k-1})\sim d^{k-1}(1-d)/d^{k-2}(1-d)\to 1\); hence in this regime all the magnetizations are raised with the same strength and the network is operationally set in a fully parallel retrieval mode: the parallel retrieval state simply reads \(\mathbf{m}=(\bar{m})\left(1,1,1,1,\cdots\right)\). This picture is confirmed by the plots shown in Fig. 1 and obtained by solving the self-consistency equations for the Mattis magnetizations related to the multitasking Hebbian network equipped with \(K=2\) patterns, which read [2]
\[m_{1} = d(1-d)\tanh(\beta m_{1})+\frac{(1-d)^{2}}{2}\left\{\tanh[\beta(m_{1}+m_{2})]+\tanh[\beta(m_{1}-m_{2})]\right\}, \tag{7}\] \[m_{2} = d(1-d)\tanh(\beta m_{2})+\frac{(1-d)^{2}}{2}\left\{\tanh[\beta(m_{1}+m_{2})]-\tanh[\beta(m_{1}-m_{2})]\right\} \tag{8}\]
where \(\beta\in\mathbb{R}^{+}\) denotes the level of noise.
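For illustration, these two coupled equations can be solved by plain fixed-point iteration; the following minimal sketch is our own (with damping added for numerical stability), and which branch it selects depends on \(d\), \(\beta\) and the initialization, mirroring Fig. 1:

```python
import numpy as np

def solve_K2(d: float, beta: float, m0=(0.9, 0.05), iters=2000):
    """Damped fixed-point iteration of Eqs. (7)-(8)."""
    m1, m2 = m0
    for _ in range(iters):
        tp = np.tanh(beta * (m1 + m2))
        tm = np.tanh(beta * (m1 - m2))
        new1 = d * (1 - d) * np.tanh(beta * m1) + 0.5 * (1 - d) ** 2 * (tp + tm)
        new2 = d * (1 - d) * np.tanh(beta * m2) + 0.5 * (1 - d) ** 2 * (tp - tm)
        m1, m2 = 0.5 * (m1 + new1), 0.5 * (m2 + new2)   # damping
    return m1, m2

for d in (0.1, 0.3, 0.8):
    print(d, solve_K2(d, beta=30.0))   # ~ ((1-d), d(1-d)) at low noise
```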
We remark that these hierarchical or parallel organizations of the retrieval, beyond emerging naturally
Figure 1: Numerical solutions of the two self-consistent equations (7) and (8) obtained for \(K=2\), see [2], as a function of \(d\) and for different choices of \(\beta\): in the \(d\to 0\) limit the Hopfield serial retrieval is recovered (one magnetization with intensity one and the other locked at zero), for \(d\to 1\) the network ends up in the parallel regime (where all the magnetizations acquire the same value), while for intermediate values of dilution the hierarchical ordering prevails (both the magnetizations are raised, but their amplitude is different).
within the equilibrium description provided by Statistical Mechanics, are actually the real stationary states of the dynamics of these networks at work with diluted patterns as shown in Figure 2.
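A minimal Monte Carlo sketch of such a simulation (our own code, not the authors'; sizes and noise level are illustrative) runs sequential Glauber dynamics on the couplings of Eq. (2):

```python
import numpy as np

def glauber_dynamics(xi: np.ndarray, beta: float = 20.0, sweeps: int = 200,
                     rng=None) -> np.ndarray:
    """Sequential Glauber dynamics on the cost function of Eq. (2),
    initialized on pattern 1 as in the simulations of Figure 2."""
    rng = rng or np.random.default_rng(2)
    K, N = xi.shape
    sigma = np.where(xi[0] != 0, xi[0], rng.choice([-1, 1], size=N))
    J = xi.T @ xi / N                     # multitasking Hebbian couplings
    np.fill_diagonal(J, 0.0)
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = J[i] @ sigma              # post-synaptic field on neuron i
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            sigma[i] = 1 if rng.random() < p_up else -1
    return xi @ sigma / N                 # stationary Mattis magnetizations

xi = np.random.default_rng(0).choice([-1, 0, 1], size=(3, 400),
                                     p=[0.4, 0.2, 0.4])   # dilution d = 0.2
print(glauber_dynamics(xi))   # ~ (0.8, 0.16, 0.03) up to finite-size noise
```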
### From parallel storing to parallel learning
In this section we revise the multitasking Hebbian network [2; 4] in such a way that it can undergo a _learning_ process instead of a simple _storing_ of patterns. In fact, in the typical learning setting, the set of definite patterns, hereafter promoted to play as "archetypes", to be reconstructed by the network is not available; rather, the network is exposed to examples, namely noisy versions of these archetypes.
As long as enough examples are provided to the network, it is expected to correctly form its own representation of the archetypes such that, upon further exposure to a new example related to a certain archetype, it will be able to retrieve it and, from then on, suitably generalize it.
This generalized Hebbian kernel has recently been introduced to encode unsupervised [5] and supervised [13] learning processes and, in the present paper, these learning rules are modified in order to deal with diluted patterns.
First, let us define the data-set these networks have to cope with: the archetypes are randomly drawn from the distribution (1). Each archetype \(\mathbf{\xi}^{\mu}\) is then used to generate a set of \(M_{\mu}\) perturbed versions, denoted as \(\mathbf{\eta}^{\mu,a}\) with \(a=1,...,M_{\mu}\) and \(\mathbf{\eta}^{\mu,a}\in\{-1,0,+1\}^{N}\). Thus, the overall set of examples to be supplied to the network is given by \(\mathbf{\eta}=\{\mathbf{\eta}^{\mu,a}\}_{\mu=1,...,K}^{a=1,...,M_{\mu}}\). Of course, different ways to sample examples are conceivable: for instance, one can require that the position of blank entries appearing in \(\mathbf{\xi}^{\mu}\) is preserved over all the examples \(\{\mathbf{\eta}^{\mu,a}\}_{a=1,...,M_{\mu}}\), or one can require that only the number of blank entries \(\sum_{i=1}^{N}\delta_{\xi_{i}^{\mu},0}\) is preserved (either strictly or on average). Here we face the first case because it requires a simpler notation, but we refer to Appendix A for a more general treatment.

Figure 2: We report two examples of Monte Carlo dynamics until thermalization within the hierarchical (upper plots, dilution level \(d=0.2\)) and parallel (lower plots, dilution level \(d=0.8\)) scenarios, respectively. These plots confirm that the picture provided by statistical mechanics is actually dynamically reached by the network. We initialize the network sharply in a pattern as a Cauchy condition (represented as the dotted blue Dirac delta peaked at the pattern in the second column) and, in the first column, we show the stationary values of the various Mattis magnetizations pertaining to different patterns, while in the second column we report their histograms achieved by sampling 1000 independent Monte Carlo simulations: starting from a sequential retrieval regime, the network ends up in a multiple retrieval mode, hierarchical vs parallel depending on the level of dilution in the patterns.
**Definition 3**.: _The entries of each example are drawn according to_
\[\mathbb{P}(\eta_{i}^{\mu,a}|\xi_{i}^{\mu})=\frac{1+r_{\mu}}{2}\delta_{\eta_{i }^{\mu,a},\xi_{i}^{\mu}}+\frac{1-r_{\mu}}{2}\delta_{\eta_{i}^{\mu,a},-\xi_{i}^ {\mu}}, \tag{9}\]
_for \(i=1,\ldots,N\) and \(\mu=1,\ldots,K\). Notice that \(r_{\mu}\) tunes the data-set quality: as \(r_{\mu}\to 1\) examples belonging to the \(\mu\)-th set collapse on the archetype \(\mathbf{\xi}^{\mu}\), while as \(r_{\mu}\to 0\) examples turn out to be uncorrelated with the related archetype \(\mathbf{\xi}^{\mu}\)._
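A minimal sketch of this sampling scheme (the helper name is ours; the blank entries of each archetype are kept fixed across its examples, as assumed in the main text):

```python
import numpy as np

def make_examples(xi: np.ndarray, M: int, r: float, rng=None) -> np.ndarray:
    """Draw M noisy examples per archetype as in Eq. (9); blank entries
    of xi stay blank in every example (position-preserving dilution)."""
    rng = rng or np.random.default_rng(0)
    K, N = xi.shape
    # Each non-blank entry keeps its sign with probability (1 + r) / 2.
    flips = rng.choice([1, -1], size=(K, M, N), p=[(1 + r) / 2, (1 - r) / 2])
    return xi[:, None, :] * flips

xi = np.random.default_rng(1).choice([-1, 0, 1], size=(2, 1000),
                                     p=[0.4, 0.2, 0.4])
eta = make_examples(xi, M=50, r=0.2)
# Empirical check: on non-blank sites, the example-averaged agreement ~ r.
mask = xi != 0
print((eta.mean(axis=1)[mask] * xi[mask]).mean())
```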
As we will show in the next sections, the behavior of the system depends on the parameters \(M_{\mu}\) and \(r_{\mu}\) only through the combination \(\frac{1-r_{\mu}^{2}}{M_{\mu}r_{\mu}^{2}}\), therefore, as long as the ratio \(\frac{1-r_{\mu}^{2}}{M_{\mu}r_{\mu}^{2}}\) is \(\mu\)-independent, the theory shall not be affected by the specific choice of the archetype. Thus, for the sake of simplicity, hereafter we will consider \(r\) and \(M\) independent of \(\mu\) and we will pose \(\rho:=\frac{1-r^{2}}{Mr^{2}}\). Remarkably, \(\rho\) plays as an information-content control parameter [13]: to see this, let us focus on the \(\mu\)-th pattern and \(i\)-th digit, whose related block is \(\mathbf{\eta}_{i}^{\mu}=(\eta_{i}^{\mu},\eta_{i}^{\mu,2},\ldots,\eta_{i}^{\mu,M})\), the error probability for any single entry is \(\mathcal{P}(\xi_{i}^{\mu}\neq 0)\mathcal{P}(\eta_{i}^{\mu,a}\neq\xi_{i}^{\mu})=(1-d)( 1-r_{\mu})/2\) and, by applying the majority rule on the block, we get \(\mathcal{P}(\xi_{i}^{\mu}\neq 0)\mathcal{P}(\mathrm{sign}(\sum\limits_{a}\eta_{i}^{ \mu,a})\xi_{i}^{\mu}=-1)\underset{M\gg 1}{\approx}\frac{(1-d)}{2}\left[1-\mathrm{ erf}\left(1/\sqrt{2\rho}\right)\right]\) thus, by computing the conditional entropy \(H_{d}(\xi_{i}^{\mu}|\mathbf{\eta}_{i}^{\mu})\), that quantifies the amount of information needed to describe the original message \(\xi_{i}^{\mu}\) given the related block \(\mathbf{\eta}_{i}^{\mu}\), we get
\[H_{d}(\xi_{i}^{\mu}|\mathbf{\eta}_{i}^{\mu}) = -\left[\frac{1+d}{2}+\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\ \log\left[\frac{1+d}{2}+\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\] \[-\left[\frac{1-d}{2}-\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\ \log\left[\frac{1-d}{2}-\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\]
which is monotonically increasing with \(\rho\). Therefore, with a slight abuse of language, in the following \(\rho\) shall be referred to as _data-set entropy_.
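A quick numerical check of this monotonicity (a sketch using the expression above, natural logarithms and the standard-library `erf`):

```python
from math import erf, log, sqrt

def H_cond(rho: float, d: float) -> float:
    """Conditional entropy H_d(xi | eta-block) as written above."""
    p = (1 + d) / 2 + (1 - d) / 2 * erf(1 / sqrt(2 * rho))
    q = 1 - p
    return -(p * log(p) + q * log(q))

for rho in (0.05, 0.2, 1.0, 5.0):
    print(rho, round(H_cond(rho, d=0.2), 4))   # increases with rho
```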
The available information is allocated directly in the synaptic coupling among neurons (as in the standard Hebbian storing), as specified by the following supervised and unsupervised generalization of the multitasking Hebbian network:
**Definition 4**.: _Given \(N\) binary neurons \(\sigma_{i}=\pm 1\), with \(i\in(1,...,N)\), the cost function (or Hamiltonian) of the multitasking Hebbian neural network in the supervised regime is_
\[\mathcal{H}_{N,K,d,M,r}^{(sup)}(\mathbf{\sigma}|\mathbf{\eta})=-\frac{1}{2N}\frac{1}{ (1-d)(1+\rho)}\sum_{\mu=1}^{K}\sum_{i,j=1}^{N,N}\left(\frac{1}{Mr}\sum_{a=1}^{ M}\eta_{i}^{\mu,a}\right)\left(\frac{1}{Mr}\sum_{b=1}^{M}\eta_{j}^{\mu,b}\right) \sigma_{i}\sigma_{j}. \tag{11}\]
**Definition 5**.: _Given \(N\) binary neurons \(\sigma_{i}=\pm 1\), with \(i\in(1,...,N)\), the cost function (or Hamiltonian) of the multitasking Hebbian neural network in the unsupervised regime is_
\[\mathcal{H}_{N,K,d,M,r}^{(unsup)}(\mathbf{\sigma}|\mathbf{\eta})=-\frac{1}{2N}\frac{1}{ (1-d)(1+\rho)}\sum_{\mu=1}^{K}\sum_{i,j=1}^{N,N}\left(\frac{1}{Mr^{2}}\sum_{a =1}^{M}\eta_{i}^{\mu,a}\eta_{j}^{\mu,a}\right)\sigma_{i}\sigma_{j}. \tag{12}\]
**Remark 1**.: _The factor \((1-d)(1+\rho)\) appearing in (2.11) corresponds to \(\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sum\limits_{a}\eta_{i}^{\mu,a}/(Mr)\right]^{2}\) and it plays as a normalization factor. A similar factor is also inserted in (2.12)._
**Remark 2**.: _By direct comparison between (2.11) and (2.12), the role of the "teacher" in the supervised setting is evident: in the unsupervised scenario, the network has to handle all the available examples regardless of their archetype label, while in the supervised counterpart a teacher has previously grouped examples belonging to the same archetype together (whence the double sum on \(a=(1,...,M)\) and on \(b=(1,...,M)\) appearing in eq. (2.11), that is missing in eq. (2.12))._
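To make this difference concrete, the two kernels can be sketched as follows (our own helper names; the code builds the couplings entering Eqs. (2.11)-(2.12) so that the Hamiltonian reads \(-\frac{1}{2}\,\mathbf{\sigma}^{T}J\mathbf{\sigma}\)):

```python
import numpy as np

def couplings(eta: np.ndarray, r: float, d: float, supervised: bool) -> np.ndarray:
    """Hebbian kernels of Eq. (2.11) (supervised) and Eq. (2.12)
    (unsupervised); eta has shape (K, M, N) with entries in {-1, 0, +1}."""
    K, M, N = eta.shape
    rho = (1 - r ** 2) / (M * r ** 2)
    norm = 1.0 / (N * (1 - d) * (1 + rho))
    if supervised:
        avg = eta.mean(axis=1) / r          # teacher-averaged examples, (K, N)
        J = norm * np.einsum('ki,kj->ij', avg, avg)
    else:
        J = norm / (M * r ** 2) * np.einsum('kai,kaj->ij', eta, eta)
    np.fill_diagonal(J, 0.0)
    return J

rng = np.random.default_rng(3)
xi = rng.choice([-1, 0, 1], size=(2, 300), p=[0.4, 0.2, 0.4])      # d = 0.2
flip = rng.choice([1, -1], size=(2, 40, 300), p=[0.55, 0.45])      # r = 0.1
eta = xi[:, None, :] * flip                                        # M = 40
J_sup = couplings(eta, r=0.1, d=0.2, supervised=True)
J_uns = couplings(eta, r=0.1, d=0.2, supervised=False)
print(np.abs(J_sup - J_uns).max())   # the two kernels differ at finite M
```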
We investigate the model within a canonical framework: we introduce the Boltzmann-Gibbs measure
\[\mathcal{P}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\sigma}|\mathbf{\eta}):=\frac{ \exp[-\beta\mathcal{H}^{(sup,unsup)}_{N,K,d,M,r}(\mathbf{\sigma}|\mathbf{\eta})]}{ \mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta})}, \tag{2.13}\]
where
\[\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta}):=\sum_{\mathbf{\sigma}} \exp\left[-\beta\mathcal{H}^{(sup,unsup)}_{N,K,d,M,r}(\mathbf{\sigma}|\mathbf{\eta})\right] \tag{2.14}\]
is the normalization factor, also referred to as partition function, and the parameter \(\beta\in\mathbb{R}^{+}\), rules the broadness of the distribution in such a way that for \(\beta\to 0\) (infinite noise limit) all the \(2^{N}\) neural configurations are equally likely, while for \(\beta\to\infty\) the distribution is delta-peaked at the configurations corresponding to the minima of the Cost function.
The average performed over the Boltzmann-Gibbs measure is denoted as
\[\omega^{(sup,unsup)}_{N,K,\beta,d,M,r}[\cdot]:=\sum_{\mathbf{\sigma}}^{2^{N}}\, \cdot\,\mathcal{P}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\sigma}|\mathbf{\eta}). \tag{2.15}\]
Beyond this average, we shall also take the so-called _quenched_ average, that is the average over the realizations of archetypes and examples, namely over the distributions (2.1) and (2.9), and this is denoted as
\[\mathbb{E}[\cdot]=\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}[\cdot]. \tag{2.16}\]
**Definition 6**.: _The quenched statistical pressure of the network at finite network size \(N\) reads as_
\[\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}=\frac{1}{N}\mathbb{E}\log\mathcal{ Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta}). \tag{2.17}\]
_In the thermodynamic limit we pose_
\[\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}=\lim_{N\to\infty}\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}. \tag{2.18}\]
_We recall that the statistical pressure equals the free energy times \(-\beta\) (hence they convey the same information content)._
**Definition 7**.: _The network capabilities can be quantified by introducing the following order parameters, for \(\mu=1,\ldots,K\),_
\[m_{\mu} :=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}\sigma_{i},\] \[n_{\mu,a} :=\frac{1}{(1+\rho)r}\frac{1}{N}\sum_{i=1}^{N}\eta_{i}^{\mu,a} \sigma_{i}, \tag{2.19}\] \[n_{\mu} :=\frac{1}{M}\sum_{a=1}^{M}n_{\mu,a}=\frac{1}{(1+\rho)r}\frac{1}{ NM}\sum_{i,a=1}^{N,M}\eta_{i}^{\mu,a}\sigma_{i},\]
We stress that, beyond the fairly standard \(K\) Mattis magnetizations \(m_{\mu}\), which assess the alignment of the neural configuration \(\mathbf{\sigma}\) with the archetype \(\mathbf{\xi}^{\mu}\), we need to introduce also \(K\) empirical Mattis magnetizations \(n_{\mu}\), which compare the alignment of the neural configuration with the average of the examples labelled with \(\mu\), as well as \(K\times M\) single-example Mattis magnetizations \(n_{\mu,a}\), which measure the proximity between the neural configuration and a specific example. An intuitive way to see the suitability of the \(n_{\mu}\)'s and of the \(n_{\mu,a}\)'s is by noticing that the cost functions \(\mathcal{H}^{(sup)}\) and \(\mathcal{H}^{(unsup)}\) can be written as a quadratic form in, respectively, \(n_{\mu}\) and \(n_{\mu,a}\); on the other hand, the \(m_{\mu}\)'s do not appear therein explicitly as the archetypes are unknowns in principle.
Finally, notice that no spin-glass order parameter is needed here (since we are working in the low-storage regime [8; 25]).
## 3 Parallel Learning: the picture by statistical mechanics
### Study of the Cost function and its related Statistical Pressure
To inspect the emergent capabilities of these networks, we need to estimate the order parameters introduced in Equations (2.19) and analyze their behavior versus the control parameters \(K,\beta,d,M,r\). To this task we need an explicit expression of the statistical pressure in terms of these order parameters, so as to extremize the former over the latter. In this Section we carry out this investigation in the thermodynamic limit and in the low-storage scenario by relying upon Guerra's interpolating techniques (see e.g., [30; 17; 31; 32]): the underlying idea is to introduce an interpolating statistical pressure whose extrema are the original model (which is the target of our investigation but which we may be unable to address directly) and a simple one (usually a one-body model that we can solve exactly). We then start by evaluating the solution of the latter and next we propagate the obtained solution back to the original model by the fundamental theorem of calculus, integrating over the interpolating variable. Usually, in this last passage, one assumes replica symmetry, namely that the order-parameter fluctuations are negligible in the thermodynamic limit, as this makes the integral propagating the solution analytical. In the low-load scenario replica symmetry holds exactly, making the following calculation rigorous. In fact, as long as \(K/N\to 0\) while \(N\to\infty\), the order parameters self-average around their means [19; 46], which will be denoted by a bar, that is
\[\lim_{N\to\infty}\mathcal{P}_{N,K,\beta,d,M,r}(m_{\mu}) = \delta\left(m_{\mu}-\bar{m}_{\mu}\right),\quad\forall\mu\in(1,...,K), \tag{3.1}\] \[\lim_{N\to\infty}\mathcal{P}_{N,K,\beta,d,M,r}(n_{\mu}) = \delta\left(n_{\mu}-\bar{n}_{\mu}\right),\quad\forall\mu\in(1,...,K), \tag{3.2}\]
where \(\mathcal{P}_{N,K,\beta,d,M,r}\) denotes the Boltzmann-Gibbs probability distribution for the observables considered. We anticipate that the mean values of these distributions are independent of the training (either supervised or unsupervised) underlying the Hebbian kernel.
Before proceeding, we slightly revise the partition functions (2.14) by inserting an extra term in their exponents, as this allows us to apply the functional-generator technique to evaluate the Mattis magnetizations. This implies the following modification, respectively in the supervised and unsupervised settings, of the partition function
**Definition 8**.: _Given the interpolating parameter \(t\in[0,1]\), the auxiliary field \(J\) and the constants \(\{\psi_{\mu}\}_{\mu=1,...,K}\in\mathbb{R}\) to be set a posteriori, Guerra's interpolating partition function for the supervised
_and unsupervised multitasking Hebbian networks is given, respectively, by_
\[\mathcal{Z}^{(sup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J,t)=\sum_{\{\mathbf{ \sigma}\}}\int\,d\mu(z_{\mu})\exp\Bigg{[}J\sum_{\mu,i}\xi_{i}^{\mu}\sigma_{i}+ \frac{t\beta N(1+\rho)}{2(1-d)}\sum_{\mu}n_{\mu}^{2}(\mathbf{\sigma})+(1-t)\frac{N }{2}\sum_{\mu}\psi_{\mu}\,n_{\mu}(\mathbf{\sigma})\Bigg{]}. \tag{10}\] \[\mathcal{Z}^{(unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J,t)=\sum_{\{ \mathbf{\sigma}\}}\int\,d\mu(z_{\mu})\exp\Bigg{[}J\sum_{\mu,i}\xi_{i}^{\mu}\sigma_{ i}+\frac{t\beta N(1+\rho)}{2(1-d)M}\sum_{\mu=1}^{K}\sum_{a=1}^{M}n_{\mu,a}^{2}( \mathbf{\sigma})+(1-t)N\sum_{\mu,a}\psi_{\mu}\,n_{\mu,a}(\mathbf{\sigma})\Bigg{]}. \tag{11}\]
More precisely, we added the term \(J\sum_{\mu}\sum_{i}\xi_{i}^{\mu}\sigma_{i}\), which allows us to "generate" the expectation of the Mattis magnetization \(m_{\mu}\) by evaluating the derivative w.r.t. \(J\) of the quenched statistical pressure at \(J=0\). This operation is not necessary for _Hebbian storing_, where the Mattis magnetization is a natural order parameter (the Hopfield Hamiltonian can be written as a quadratic form in \(m_{\mu}\), as standard in AGS theory [8]), while for _Hebbian learning_ (whose cost function can be written as a quadratic form in \(n_{\mu}\), not in \(m_{\mu}\), as the network does not experience the archetypes directly) we need such a term, for otherwise the expectation of the Mattis magnetization would not be accessible. This operation becomes redundant in the \(M\to\infty\) limit, where \(m_{\mu}\) and \(n_{\mu}\) become proportional by a standard Central Limit Theorem (CLT) argument (see also Sec. 3.1.1 and [13]). Clearly, \(\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta})=\lim_{J\to 0}\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J)\) and these generalized interpolating partition functions, provided in eq.s (10) and (11) respectively, recover the original models when \(t=1\), while they return a simple one-body model at \(t=0\).
The role of the \(\psi_{\mu}\)'s is instead that of mimicking, as closely as possible, the true post-synaptic field perceived by the neurons.
These partition functions can be used to define a generalized measure and a generalized Boltzmann-Gibbs average that we indicate by \(\omega_{t}^{(sup,unsup)}[\cdot]\). Of course, when \(t=1\) the standard Boltzmann-Gibbs measure and related averages are recovered.
Analogously, we can also introduce a generalized interpolating quenched statistical pressures as
**Definition 9**.: _The interpolating statistical pressure for the multitasking Hebbian neural network is introduced as_
\[\mathcal{A}^{(sup,unsup)}_{N,K\beta,d,M,r}(J,t)\coloneqq\frac{1}{N}\mathbb{E }\left[\ln\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J,t)\right], \tag{12}\]
_and, in the thermodynamic limit,_
\[\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J,t)\coloneqq\lim_{N\to\infty} \mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}(J,t). \tag{13}\]
_Obviously, by setting \(t=1\) in the interpolating pressures we recover the original ones, namely \(\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J)=\mathcal{A}^{(sup,unsup)}_{K, \beta,d,M,r}(J,t=1)\), which we finally evaluate at \(J=0\)._
We are now ready to state the next
**Theorem 1**.: _In the thermodynamic limit (\(N\to\infty\)) and in the low-storage regime (\(K/N\to 0\)), the quenched statistical pressure of the multitasking Hebbian network - trained under supervised or unsupervised learning - reads as_
\[\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J) = \mathbb{E}\left\{\ln\left[2\cosh\left(J\sum_{\mu=1}^{K}\xi^{\mu}+\frac{\beta}{1-d}\sum_{\mu=1}^{K}\bar{n}_{\mu}\hat{\eta}^{\mu}\right)\right]\right\}-\frac{\beta}{1-d}(1+\rho)\sum_{\mu=1}^{K}\bar{n}_{\mu}^{2}. \tag{3.7}\]
_where \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\), \(\hat{\eta}^{\mu}=\frac{1}{Mr}\sum_{a=1}^{M}\eta^{\mu,a}\), and the values \(\bar{n}_{\mu}\) must fulfill the following self-consistent equations_
\[\bar{n}_{\mu}=\frac{1}{(1+\rho)}\mathbb{E}\left\{\tanh\left[\frac{\beta}{(1-d) }\sum_{\nu=1}^{K}\bar{n}_{\nu}\hat{\eta}^{\nu}\right]\hat{\eta}^{\mu}\right\},\quad\forall\mu\in(1,...,K), \tag{3.8}\]
_as these values of the order parameters are extremal for the statistical pressure \(\mathcal{A}_{K,\beta,d,M,r}^{(sup,unsup)}(J=0)\)._
**Corollary 1**.: _By considering the auxiliary field \(J\) coupled to \(m_{\mu}\) and recalling that \(\lim_{N\to\infty}m_{\mu}=\bar{m}_{\mu}\), we can write down a self-consistent equation also for the Mattis magnetization as \(\bar{m}_{\mu}=\partial_{J}\mathcal{A}_{K,\beta,d,M,r}^{(sup,unsup)}(J)_{|J=0}\), thus we have_
\[\bar{m}_{\mu}=\mathbb{E}\left\{\tanh\left[\frac{\beta}{(1-d)}\sum_{\nu=1}^{K }\bar{n}_{\nu}\hat{\eta}^{\nu}\right]\xi^{\mu}\right\},\quad\forall\mu\in(1,...,K). \tag{3.9}\]
For the proof of Theorem 1 and of Corollary 1 we refer to Appendix E.1.
We highlight that the expressions of the quenched statistical pressure for a network trained with or without the supervision of a teacher actually coincide: intuitively, this happens because we are considering only a few archetypes (i.e., we work at low load); consequently, the minima of the cost function are well separated and the teacher plays only a negligible role in shaping the landscape to avoid overlaps in their basins of attraction. Clearly, this is expected to no longer hold in the high-load setting and, indeed, it is proven not to hold for non-diluted patterns, where supervised and unsupervised protocols give rise to different outcomes [5; 13]. From a mathematical perspective, the fact that, whatever the learning procedure, the expression of the quenched statistical pressure is always the same is a consequence of standard concentration-of-measure arguments [17; 45] as, in the \(N\to\infty\) limit, beyond eq. (3.2), it also holds that \(\mathcal{P}(n_{\mu,a})\to\delta(n_{\mu,a}-\bar{n}_{\mu})\).
The self-consistent equations (3.9) have been solved numerically for several values of the parameters, and results for \(K=2\) and \(K=3\) are shown in Fig. 3 (where also the values of the cost function are reported) and Fig. 4, respectively. We also checked the validity of these results by comparing them with the outcomes of Monte Carlo simulations, finding an excellent asymptotic agreement; further, in the large-\(M\) limit, the magnetizations eventually converge to the values predicted by the theory developed in the storing framework, see eq. (2.6). Therefore, in both scenarios, the hierarchical or parallel organization of the magnetization amplitudes is recovered: beyond the numerical evidence just mentioned, an analytical proof is provided in Appendix D.
#### 3.1.1 Low-entropy data-sets: the Big-Data limit
As discussed in Sec. 2.2, the parameter \(\rho\) quantifies the amount of information needed to describe the original message \(\mathbf{\xi}^{\mu}\) given the set of related examples \(\{\mathbf{\eta}^{\mu,a}\}_{a=1,...,M}\). In this section we focus on the case \(\rho\ll 1\) that corresponds to a highly-informative data-set; we recall that in the limit \(\rho\to 0\) we get a data-set where either the items (\(r\to 1\)) or their empirical average (\(M\to\infty\), \(r\) finite) coincide with the archetypes, in such a way that the theory collapses to the standard Hopfield reference.
As explained in Appendix E.2, we start from the self-consistent equations (3.8)-(3.9) and we exploit the Central Limit Theorem to write \(\hat{\eta}^{\mu}\sim\xi^{\mu}\left(1+\lambda_{\mu}\sqrt{\rho}\right)\), where \(\lambda_{\mu}\sim\mathcal{N}(0,1)\). In this way we reach the simpler expressions given by the next
**Proposition 1**.: _In the low-entropy data-set scenario, preserving the low storage and thermodynamic limit assumptions, the two sets of order parameters of the theory, \(\bar{m}_{\mu}\) and \(\bar{n}_{\mu}\) become related by the
following equations_
\[\bar{n}_{\mu} = \frac{\bar{m}_{\mu}}{(1+\rho)}+\beta^{{}^{\prime}}\frac{\rho\,\bar{n}_{\mu}}{(1+\rho)}\mathbb{E}_{\xi,Z}\left\{\left[1-\tanh^{2}\left(g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})\right)\right](\xi^{\mu})^{2}\right\}, \tag{3.10}\] \[\bar{m}_{\mu} = \mathbb{E}_{\xi,Z}\left\{\tanh\left[g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})\right]\xi^{\mu}\right\}, \tag{3.11}\]
_where_
\[g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})=\beta^{{}^{\prime}}\sum_{ \nu=1}^{K}\bar{n}_{\nu}\xi^{\nu}+\beta^{{}^{\prime}}\,Z\sqrt{\rho\,\sum_{\nu= 1}^{K}\bar{n}_{\nu}^{2}\left(\xi^{\nu}\right)^{2}} \tag{3.12}\]
_and \(Z\sim\mathcal{N}(0,1)\) is a standard Gaussian variable. Furthermore, to lighten the notation and assuming \(d\neq 1\) with no loss of generality, we posed_
\[\beta^{{}^{\prime}}=\frac{\beta}{1-d}. \tag{3.13}\]
The regime \(\rho\ll 1\), beyond being an interesting one (e.g., it can be seen as a _big data_\(M\to\infty\) limit of the theory), offers a crucial advantage because of the above emerging proportionality relation between \(\bar{n}\) and \(\bar{m}\) (see eq. 3.10). In fact, the model is supplied only with examples - upon which the \(n_{\mu}\)'s are defined - while it is not aware of archetypes - upon which the \(m_{\mu}\)'s are defined - yet we can
Figure 3: Snapshots of the cost function (upper plots) -where we use the label \(E\) for _energy_- and of the magnetizations (lower plots) for data-sets generated by \(K=2\) archetypes at different entropies, in the noiseless limit \(\beta\to\infty\). Starting from \(\rho=0.0\), we see that the hierarchical regime (black lines) dominates at relatively mild dilution values (i.e., the energy pertaining to this configuration is lower w.r.t. the parallel regime), while for \(d\to 1\) the hierarchical ordering naturally collapses onto the parallel regime (red lines), where all the magnetizations acquire the same value. Further note how, by increasing the entropy of the data-set (e.g. for \(\rho=0.1\) and \(\rho=0.4\)), the domain of validity of the parallel regime enlarges (much as increasing \(\beta\) in the network, see Fig. 1). The vertical blue lines mark the transitions between these two regimes as captured by Statistical Mechanics: they correspond to switching from the white to the green regions of the phase diagrams of Fig. 6.
use this relation to recast the self-consistent equation for \(\bar{n}\) into a self-consistent equation for \(\bar{m}\) such that its numerical solution in the space of the control parameters allows us to get the phase diagram of such a neural network more straightforwardly.
Further, we can explicitly find the thresholds for learning, namely the minimal amount of examples (given the noise level \(r\), the number \(K\) of archetypes to handle, etc.) that guarantees that the network can safely infer the archetype from the supplied data-set. To obtain these thresholds we have to deepen the ground-state structure of the network, that is, we now handle Eqs. (3.10)-(3.11) to compute their zero fast-noise limit (\(\beta\to\infty\)). As detailed in Appendix E.2 (see Corollary 3), by taking the limit \(\beta\to\infty\) in Eqs. (3.10)-(3.11) we get
\[\bar{m}_{\mu}\,=\,\mathbb{E}_{\xi}\left\{\mathrm{erf}\left[\left(\sum_{\nu=1}^{K}\bar{m}_{\nu}\xi^{\nu}\right)\left(2\rho\sum_{\nu=1}^{K}\bar{m}_{\nu}^{2}\left(\xi^{\nu}\right)^{2}\right)^{-1/2}\right]\xi^{\mu}\right\}\,. \tag{3.14}\]
Having reached a relatively simple expression for \(\bar{m}_{\mu}\), we can manipulate it further and try to extract information about the existence of a lower-bound value for \(M\), denoted by \(M_{\otimes}\), which ensures that the network has been supplied with sufficient information to learn and retrieve the archetypes.
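Before proceeding, we note that (3.14) lends itself to a straightforward numerical treatment: at low load the average over \(\boldsymbol{\xi}\) can be computed by exact enumeration of the \(3^{K}\) entry configurations, and the equation can be solved by plain fixed-point iteration. A minimal Python sketch of this procedure (ours, with illustrative parameter values; it assumes \(\rho>0\)):

```python
import itertools, math

def solve_m(K, d, rho, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration for eq. (3.14):
    m_mu = E_xi{ erf[(sum_nu m_nu xi^nu) / sqrt(2 rho sum_nu m_nu^2 (xi^nu)^2)] xi^mu }."""
    probs = {0: d, 1: (1 - d) / 2, -1: (1 - d) / 2}   # i.i.d. archetype entries
    m = [(1 - d) * d**mu for mu in range(K)]          # hierarchical initialization
    for _ in range(max_iter):
        new = [0.0] * K
        for xi in itertools.product((-1, 0, 1), repeat=K):
            p = math.prod(probs[x] for x in xi)
            signal = sum(m[nu] * xi[nu] for nu in range(K))
            noise2 = 2 * rho * sum((m[nu] * xi[nu]) ** 2 for nu in range(K))
            # if noise2 vanishes, so does the signal (for rho > 0), hence the term is 0
            f = math.erf(signal / math.sqrt(noise2)) if noise2 > 0 else 0.0
            for mu in range(K):
                new[mu] += p * f * xi[mu]
        if max(abs(a - b) for a, b in zip(new, m)) < tol:
            break
        m = new
    return m

print(solve_m(K=3, d=0.55, rho=0.05))  # hierarchical amplitudes, cf. Fig. 3
```

Starting from the hierarchical initialization, the iteration settles, for small \(\rho\), on magnetizations close to \((1-d)(1,d,d^{2},\ldots)\), consistently with Fig. 3.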
Figure 4: _Behaviour of the Mattis magnetizations as more and more examples are supplied to the network._ Monte Carlo numerical checks (colored dots, \(N=6000\)) for a diluted network with \(r=0.1\) and \(K=3\) are in full agreement with the theory: solutions of the self-consistent equations for the Mattis magnetizations reported in Corollary 1 are shown as solid lines. As dilution increases, the network behavior departs from a Hopfield-like retrieval (\(d=0.1\)), where just the blue magnetization is raised (serial pattern recognition), to the hierarchical regime (\(d=0.25\) and \(d=0.55\)), where multiple patterns are simultaneously retrieved with different amplitudes, while for higher values of dilution the network naturally evolves toward the parallel regime (\(d=0.75\)), where all the magnetizations are raised with the same strength. Note also the asymptotic agreement with the dotted lines, whose values are those predicted by the multitasking Hebbian storage [2].
Setting \(\beta\to\infty\), we expect that the magnetizations fulfill the hierarchical organization, namely \((\bar{m}_{1},\bar{m}_{2},\ldots)=(1-d)(1,d,\ldots)\) and (3.14) becomes
\[\bar{m}_{\mu}\sim\frac{1-d}{2}\mathbb{E}_{\xi^{\nu\neq\mu}}\left\{\mathrm{erf} \left[\frac{d^{\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{\nu}\xi^{\nu}}{\sqrt{2\rho} \sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu}\left(\xi^{\nu}\right)^{2}} }\right]+\mathrm{erf}\left[\frac{d^{\mu}-\sum\limits_{\nu\neq\mu}^{K}d^{\nu} \xi^{\nu}}{\sqrt{2\rho}\sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu} \left(\xi^{\nu}\right)^{2}}}\right]\right\}\,, \tag{3.15}\]
where we highlighted that the expectation is over all the archetypes but the \(\mu\)-th one under inspection.
Next, we introduce a confidence interval, ruled by \(\Theta\), and we require that
\[\bar{m}_{\mu}>(1-d)d^{\mu-1}\mathrm{erf}\left[\Theta\right]. \tag{3.16}\]
In order to quantify the critical number of examples \(M_{\otimes}^{\mu}\) needed for a successful learning of the archetype \(\mu\) we can exploit the relation
\[\mathbb{E}_{\xi^{\nu\neq\mu}}\Big{\{}\mathrm{erf}\Big{[}f(\xi)\Big{]}\Big{\}} \geq\min_{\xi^{\nu\neq\mu}}\!\Big{\{}\mathrm{erf}\Big{[}f(\xi)\Big{]}\Big{\}}\,, \tag{3.17}\]
where in our case
\[\begin{split}\min_{\xi^{\nu\neq\mu}}\!\Big{\{}\mathrm{erf} \Big{[}f(\xi)\Big{]}\Big{\}}&=\min_{\xi^{\nu\neq\mu}}\left\{ \mathrm{erf}\left[\frac{d^{\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{\nu}\xi^{\nu}}{ \sqrt{2\rho}\sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu}\left(\xi^{\nu }\right)^{2}}}\right]+\mathrm{erf}\left[\frac{d^{\mu}-\sum\limits_{\nu\neq\mu}^{ K}d^{\nu}\xi^{\nu}}{\sqrt{2\rho}\sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu} \left(\xi^{\nu}\right)^{2}}}\right]\right\}\\ &=\,2\,\mathrm{erf}\left[\left(d^{\mu}-\sum\limits_{\nu\neq\mu}^{ K}d^{\nu}\right)\left(2\rho\sum\limits_{\nu=1}^{K}d^{2\nu}\right)^{-1/2}\right]. \end{split} \tag{3.18}\]
Thus, using the previous relation in (3.16), the following inequality must hold
\[\mathrm{erf}\left[\left(d^{\mu}-\sum\limits_{\nu\neq\mu}^{K}d^{\nu}\right) \left(2\rho\sum\limits_{\nu=1}^{K}d^{2\nu}\right)^{-1/2}\right]=\mathrm{erf} \left[\sqrt{\frac{1+d}{2\rho(1-d)}}\frac{2d^{\mu-1}-1-2d^{\mu}+d^{K}}{\sqrt{1- d^{2K}}}\right]>d^{\mu-1}\mathrm{erf}\left[\Theta\right] \tag{3.19}\]
and we can write the next
**Proposition 2**.: _In the noiseless limit \(\beta\to\infty\), the critical threshold for learning \(M_{\otimes}\) (in the number of required examples) depends on the data-set noise \(r\), the dilution \(d\), the amount of archetypes to handle \(K\) (and of course on the amplitude of the chosen confidence interval \(\Theta\)) and reads as_
\[M_{\otimes}^{\mu}(\Theta,r,d,K)>2\left(\mathrm{erf}^{-1}\left[d^{\mu-1} \mathrm{erf}\left[\Theta\right]\right]\right)^{2}\left(\frac{1-r^{2}}{r^{2}} \right)\frac{(1-d)(1-d^{2K})}{(1+d)(2d^{\mu-1}-1-2d^{\mu}+d^{K})^{2}} \tag{3.20}\]
_and in the plots (see Figure 5) we use \(\Theta=1/\sqrt{2}\) as this choice corresponds to the fairly standard condition \(\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}[\xi_{i}^{1}h_{i}^{(1)}(\boldsymbol{ \xi}^{1})]>\sqrt{\mathrm{Var}[\xi_{i}^{1}h_{i}^{(1)}(\boldsymbol{\xi}^{1})]}\) when \(\mu=1\)._
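Since the bound (3.20) is fully explicit, it can be evaluated directly; a one-function Python transcription (illustrative, relying on SciPy's error-function routines):

```python
from scipy.special import erf, erfinv

def M_threshold(mu, Theta, r, d, K):
    """Right-hand side of eq. (3.20): minimal number of examples required
    to learn archetype mu at confidence Theta, data-set noise r, dilution d."""
    prefactor = 2 * erfinv(d ** (mu - 1) * erf(Theta)) ** 2 * (1 - r**2) / r**2
    return prefactor * (1 - d) * (1 - d ** (2 * K)) \
        / ((1 + d) * (2 * d ** (mu - 1) - 1 - 2 * d**mu + d**K) ** 2)

# Theta = 1/sqrt(2) as in Proposition 2; note the blow-up as d -> d_c(K)
print(M_threshold(mu=1, Theta=2**-0.5, r=0.1, d=0.2, K=3))
```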
To quantify these thresholds for learning, in Fig. 5 we report the required number of examples to learn the first archetype (out of \(K=2,3,4,5\) as shown in the various panels) as a function of the dilution of the network.
#### 3.1.2 Ergodicity breaking: the critical phase transition
The main interest in the statistical mechanical approach to neural networks lies in inspecting their emergent capabilities, which typically appear once ergodicity gets broken: as a consequence, finding the boundaries of the ergodic region is a classical starting point to deepen these aspects.
To this task, hereafter we provide a systematic fluctuation analysis of the order parameters: the underlying idea is to check when, starting from the high noise limit (\(\beta\to 0\), where everything is uncorrelated and simple Probability arguments apply straightforwardly), these fluctuations diverge, as that defines the onset of ergodicity breaking as stated in the next
**Theorem 2**.: _The ergodic region, in the space of the control parameters \((\beta,d,\rho)\) is confined to the half-plane defined by the critical line_
\[\beta_{c}=\frac{1}{1-d}, \tag{3.21}\]
_whatever the entropy of the data-set \(\rho\)._
Proof.: The idea of the proof is the same we used so far, namely Guerra interpolation, but applied to the rescaled fluctuations rather than directly to the statistical pressure.
Figure 5: We plot the logarithm of the critical number of examples (required to raise the first magnetization) \(M_{\otimes}^{1}\) at different loads \(K=2,3,4,5\) and as a function of the dilution of the network, for different noise values of the data-set (as shown in the legend). Note the divergent behavior of \(M_{\otimes}^{1}\) when approaching the critical dilution level \(d_{c}(K)=d_{1}\), as predicted by the parallel Hebbian storage limit [2; 4]: this is the crossover between the two multi-tasking regimes, hierarchical vs parallel, hence, solely at the dilution value \(d_{1}\), there is no sharp behavior to infer and, correctly, the network cannot accomplish learning. This is apparent from (3.20), where the critical amount of examples needed to correctly infer the archetype is reported: its denominator reduces, for \(\mu=1\), to \(1-2d+d^{K}\), which vanishes as \(d\to d_{1}\).
The rescaled fluctuations \(\tilde{n}_{\nu}\) of the magnetizations are defined as
\[\tilde{n}_{\nu}=\sqrt{N}(n_{\nu}-\bar{n}_{\nu}). \tag{3.22}\]
We recall that the interpolating framework we are using, for \(t\in(0,1)\), is defined via
\[Z(t)=\sum_{\{\sigma\}}\exp\left[\frac{\beta}{2}tN(1+\rho)\sum_{\mu=1}^{K}n_{\mu}^{2}+N(1-t)\beta(1+\rho)\sum_{\mu=1}^{K}N_{\mu}n_{\mu}\right], \tag{3.23}\]
and it is a trivial exercise to show that, for any smooth function \(F(\sigma)\) the following relation holds:
\[\frac{d\langle F\rangle}{dt}=\frac{\beta}{2}(1+\rho)\left(\langle F\sum_{\nu} \tilde{n}_{\nu}^{2}\rangle-\langle F\rangle\langle\sum_{\nu}\tilde{n}_{\nu}^{ 2}\rangle\right), \tag{3.24}\]
such that by choosing \(F=\tilde{n}_{\mu}^{2}\) we can write
\[\begin{split}\frac{d\langle\tilde{n}_{\mu}^{2}\rangle}{dt}& =\frac{\beta}{2}(1+\rho)\left(\langle\tilde{n}_{\mu}^{2}\sum_{\nu} \tilde{n}_{\nu}^{2}\rangle-\langle\tilde{n}_{\mu}^{2}\rangle\langle\sum_{\nu }\tilde{n}_{\nu}^{2}\rangle\right)\\ &=\frac{\beta}{2}(1+\rho)\left(\langle\tilde{n}_{\mu}^{4}\rangle+ \langle\bar{n}_{\mu}^{2}\sum_{\nu\neq\mu}\tilde{n}_{\nu}^{2}\rangle-\langle \tilde{n}_{\mu}^{2}\rangle^{2}-\langle\tilde{n}_{\mu}^{2}\rangle\langle\sum_{ \nu\neq\mu}\tilde{n}_{\nu}^{2}\rangle\right)\\ &=\beta(1+\rho)\langle\tilde{n}_{\mu}^{2}\rangle^{2}\end{split} \tag{3.25}\]
thus we have
\[\langle\tilde{n}_{\mu}^{2}\rangle_{t}=\frac{\langle\tilde{n}_{\mu}^{2}\rangle _{t=0}}{1-t\beta(1+\rho)\langle\tilde{n}_{\mu}^{2}\rangle_{t=0}} \tag{3.26}\]
where the Cauchy condition \(\langle\tilde{n}_{\mu}^{2}\rangle_{t=0}\) reads
\[\begin{split}\langle\tilde{n}_{\mu}^{2}\rangle_{t=0}&=\,N\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{\sum_{\{\sigma\}}\left(\frac{1}{N^{2}(1+\rho)^{2}}\sum_{i,j}\hat{\eta}_{i}^{\mu}\hat{\eta}_{j}^{\mu}\sigma_{i}\sigma_{j}+N_{\mu}^{2}-\frac{2}{N(1+\rho)}\sum_{i}\hat{\eta}_{i}^{\mu}\sigma_{i}\,N_{\mu}\right)\exp\left[\beta\sum_{\nu}N_{\nu}\sum_{i}\hat{\eta}_{i}^{\nu}\sigma_{i}\right]}{\sum_{\{\sigma\}}\exp\left[\beta\sum_{\nu}N_{\nu}\sum_{i}\hat{\eta}_{i}^{\nu}\sigma_{i}\right]}\\ &=\,\frac{1-d}{(1+\rho)}-N_{\mu}^{2}.\end{split} \tag{3.27}\]
Evaluating \(\langle\tilde{n}_{\mu}^{2}\rangle_{t}\) for \(t=1\), that is when the interpolation scheme collapses to the Statistical Mechanics, we finally get
\[\langle\tilde{n}_{\mu}^{2}\rangle_{t=1}=\frac{1-d-(1+\rho)N_{\mu}^{2}}{[1- \beta\left(1-d-(1+\rho)N_{\mu}^{2}\right)]} \tag{3.28}\]
namely the rescaled fluctuations are described by a meromorphic function whose pole is
\[\beta=\frac{1}{\left(1-d-(1+\rho)N_{\mu}^{2}\right)}\ \xrightarrow{N_{\mu}=0}\ \beta_{c}=\frac{1}{1-d}, \tag{3.29}\]
that is the critical line reported in the statement of the theorem.
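As an independent cross-check of this step, the closed equation (3.25), with its Cauchy datum, is solved by (3.26); this can be verified symbolically, e.g. with SymPy (here \(c\) stands for \(\beta(1+\rho)\)):

```python
import sympy as sp

t = sp.symbols('t')
c, y0 = sp.symbols('c y0', positive=True)  # c plays the role of beta*(1+rho)
y = sp.Function('y')

# d<y>/dt = c <y>^2 with Cauchy condition y(0) = y0, cf. (3.25)
sol = sp.dsolve(sp.Eq(y(t).diff(t), c * y(t) ** 2), y(t), ics={y(0): y0})
print(sp.simplify(sol.rhs - y0 / (1 - c * t * y0)))  # 0, i.e. eq. (3.26)
```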
### 3.2 Stability analysis via standard Hessian: the phase diagram
The set of solutions of the self-consistent equations (3.10) for the order parameters describes a plethora of candidate states whose stability must be investigated to understand which solution is preferred as the control parameters are varied: this procedure results in the phase diagrams of the network, namely plots in the space of the control parameters where different regions pertain to different macroscopic computational capabilities.
Remembering that \(A_{K,\beta,d,M,r}(\bar{\mathbf{n}})=-\beta f_{K,\beta,d,M,r}(\bar{\mathbf{n}})\) (where \(f_{K,\beta,d,M,r}(\bar{\mathbf{n}})\) is the free energy of the model), in order to evaluate the stability of these solutions we need to check the sign of the second derivatives of the free energy. More precisely, we build up the Hessian, a matrix \(\mathbf{A}\) whose elements are
\[\frac{\partial^{2}f(\bar{\mathbf{n}})}{\partial n^{\mu}\partial n^{\nu}}=A^{\mu\nu}\,. \tag{3.30}\]
Then, we evaluate and diagonalize \(\mathbf{A}\) at a point \(\tilde{\mathbf{n}}\), representing a particular solution of the self-consistency equation (3.10): the numerical results are reported in the phase diagrams provided in Fig.6.
We find straightforwardly
\[A^{\mu\nu}=(1+\rho)\left[[1-\beta(1-d)]+\rho\beta\mathbb{E}\left\{\mathcal{T} _{K\beta,\rho}^{2}(\bar{\mathbf{n}},z)(\xi^{\mu})^{2}\right\}\right]\delta^{\mu \nu}+Q^{\mu\nu} \tag{3.31}\]
where we set \(\mathcal{T}_{K\beta,\rho}(\bar{\mathbf{n}},z)=\tanh\left(\beta\sum_{\lambda=1}^{K} \bar{n}_{\lambda}\xi^{\lambda}+z\beta\sqrt{\rho\sum_{\lambda=1}^{K}(\bar{n}_{ \lambda}\xi^{\lambda})^{2}}\right)\) and
\[\begin{array}{rl}Q^{\mu\nu}&=\,\beta\mathbb{E}\left\{\left[\mathcal{T}_{K\beta,\rho}^{2}(\bar{\mathbf{n}},z)\right]\xi^{\mu}\xi^{\nu}\right\}(1-\delta^{\mu\nu})+2\rho\beta^{2}\mathbb{E}\left\{\left[\mathcal{T}_{K\beta,\rho}(\bar{\mathbf{n}},z)\right]\left[1-\mathcal{T}_{K\beta,\rho}^{2}(\bar{\mathbf{n}},z)\right]\left[\bar{n}_{\nu}\xi^{\nu}+\bar{n}_{\mu}\xi^{\mu}\right]\xi^{\mu}\xi^{\nu}\right\}\\ &\,+2\rho^{2}\beta^{3}\,\bar{n}_{\mu}\bar{n}_{\nu}\,\mathbb{E}\left\{\left[1-3\mathcal{T}_{K\beta,\rho}^{2}(\bar{\mathbf{n}},z)\right]\left[1-\mathcal{T}_{K\beta,\rho}^{2}(\bar{\mathbf{n}},z)\right](\xi^{\mu}\xi^{\nu})^{2}\right\}\,,\end{array} \tag{3.32}\]
#### 3.2.2 Pure state: \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}(1,0,\ldots,0)\)

In this case the hyperbolic tangent specializes to \({\cal T}=\tanh\left[\beta\bar{n}\xi^{\mu}(1+z\sqrt{\rho})\right]\) and it is easy to check that \(\boldsymbol{A}\) becomes diagonal, with
\[A^{\mu\mu} = (1+\rho)\Big{[}1-\beta(1-d)+\beta(1-d)\,\mathbb{E}\left\{{\cal T}^{ 2}\right\}\Big{]}\] \[+4\beta^{2}\rho\bar{n}(1-d)\mathbb{E}\left\{{\cal T}\Big{[}1-{ \cal T}^{2}\Big{]}\right\}+2\beta^{3}\rho^{2}\bar{n}^{2}(1-d)\mathbb{E}\left\{ \Big{[}1-3{\cal T}^{2}\Big{]}\Big{[}1-{\cal T}^{2}\Big{]}\right\}\,,\] \[A^{\nu\nu\neq\mu} = (1+\rho)\Big{[}1-\beta(1-d)+\beta(1-d)^{2}\,\mathbb{E}\left\{{ \cal T}^{2}\right\}\Big{]}\,.\]
Notice that these eigenvalues do not depend on \(K\) since \({\cal T}\) does not depend on \(K\). Requiring positivity of all the eigenvalues, we get the region in the plane \((d,\beta^{-1})\) where the pure state is stable: this corresponds to the light-blue region in the phase diagrams reported in Fig. 6.
We stress that these pure state solutions, namely the standard Hopfield-type ones, in the ground state (\(\beta^{-1}\to 0\)) are never stable whenever \(d\neq 0\) as the multi-tasking setting prevails. Solely at positive
Figure 6: Phase diagram in the dilution-noise (\(d-\beta^{-1}\)) plane for different values of \(K\) and \(\rho\). We highlight that different regions –marked with different colors– represent different operational behavior of the network: in yellow the ergodic solution, in light-blue the pure state solution (that is, solely one magnetization different from zero), in white the hierarchical regime (that is, several magnetizations differ from zero and they all assume different values) and in light-green the parallel regime (several magnetization differ from zero but their amplitude is the same for all).
values of \(\beta^{-1}\), this single-pattern retrieval state is possible, as the role of the noise is to destabilize the weakest magnetizations of the hierarchical displacement (_vide infra_).
#### 3.2.3 Parallel state: \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}(1,\ldots,1)\)
In this case the structure of the solution has the form of a symmetric mixture state, corresponding to a unique self-consistency equation for all \(\mu=1,\ldots,K\), namely
\[\bar{n}\,=\,\frac{\mathbb{E}_{\xi,Z}\left\{\tanh\left[g(\beta,\boldsymbol{\xi},Z,\bar{n})\right]\xi^{\mu}\right\}}{(1+\rho)}+\beta\frac{\rho\,\bar{n}}{(1+\rho)}\mathbb{E}_{\xi,Z}\left\{\left[1-\tanh^{2}\left(g(\beta,\boldsymbol{\xi},Z,\bar{n})\right)\right](\xi^{\mu})^{2}\right\}, \tag{3.37}\]
where
\[g(\beta,\boldsymbol{\xi},Z,\bar{n})=\beta\bar{n}\left[\sum_{\lambda=1}^{K}\xi^{\lambda}+Z\sqrt{\rho\,\sum_{\lambda=1}^{K}\left(\xi^{\lambda}\right)^{2}}\right]. \tag{3.38}\]
In this case, the diagonal terms of \(\boldsymbol{A}\) are
\[\begin{array}{rcl}a=A^{\mu\mu}\,=&\Big{[}1-\beta(1-d)+\beta\,\mathbb{E}\left \{\left[\mathcal{T}^{2}\right](\xi^{\mu})^{2}\right\}\Big{]}(1+\rho)\\ \\ &&+4\beta^{2}\rho\bar{n}\mathbb{E}\left\{\mathcal{T}\Big{[}1-\mathcal{T}^{2} \Big{]}\xi^{\mu}\right\}+2\beta^{3}\rho^{2}\bar{n}^{2}\mathbb{E}\left\{\Big{[} 1-3\mathcal{T}^{2}\Big{]}\Big{[}1-\mathcal{T}^{2}\Big{]}(\xi^{\mu})^{2} \right\}\,,\end{array} \tag{3.39}\]
instead the off-diagonal ones are
\[b=A^{\mu\nu\neq\mu}\,=\,\beta\mathbb{E}\left\{\left[\mathcal{T}^{2}\right] \xi^{\mu}\xi^{\nu}\right\}+2\rho^{2}\beta^{3}\bar{n}^{2}\mathbb{E}\left\{\left[ 1-3\mathcal{T}^{2}\right]\left[1-\mathcal{T}^{2}\right](\xi^{\mu}\xi^{\nu})^{ 2}\right\}\,. \tag{3.40}\]
In general, the matrix \(\boldsymbol{A}\) has the following structure
\[\boldsymbol{A}=\begin{pmatrix}a&b&\cdots&b&b\\ b&a&\cdots&b&b\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ b&b&\cdots&a&b\\ b&b&\cdots&b&a\end{pmatrix} \tag{3.41}\]
This matrix has only two distinct eigenvalues, namely \(a-b\) (with multiplicity \(K-1\)) and \(a+(K-1)b\); thus, for the stability of the parallel state, after computing (3.39) and (3.40), we only have to check for which points in the \((d,\beta^{-1})\) plane both \(a-b\) and \(a+(K-1)b\) are positive. The region of the phase diagrams of Fig. 6 where the parallel regime is stable is depicted in green.
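This spectral structure (the matrix being \((a-b)\) times the identity plus \(b\) times the all-ones matrix) is elementary and can also be checked numerically in a few lines, with illustrative values of \(a\) and \(b\):

```python
import numpy as np

K, a, b = 5, 1.3, -0.2
A = (a - b) * np.eye(K) + b * np.ones((K, K))        # a on the diagonal, b elsewhere
eig = np.linalg.eigvalsh(A)
expected = sorted([a - b] * (K - 1) + [a + (K - 1) * b])
assert np.allclose(np.sort(eig), expected)           # a-b (K-1 times), a+(K-1)b (once)
```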
#### 3.2.4 Hierarchical state: \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}((1-d),d(1-d),d^{2}(1-d),\ldots)\)
In this case the solution has the hierarchical form \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}((1-d),d(1-d),d^{2}(1-d),\ldots)\), and the region left untreated so far in the phase diagram, namely the white region in the plots of Fig. 6, is the room left to this hierarchical regime.
### 3.3 From the Cost function to the Loss function
We finally comment on the persistence -in the present approach- of quantifiers related to the evaluation of the pattern-recognition capabilities of neural networks, i.e. the Mattis magnetizations, also as quantifiers of a good learning process. The standard Cost functions used in the Statistical Mechanics of neural networks (e.g., the Hamiltonians) can be related one-to-one to standard Loss functions used in Machine Learning (i.e. the squared-sum error functions): once introduced the two Loss functions \(L_{\mu}^{+}:=(1/2N)||\xi^{\mu}-\sigma||^{2}=1-m_{\mu}\) and \(L_{\mu}^{-}:=(1/2N)||\xi^{\mu}+\sigma||^{2}=1+m_{\mu}\)5, it is immediate to show that
Footnote 5: Note that in the last passage we naturally highlighted the presence of the Mattis magnetization in these Loss functions.
\[H(\mathbf{\sigma}|\mathbf{\xi})=\frac{-1}{2N}\sum_{i,j}^{N,N}\sum_{\mu}^{K}\xi_{i}^{\mu}\xi_{j}^{\mu}\sigma_{i}\sigma_{j}\equiv-\frac{N}{2}\sum_{\mu}^{K}\left(1-L_{\mu}^{+}\cdot L_{\mu}^{-}\right),\]
thus minimizing the former implies minimizing the latter, such that, if we extremize w.r.t. the neurons, we are performing machine retrieval (i.e. pattern recognition), while, if we extremize w.r.t. the weights, we perform machine learning: indeed, at least in this setting, learning and retrieval are two faces of the same coin (clearly the task here, from a machine learning perspective, is rather simple, as the network is just asked to correctly classify the examples and possibly generalize).
In Fig. 7 we inspect what happens to these Loss functions -pertaining to the various archetypes- as the Cost function gets minimized: we see that, at difference with the standard Hopfield model (where solely one Loss function at a time diminishes its value), in this parallel-learning setting several Loss functions (related to different archetypes) are simultaneously lowered, as expected for a parallel learning machine.
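This parallel lowering is easy to reproduce numerically: the following minimal sketch (ours; sizes and dilution are illustrative) builds the hierarchical configuration from sampled archetypes and evaluates the Mattis magnetizations together with \(L^{+}_{\mu}\), whose saturation levels \(1-d/2-(1-d)d^{\mu-1}\) are those quoted in the caption of Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 10_000, 3, 0.55

# archetypes: entries are 0 w.p. d and +-1 w.p. (1-d)/2
xi = rng.choice([-1, 0, 1], p=[(1 - d) / 2, d, (1 - d) / 2], size=(K, N))

# hierarchical composition: take xi^1 where informative, fill its blanks
# with xi^2, then xi^3, ...; residual blanks are set at random
sigma = np.zeros(N)
for mu in range(K):
    mask = (sigma == 0) & (xi[mu] != 0)
    sigma[mask] = xi[mu][mask]
sigma[sigma == 0] = rng.choice([-1, 1], size=int((sigma == 0).sum()))

m = xi @ sigma / N                                   # Mattis magnetizations
L_plus = ((xi - sigma) ** 2).sum(axis=1) / (2 * N)   # ~ 1 - d/2 - m_mu
for mu in range(K):
    print(f"m_{mu+1} = {m[mu]:+.3f}   L+_{mu+1} = {L_plus[mu]:.3f}")
```

All \(K\) Loss functions end below the blind value \(1-d/2\), one for each archetype encoded in the configuration.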
## 4 Conclusions
Since the AGS milestones on Hebbian learning dated 1985 [9; 10], namely the first comprehensive statistical-mechanical theory of the Hopfield model for pattern recognition and associative memory, attractor neural networks have experienced an unprecedented growth, and the bulk of techniques developed for spin glasses in these four decades (e.g. the replica trick, the cavity method, message passing,
Figure 7: Left: Parallel minimization of several (mean-square-error) Loss functions \(L_{\pm}=||\xi^{\mu}\pm\sigma||^{2}\) (each pertaining to a different archetype) as the noise \(r\) in the data-set is varied. Here: \(M=25\), \(N=10000\). The horizontal gray dashed lines are the saturation levels of the Loss functions, namely \(1-\frac{d}{2}-(1-d)d^{\mu-1}\). We get \(r_{\otimes}\) (the vertical black line) by inverting (3.20). Right: Parallel minimization of several (mean-square-error) Loss functions \(L_{\pm}=||\xi^{\mu}\pm\sigma||^{2}\) (each pertaining to a different archetype) as the data-set size \(M\) is varied: as \(M\) grows, the simultaneous minimization of more than one Loss function takes place, at difference with learning via standard Hebbian mechanisms, where a single Loss function -dedicated to a single archetype- is minimized at a time. Orange and blue lines pertain to Loss functions of other patterns that, at these levels of dilution and noise, cannot be minimized at once with the previous ones.
interpolation) now acts as a prosperous cornucopia for explaining the emergent information-processing capabilities that these networks show as their control parameters are made to vary.
In these regards, it is important to stress how it is nowadays mandatory to optimize AI protocols (as machine learning for complex structured data-sets is still prohibitively expensive in terms of energy consumption [38]) and, en route toward a Sustainable AI (SAI), statistical mechanics may still pave a main theoretical strand: in particular, the knowledge of the _phase diagrams_ related to a given neural architecture (the ultimate output of the statistical mechanical approach) allows one to set "a priori" the machine in the optimal working regime for a given task, thus unveiling a pivotal role of this methodology even for a conscious usage of AI (e.g. it is useless to force a standard Hopfield model beyond its critical storage capacity)6.
Footnote 6: Further, still searching for optimization of resources, such a theoretical approach can also be used to inspect key computational shortcuts (as, e.g., the early-stopping criteria faced in Appendix C.1, the role of the size of the mini-batch used for training [40], or the flat minima emerging after training [15]).
Focusing on Hebbian learning, however, while the original AGS theory remains a solid pillar and a paradigmatic reference in the field, several extensions are required to keep it up to date with modern challenges: the first generalization we need is to move from a setting where the machine stores already-defined patterns (as in the standard Hopfield model) toward a more realistic learning procedure where these patterns are unknown and have to be inferred from examples: the Hebbian storage rule of AGS theory quite naturally generalizes toward both supervised and unsupervised learning prescriptions [5, 13]. This enlarges the space of the control parameters from \(\alpha,\beta\) (or \(K\), \(N\), \(\beta\)) of the standard Hopfield model toward \(\alpha,\beta,\rho\) (or \(K\), \(N\), \(\beta\), \(M\), \(r\)), as we now also deal with a data-set providing \(M\) examples of mean quality \(r\) for each pattern (archetype) or, equivalently, a data-set produced at given entropy \(\rho\).
Once this is accomplished, the second strong limitation of the original AGS theory that must be relaxed is that patterns share the same length which, in particular, equals the size of the network (namely, in the standard Hopfield model there are \(N\) neurons handling patterns whose length is exactly \(N\) for all of them): a more general scenario is provided by dealing with patterns that contain different amounts of information, that is, diluted patterns. Retrieval capabilities of the Hebbian setting at work with diluted patterns have been extensively investigated in the last decade [2, 3, 23, 24, 28, 35, 42, 44, 47] and it has been understood how, dealing with patterns containing sparse entries, the network automatically becomes able to handle several of them in parallel (a key property of neural networks that is not captured by standard AGS theory). However, the parallel learning of diluted patterns had not been addressed in the Literature, and in this paper we face this problem, confining this first study to the low storage regime, that is, when the number of patterns scales at most logarithmically with the size of the network. Note that this further enlarges the space of the control parameters by introducing the dilution \(d\): we have several control parameters because the network's information-processing capabilities are enriched w.r.t. the bare Hopfield reference7.
Footnote 7: However, we clarify that it would be inappropriate to speak about structural differences between the standard Hopfield model and the present multitasking counterpart: ultimately, these huge behavioral differences are just consequences of the different nature of the data-sets provided to the same Hebbian network during training.
We have shown here that if we supply the network with a data-set equipped with dilution, namely a sparse data-set whose patterns contain -on average- a fraction \(d\) of blank entries (whose value is 0) and, thus, a fraction \((1-d)\) of informative entries (whose values can be \(\pm 1\)), then the network spontaneously undergoes parallel learning and behaves as a multitasking associative memory able to learn, store and retrieve multiple patterns in parallel. Further, focusing on neurons, the Hamiltonian of the model plays as the Cost function for the neural dynamics; moving the attention to the learning perspective, we have shown how the latter is one-to-one related to the standard (mean-square-error) Loss function in Machine Learning, and this proved crucial to show that, by experiencing a diluted data-set, the network lowers in parallel several Loss functions (one for each pattern that it is learning from the experienced examples).
For mild values of dilution, the preferred displacement of the Mattis magnetizations is a _hierarchical ordering_, namely the intensities of these signals scale as power laws w.r.t. their information content, \(m_{\mu}\sim(1-d)\,d^{\mu-1}\), while at high values of dilution a _parallel ordering_, where all these amplitudes collapse onto the same value, prevails: the phase diagrams of these networks properly capture these different working regions.
Remarkably, confined to the low storage regime (where glassy phenomena can be neglected), the presence (or the lack) of a teacher does not alter the above scenario, and the threshold for secure learning, namely the minimal required amount of examples \(M_{\otimes}\) (given the constraints, that is, the noise \(r\) in the data-set, the number \(K\) of different archetypes to cope with, etc.) that guarantees that the network is able to infer the archetype and thus generalize, is the same for supervised and unsupervised protocols, and its value has been explicitly calculated: this is another key point toward sustainable AI. Clearly, there is still a long way to go before a full statistical-mechanical theory of extensive parallel processing is ready, yet this paper acts as a first step in this direction and we plan to report further steps in the near future.
## Appendix A A more general sampling scenario
The way in which we add noise over the archetypes to generate the data-set in the main text (see eq. (9)) is a rather peculiar one as, in each example, it preserves not only the number but also the positions of the lacunae already present in the related archetype. This implies that the noise cannot affect the amplitudes of the original signal, i.e. \(\sum_{i}(\eta^{\mu,a}_{i})^{2}=\sum_{i}(\xi^{\mu}_{i})^{2}\) holds for any \(a\) and \(\mu\), while we do expect that, with more general kinds of noise acting also on the diluted entries, this property is no longer sharply preserved.
Here we consider the case where the number of blank entries present in \(\mathbf{\xi}^{\mu}\) is preserved only on average in the related sample \(\{\eta^{\mu,a}\}_{a=1,\dots,M}\), that is, lacunae can move along the examples: this more realistic kind of noise gives rise to cumbersome (yet still analytically treatable) calculations, but it should not heavily affect the learning, storing and retrieving capabilities of these networks (as we now prove).
Specifically, here we define the new kind of examples \(\tilde{\eta}^{\mu,a}_{i}\) (which we distinguish from the previous ones \(\eta^{\mu,a}_{i}\) by labeling them with a tilde) in the following way
**Definition 10**.: _Given \(K\) random patterns \(\mathbf{\xi}^{\mu}\) (\(\mu=1,...,K\)), each of length \(N\), whose entries are i.i.d. from_
\[\mathbb{P}(\xi^{\mu}_{i})=\frac{(1-d)}{2}\delta_{\xi^{\mu}_{i},-1}+\frac{(1-d)}{2}\delta_{\xi^{\mu}_{i},+1}+d\delta_{\xi^{\mu}_{i},0}, \tag{A.1}\]
_we use these archetypes to generate \(M\times K\) different examples \(\{\tilde{\eta}^{\mu,a}_{i}\}^{a=1,\dots,M}\) whose entries are depicted following_
\[\begin{split}&\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}|\xi^{\mu}_{i}=\pm 1)=A_{\pm}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},\xi^{\mu}_{i}}+B_{\pm}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},-\xi^{\mu}_{i}}+C_{\pm}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},0}\\ &\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}|\xi^{\mu}_{i}=0)=A_{0}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},0}+B_{0}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},+1}+C_{0}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},-1}\end{split} \tag{A.2}\]
_for \(i=1,\dots,N\) and \(\mu=1,\dots,K\), where we pose_
\[\begin{split}& A_{\pm}(r,s)=\frac{1+r}{2}\left[1-\frac{d}{1-d}(1-s)\right]+\frac{d(1-s)(1-r)}{4(1-d)}\,,\qquad A_{0}(r,s)=\frac{1+s}{2}\,,\\ & B_{\pm}(r,s)=\frac{1-r}{2}\left[1-\frac{d}{1-d}(1-s)\right]+\frac{d(1-s)(1+r)}{4(1-d)}\,,\qquad B_{0}(r,s)=\frac{1-s}{4}\,,\\ & C_{\pm}(r,s)=\frac{d}{2(1-d)}(1-s)\,,\qquad C_{0}(r,s)=\frac{1-s}{4}\,,\end{split} \tag{A.3}\]
_with \(r,s\in[0;1]\) (whose meaning we specify soon,_ vide infra_)._
Equation (A.2) codes for the new noise; the values of the coefficients presented in (A.3) have been chosen so that all the examples contain, on average, the same fraction \(d\) of null entries as the original archetypes. To see this, it is enough to check that the following relation holds for each \(a=1,\dots,M\), \(i=1,\dots,N\) and \(\mu=1,\dots,K\)
\[\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}=0)=\sum_{x\in\{-1,0,1\}}\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}=0|\xi^{\mu}_{i}=x)\mathbb{P}(\xi^{\mu}_{i}=x)=C_{\pm}(r,s)(1-d)+A_{0}(r,s)d=d\,. \tag{A.4}\]
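Both requirements on the coefficients (A.3), namely the normalization of the conditional probabilities and the identity (A.4), can be verified symbolically; a quick SymPy check:

```python
import sympy as sp

r, s, d = sp.symbols('r s d', positive=True)

Apm = (1 + r) / 2 * (1 - d / (1 - d) * (1 - s)) + d * (1 - s) * (1 - r) / (4 * (1 - d))
Bpm = (1 - r) / 2 * (1 - d / (1 - d) * (1 - s)) + d * (1 - s) * (1 + r) / (4 * (1 - d))
Cpm = d / (2 * (1 - d)) * (1 - s)
A0 = (1 + s) / 2

print(sp.simplify(Apm + Bpm + Cpm - 1))          # 0: probabilities are normalized
print(sp.simplify(Cpm * (1 - d) + A0 * d - d))   # 0: eq. (A.4) holds identically
```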
Having defined the data-set, the Cost function follows straightforwardly in the Hebbian setting as
**Definition 11**.: _Once introduced \(N\) Ising neurons \(\sigma_{i}=\pm 1\) (\(i=1,...,N\)) and the data-set considered in the definition above, the Cost function of the multitasking Hebbian network equipped with not-preserving-dilution noise reads as_
\[\mathcal{H}^{(sup)}_{N,K,M,r,s,d}(\mathbf{\sigma}|\tilde{\mathbf{\eta}})=-\frac{1}{N}\frac{1}{(1-d)(1+\tilde{\rho})}\sum_{\mu=1}^{K}\sum_{i,j=1}^{N,N}\left(\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}^{\mu,a}_{i}\right)\left(\frac{1}{\tilde{r}M}\sum_{b=1}^{M}\tilde{\eta}^{\mu,b}_{j}\right)\sigma_{i}\sigma_{j}, \tag{A.5}\]
_where_
\[\tilde{r}=\frac{r}{(1-d)}\left[1-\frac{d}{2}(5-3s)\right]\] (A.6)
_and \(\tilde{\rho}\) is the generalization of the data-set entropy, defined as:_
\[\tilde{\rho}=\frac{1-\tilde{r}^{2}}{M\tilde{r}^{2}}\,.\] (A.7)
**Definition 12**.: _The suitably re-normalized example's magnetizations \(n_{\mu}\) read as_
\[n_{\mu}:=\frac{1}{(1+\tilde{\rho})}\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{ \tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}_{i}^{\mu,a}\right)\sigma_{i}\,.\] (A.8)
En route toward the statistical pressure, still preserving Guerra's interpolation as the underlying technique, we give the next
**Definition 13**.: _Once introduced the noise \(\beta\in\mathbb{R}^{+}\), an interpolating parameter \(t\in(0,1)\), the \(K+1\) auxiliary fields \(J\) and \(\psi_{\mu}\) (\(\mu\in(1,...,K)\)), the interpolating partition function related to the model defined by the Cost function (A.5) reads as_
\[\mathcal{Z}^{(sup)}_{\beta,N,K,M,r,s,d}(\boldsymbol{\xi},\boldsymbol{\tilde{ \eta}};J,t)=\sum_{\{\boldsymbol{\sigma}\}}\exp\Bigg{[}\ J\sum_{\mu,i=1}^{K,N} \xi_{i}^{\mu}\sigma_{i}+t\beta N\frac{(1+\tilde{\rho})}{2(1-d)}\sum_{\mu=1}^{ K}n_{\mu}^{2}(\boldsymbol{\sigma})+(1-t)N\sum_{\mu=1}^{K}\psi_{\mu}\,n_{\mu}( \boldsymbol{\sigma})\Bigg{]}.\] (A.9)
_and the interpolating statistical pressure \(\mathcal{A}_{\beta,K,M,r,s,d}=\lim_{N\to\infty}A_{\beta,N,K,M,r,s,d}\) induced by the partition function (A.9) reads as_
\[A_{\beta,N,K,M,r,s,d}(J,t)=\frac{1}{N}\mathbb{E}\Big{[}\ln\mathcal{Z}^{(sup) }_{\beta,N,K,M,r,s,d}(\boldsymbol{\xi},\boldsymbol{\tilde{\eta}};J,t)\Big{]}\] (A.10)
_where \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\tilde{\eta}|\xi)}\)._
**Remark 3**.: _Of course, as for the model studied in the main text, still with Guerra's interpolation technique we aim to find an explicit expression (in terms of the control and order parameters of the theory) of the interpolating statistical pressure evaluated at \(t=1\) and \(J=0\)._
We thus perform the computations following the same steps as in the previous investigation: the \(t\)-derivative of the interpolating pressure is given by
\[\frac{d\mathcal{A}_{\beta,K,M,r,s,d}(J,t)}{dt}=\,\frac{\beta}{2(1-d)}(1+\tilde {\rho})\sum_{\mu=1}^{K}\langle n_{\mu}^{2}\rangle_{t}-\sum_{\mu=1}^{K}\psi_{ \mu}\langle n_{\mu}\rangle_{t}.\] (A.11)
Fixing
\[\psi_{\mu}=\frac{\beta}{1-d}(1+\tilde{\rho})\bar{n}_{\mu}\] (A.12)
and computing the one-body term
\[\begin{split}\mathcal{A}_{\beta,K,M,r,s,d}(J,t=0)&= \mathbb{E}\ln\,\left[2\cosh\left(\sum_{\mu=1}^{K}\psi_{\mu}\frac{1}{(1+\tilde {\rho})}\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}_{i}^{\mu,a}+J\sum_{\mu= 1}^{K}\xi^{\mu}\right)\right]\\ &=\mathbb{E}\ln\,\left\{2\cosh\left[\frac{\beta}{1-d}\sum_{\mu=1 }^{K}\bar{n}_{\mu}\left(\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}_{i}^{ \mu,a}\right)+J\sum_{\mu=1}^{K}\xi^{\mu}\right]\right\}.\end{split}\] (A.13)
We get the final expression as \(N\to\infty\) such that we can state the next
**Theorem 3**.: _In the thermodynamic limit \((N\to\infty)\) and in the low load regime \((K/N\to 0)\), the quenched statistical pressure of the multitasking Hebbian network equipped with not-preserving-dilution noise, whatever the presence of a teacher, reads as_
\[\mathcal{A}_{\beta,K,M,r,s,d}(J)\,=\,\mathbb{E}\left\{\ln\left[2\cosh\left(\beta^{{}^{\prime}}\sum_{\mu=1}^{K}\bar{n}_{\mu}\tilde{\eta}^{\mu}+J\sum_{\mu=1}^{K}\xi^{\mu}\right)\,\right]\right\}-\frac{\beta^{{}^{\prime}}}{2}(1+\tilde{\rho})\sum_{\mu=1}^{K}\bar{n}_{\mu}^{2}. \tag{A.14}\]
_where \(\beta^{{}^{\prime}}=\beta/(1-d)\), \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\tilde{\eta}|\xi)}\) and \(\tilde{\eta}^{\mu}=\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}_{i}^{\mu,a}\), and the values \(\bar{n}_{\mu}\) must fulfill the following self-consistent equations_
\[\bar{n}_{\mu}=\frac{1}{(1+\tilde{\rho})}\mathbb{E}\left\{\left[\tanh\left(\beta^{{}^{\prime}}\sum_{\nu=1}^{K}\bar{n}_{\nu}\tilde{\eta}^{\nu}\right)\right]\tilde{\eta}^{\mu}\right\}\quad\text{for}\;\;\mu=1,\ldots,K\,, \tag{A.15}\]
_that extremize the statistical pressure \(\mathcal{A}_{\beta,K,M,r,s,d}(J=0)\) w.r.t. them._
Furthermore, the simplest path to obtain a self-consistency equation also for the Mattis magnetization \(m_{\mu}\) is to exploit the auxiliary field \(J\) coupled to \(m_{\mu}\), namely \(\bar{m}_{\mu}=\nabla_{J}\mathcal{A}_{\beta,K,M,r,s,d}(J)|_{J=0}\), to get
\[\bar{m}_{\mu}=\mathbb{E}\left\{\tanh\left[\beta^{{}^{\prime}}\sum_{\nu=1}^{K}\bar{n}_{\nu}\tilde{\eta}^{\nu}\right]\xi^{\mu}\right\}\quad\text{for}\;\;\mu=1,\ldots,K\,. \tag{A.16}\]
We do not plot these new self-consistency equations since, in the large \(M\) limit, there are no differences w.r.t. those obtained in the main text.
## Appendix B On the data-set entropy \(\rho\)
In this appendix, focusing on a single generic bit, we deepen the relation between the conditional entropy \(H(\xi_{i}^{\mu}|\boldsymbol{\eta}_{i}^{\mu})\) of a given pixel \(i\) of archetype \(\mu\) and the information provided by the data-set regarding such a pixel, namely the block \(\left(\eta_{i}^{\mu,1},\eta_{i}^{\mu,2},\ldots,\eta_{i}^{\mu,M}\right)\), to justify why we called \(\rho\) the data-set entropy in the main text. As the calculations are slightly different between the two analyzed models (the one preserving the dilution positions, provided in the main text, and the generalized one given in the previous appendix), we repeat them model by model for the sake of transparency.
### I: multitasking Hebbian network equipped with not-affecting-dilution noise
Let us focus on the \(\mu\)-th pattern and the \(i\)-th digit, whose related block is
\[\eta_{i}^{\mu}=\left(\eta_{i}^{\mu,1},\eta_{i}^{\mu,2},\ldots,\eta_{i}^{\mu,M}\right); \tag{B.1}\]
the error probability for any single entry is
\[\mathbb{P}(\xi_{i}^{\mu}\neq 0)\mathbb{P}(\eta_{i}^{\mu,a}\neq\xi_{i}^{\mu})=(1-d)(1-r)/2 \tag{B.2}\]
and, by applying the majority rule on the block, it is reduced to
\[\mathbb{P}(\xi_{i}^{\mu}\neq 0)\mathbb{P}\left(\text{sign}\Big{(}\sum_{a}\eta_{i}^{\mu,a}\xi_{i}^{\mu}\Big{)}=-1\right)\underset{M\gg 1}{\approx}\frac{(1-d)}{2}\left[1-\text{erf}\left(\frac{1}{\sqrt{2\rho}}\right)\right]. \tag{B.3}\]
Thus
\[H_{d,r,M}(\boldsymbol{\xi}^{\mu}|\boldsymbol{\eta}^{\mu})=-\left[x(d,r,M)\log_{2}x( d,r,M)+y(d,r,M)\log_{2}y(d,r,M)\right]\] (B.4)
where
\[x(d,r,M)=\frac{(1-d)}{2}\left[1-\mathrm{erf}\left(\frac{1}{\sqrt{2\rho}}\right) \right]\;,\;\;y(d,r,M)=1-x(d,r,M)\,.\] (B.5)
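A short numerical sketch (assuming \(\rho=(1-r^{2})/(Mr^{2})\), in analogy with (A.7)) makes the behaviour of this conditional entropy explicit: enlarging the sample at fixed \(d\) and \(r\) shrinks \(\rho\), and the residual uncertainty on the archetype pixel decreases accordingly.

```python
import math

def H(d, r, M):
    """Conditional entropy (B.4) of an archetype pixel given its block of M examples."""
    rho = (1 - r**2) / (M * r**2)
    x = (1 - d) / 2 * (1 - math.erf(1 / math.sqrt(2 * rho)))
    y = 1 - x
    return -(x * math.log2(x) + y * math.log2(y)) if 0 < x < 1 else 0.0

for M in (5, 20, 100, 1000):   # more examples -> smaller rho -> smaller entropy
    print(M, round(H(d=0.3, r=0.1, M=M), 5))
```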
### II: multitasking Hebbian network equipped with not-preserving-dilution noise
Let us focus on the \(\mu\)-th pattern and the \(i\)-th digit, whose related block is
\[\tilde{\eta}_{i}^{\mu}=\left(\tilde{\eta}_{i}^{\mu,1},\tilde{\eta}_{i}^{\mu,2 },\ldots,\tilde{\eta}_{i}^{\mu,M}\right);\] (B.6)
the error probability for any single entry is
\[\mathbb{P}(\xi_{i}^{\mu}\neq 0)\mathbb{P}(\tilde{\eta}_{i}^{\mu,a}\xi_{i}^{ \mu}\neq+1|\xi_{i}^{\mu}\neq 0)+\mathbb{P}(\xi_{i}^{\mu}=0)\mathbb{P}( \tilde{\eta}_{i}^{\mu,a}\neq 0|\xi_{i}^{\mu}=0)=d(1-s)\,.\] (B.7)
By applying the majority rule on the block, it is reduced to
\[\begin{split}&\mathbb{P}(\xi_{i}^{\mu}\neq 0)\left[1-\mathbb{P} \Big{(}\mathrm{sign}(\hat{\eta}_{i}^{\mu}\xi_{i}^{\mu})=+1\Big{|}\xi_{i}^{\mu }\neq 0\Big{)}\right]+\mathbb{P}(\xi_{i}^{\mu}=0)\mathbb{P}\Big{(}\mathrm{ sign}[|\hat{\eta}_{i}^{\mu}|]=+1\Big{|}\xi_{i}^{\mu}=0\Big{)}\\ &\underset{M\gg 1}{\approx}\frac{(1-d)}{2}\left\{1-\mathrm{erf} \left[\left(2\tilde{\rho}-\frac{d(1-s)}{(1-d)M\tilde{r}^{2}}\right)^{-1/2} \right]\right\}+\frac{d}{2}\;\left\{1-\mathrm{erf}\left[\left(\frac{1-s}{M \tilde{r}^{2}}\right)^{-1/2}\right]\right\}\,.\end{split}\] (B.8)
Figure 8: Comparison of the numerical solutions of the self-consistency equations for the Mattis magnetization in the two models: the upper panel refers to the first model (reported in the main text), the lower panel to the second model (deepened here). Beyond a different transient at small \(M\), the two models behave essentially in the same way.
Thus
\[H_{d,r,s,M}(\xi_{i}^{\mu}|\tilde{\mathbf{\eta}}_{i}^{\mu})=-\left[x(d,r,s,M)\log_{2}x(d,r,s,M)+y(d,r,s,M)\log_{2}y(d,r,s,M)\right] \tag{B.9}\]
where
\[x(d,r,s,M)=\frac{(1-d)}{2}\left\{1-\text{erf}\left[\left(2\tilde{\rho}-\frac{d(1-s)}{(1-d)M\tilde{r}^{2}}\right)^{-1/2}\right]\right\}+\frac{d}{2}\left\{1-\text{erf}\left[\left(\frac{1-s}{M\tilde{r}^{2}}\right)^{-1/2}\right]\right\}\,,\qquad y(d,r,s,M)=1-x(d,r,s,M)\,. \tag{B.10}\]
Whatever the model, the conditional entropies \(H_{d,r,M}(\xi_{i}^{\mu}|\mathbf{\eta}_{i}^{\mu})\) and \(H_{d,r,s,M}(\xi_{i}^{\mu}|\tilde{\mathbf{\eta}}_{i}^{\mu})\) are monotonically increasing functions of \(\rho\) and \(\tilde{\rho}\), respectively, hence the reason for calling \(\rho\) and \(\tilde{\rho}\) the entropy of the data-set.
## Appendix C Stability analysis: an alternative approach
### C.1 Stability analysis via signal-to-noise technique
The standard signal-to-noise technique [8] is a powerful method to investigate the stability of a given neural configuration in the noiseless limit \(\beta\to\infty\): by requiring that each neuron is aligned with its field (the post-synaptic potential it is experiencing, i.e. \(h_{i}\sigma_{i}\geq 0\ \ \forall i\in(1,...,N)\)), this analysis allows one to correctly classify which solution (stemming from the self-consistent equations for the order parameters) is preferred as the control parameters are made to vary, and it can thus play as an alternative route w.r.t. the standard study of the Hessian of the statistical pressure reported in the main text (see Sec. 3.2).
In particular, a revised version of the signal-to-noise technique has recently been developed [11; 12]: in this new formulation it is possible to obtain the self-consistency equations for the order parameters explicitly, so that outcomes from the signal-to-noise analysis can be compared directly with outcomes from statistical mechanics. By comparing these two routes, which lead to the same picture, we can better comprehend the working criteria of these neural networks.
We suppose that the network is in the hierarchical configuration prescribed by eq. (6), that we denote as \(\boldsymbol{\sigma}=\boldsymbol{\sigma}^{*}\), and we must evaluate the local field \(h_{i}(\boldsymbol{\sigma}^{*})\) acting on the generic neuron \(\sigma_{i}\) in this configuration to check that \(h_{i}(\boldsymbol{\sigma}^{*})\sigma_{i}^{*}>0\) is satisfied for all \(i=1,\ldots,N\): should this be the case, the configuration is stable, otherwise it is unstable.
Focusing on the supervised setting with no loss of generality (as we already discussed that the teacher essentially plays no role in the low storage regime) and selecting (arbitrarily) the hierarchical ordering as a test case to be studied, we start by re-writing the Hamiltonian (11) as
\[-\mathcal{H}_{N,K,M,r}(\mathbf{\sigma}|\mathbf{\eta})=\sum_{i=1}^{N}h_{i}(\mathbf{\sigma})\sigma_{i}\,, \tag{C.1}\]
where the local fields \(h_{i}\) appear explicitly and are given by
\[h_{i}(\mathbf{\sigma})=\frac{1}{2N\,r^{2}M^{2}(1-d)(1+\rho)}\sum_{\mu=1}^{K}\sum_{j\neq i}^{N}\sum_{a,b}^{M_{\mu},M_{\mu}}\eta_{i}^{\mu,a}\eta_{j}^{\mu,b}\sigma_{j}\,. \tag{C.2}\]
The updating rule for the neural dynamics reads as
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\text{sign}\left(\tanh\left[\beta\sigma_{i}^{(n)}h_{i}^{(n)}(\boldsymbol{\sigma}^{(n)})\right]+\Gamma_{i}\right)\ \ \text{with}\ \ \Gamma_{i}\sim\mathcal{U}[-1;+1]\,, \tag{C.3}\]
that, in the zero fast-noise limit \(\beta\to+\infty\), reduces to
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\text{sign}\left(\sigma_{i}^{(n)}h_{i}^{(n)}(\boldsymbol{\sigma}^{(n)})\right). \tag{C.4}\]
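This zero fast-noise dynamics is also what we run in the Monte Carlo checks; a minimal, self-contained sketch (ours, with illustrative sizes; the overall normalization of the couplings is dropped, as it is irrelevant for sign-based updates):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M, d, r = 2000, 3, 400, 0.25, 0.1

xi = rng.choice([-1, 0, 1], p=[(1 - d) / 2, d, (1 - d) / 2], size=(K, N))
# examples flip each informative entry independently: +xi w.p. (1+r)/2, -xi otherwise
chi = rng.choice([1, -1], p=[(1 + r) / 2, (1 - r) / 2], size=(K, M, N))
eta_bar = (chi * xi[:, None, :]).mean(axis=1)        # empirical averages of the examples

J = eta_bar.T @ eta_bar / N                          # Hebbian couplings (unnormalized)
np.fill_diagonal(J, 0.0)

sigma = np.where(xi[0] != 0, xi[0], 1)               # start close to the first archetype
for _ in range(20):                                  # zero fast-noise sequential sweeps
    for i in rng.permutation(N):
        h = J[i] @ sigma
        if h != 0:
            sigma[i] = 1 if h > 0 else -1            # keep the spin if the field vanishes
print("Mattis magnetizations:", xi @ sigma / N)      # hierarchical displacement expected
```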
To inspect the stability of the hierarchical parallel configuration, we initialize the network in such a configuration, i.e., \(\boldsymbol{\sigma}^{(1)}=\boldsymbol{\sigma}^{*}\); then, following Hinton's prescription [21, 48]8, the one-step iteration \(\boldsymbol{\sigma}^{(2)}\) leads to an expression of the magnetization that reads as
Footnote 8: The _early stopping prescription_ given by Hinton and coworkers became very popular soon after its introduction, yet it has been criticized in some circumstances (in particular where glassy features are expected to be strong and may keep the network out of equilibrium for very long times, see e.g. [7, 18, 39]): we stress that, in the present section, we are assuming the network has already reached equilibrium; further, confined to the low-storage inspection, spin-glass bottlenecks in thermalization should not be prohibitive.
\[m_{\mu}^{(2)}=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}\sigma_{i}^{(2)}=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}\left[\sigma_{i}^{*}\text{sign}\left(\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right)\right]; \tag{C.5}\]
Next, using the explicit expression of the hierarchical parallel configuration (6), we get
\[\begin{split}m_{1}^{(2)}&=\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i}^{1}\right)^{2}\text{sign}\left(\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right);\\ m_{\mu>1}^{(2)}&=\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i}^{\mu}\right)^{2}\prod_{\rho=1}^{\mu-1}\delta\left(\xi_{i}^{\rho}\right)\text{sign}\left(\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right);\end{split} \tag{C.6}\]
by applying the central limit theorem to estimate the sums appearing in the definition of \(h_{i}^{(1)}\) for \(i=1,\ldots,N\), we are able to split, mimicking the standard signal-to-noise technique, a signal contribution (\(\kappa_{1,\mu}^{(1)}\)) and a noise contribution (\(\kappa_{2,\mu}^{(1)}\)) as presented in the following
\[\sigma_{i}^{*}h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\sim\kappa_{1,\mu}^{(1)}+z_{i}\sqrt{\kappa_{2,\mu}^{(1)}}\ \ \ \ \text{with}\ \ z_{i}\sim\mathcal{N}(0,1) \tag{C.7}\]
where
\[\kappa_{1,\mu}^{(1)}\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right]\,,\qquad\kappa_{2,\mu}^{(1)}\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right]^{2} \tag{C.8}\]
Thus, Eq. (C.6) becomes
\[m_{\mu}^{(2)}=\left[\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i}^{\mu}\right)^{2}\prod_{\rho=1}^{\mu-1}\delta\left(\xi_{i}^{\rho}\right)\text{sign}\left(\kappa_{1,\mu}^{(1)}+z_{i}\sqrt{\kappa_{2,\mu}^{(1)}}\right)\right]\ \ \ \text{with}\ \ \mu=1,2,\ldots,K\,. \tag{C.9}\]
For large values of \(N\), the arithmetic mean coincides with the theoretical expectation, thus
\[\frac{1}{N}\sum_{i=1}^{N}g(\xi_{i},z_{i})\ \xrightarrow[N\to\infty]{}\ \mathbb{E}_{\xi,z}[g(\xi,z)]=\mathbb{E}_{\xi}\left[\int\frac{dz}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}g(\xi,z)\right]\,. \tag{C.10}\]
therefore, we can rewrite Eq. (C.9) as
\[m_{\mu}^{(2)}\,=\,(1-d)d^{\mu-1}\mathrm{erf}\left[\frac{\kappa_{1,\mu}^{(1)}}{\sqrt{2\Big{(}\kappa_{2,\mu}^{(1)}-\Big{(}\kappa_{1,\mu}^{(1)}\Big{)}^{2}\,\Big{)}}}\right]\quad\text{ with }\;\mu=1,2,\ldots,K\,. \tag{C.11}\]
While we carry out the computations of \(\kappa_{1,\mu}^{(1)}\) and \(\kappa_{2,\mu}^{(1)}\) in Appendix C.2, here we report only their values, which are
\[\kappa_{1,\mu}^{(1)}=\frac{1}{2}\frac{1}{(1+\rho)}d^{\mu-1}\,,\quad\kappa_{2,\mu}^{(1)}=\frac{1}{4}\frac{1}{1+\rho}d^{\mu-1}\left[\frac{1+d+d^{2}-d^{2K-2\mu+2}}{1+d}\right]. \tag{C.12}\]
So we get
\[m_{\mu}^{(2)}(d,K,\rho)\,=\,(1-d)d^{\mu-1}\mathrm{erf}\left[\frac{1}{\sqrt{2}}\frac{\sqrt{(1+d)d^{\mu-1}}}{\sqrt{(1+\rho)(1+d+d^{2}-d^{2K-2\mu+2})-d^{\mu}-d^{\mu-1}}}\right] \tag{C.13}\]
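Being in closed form, (C.13) can be evaluated at negligible cost; a direct transcription with illustrative parameters:

```python
import math

def m2(mu, d, K, rho):
    """One-step magnetization (C.13) from the signal-to-noise analysis."""
    num = math.sqrt((1 + d) * d ** (mu - 1))
    den = math.sqrt((1 + rho) * (1 + d + d**2 - d ** (2 * K - 2 * mu + 2))
                    - d**mu - d ** (mu - 1))
    return (1 - d) * d ** (mu - 1) * math.erf(num / (math.sqrt(2) * den))

print([round(m2(mu, d=0.25, K=3, rho=0.02), 4) for mu in (1, 2, 3)])
```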
Figure 9: Signal-to-noise numerical inspection of the Mattis magnetizations for a diluted network with \(r=0.1\) and \(K=3\) in the hierarchical regime (at levels of pattern dilution \(d<d_{c}\), as reported in the titles): we highlight the agreement, as the saturation level is reached, between the signal-to-noise analysis (orange dots) and the value of the magnetization of the first pattern found by the statistical mechanical approach (solid red line). The dashed lines represent the Hebbian storing prescriptions towards which the values of the magnetizations converge. The vertical black line depicts the critical amount of examples \(M_{\otimes}\) that must be experienced by the network to properly depict the archetypes: note that this value lies systematically above, in \(M\), the point where all the bifurcations have happened, hence where all the magnetizations have stabilized on their hierarchical displacements.
As shown in Fig. 9, once a critical amount of perceived examples has been collected, this expression is in very good agreement with the estimate stemming from the numerical solution of the self-consistent equations, and indeed we can finally state the last
**Theorem 4**.: _In the zero fast-noise limit (\(\beta\to+\infty\)), if the neural configuration_
\[\tilde{\mathbf{\sigma}}=\tilde{\mathbf{\sigma}}(\mathbf{\xi}) \tag{C.14}\]
_is a fixed point of the dynamics described by the sequential spin-updating rule_
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\mathrm{sign}\left[\beta\sigma_{i}^{(n)}h_{i}^{(n)}(\mathbf{\sigma}^{(n)})\right] \tag{C.15}\]
_where_
\[h_{i}^{(n)}(\mathbf{\sigma})=\frac{1}{N\,M^{2}(1+\rho)r^{2}}\sum\limits_{\mu=1}^{K}\sum\limits_{j\neq i}^{N}\sum\limits_{a,b}^{M_{\mu},M_{\mu}}\eta_{i}^{\mu,a}\eta_{j}^{\mu,b}\sigma_{j}^{(n)}\,, \tag{C.16}\]
_then the order parameters \(n_{\mu}(\mathbf{\sigma})=[NM(1+\rho)r]^{-1}\sum\limits_{i}^{N}\sum\limits_{a}^{M}\eta_{i}^{\mu,a}\sigma_{i}\) must satisfy the following self-consistency equations_
\[n_{\mu}\,=\,\frac{1}{(1+\rho)}\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\Bigg{\{}\hat{\eta}^{\mu}\tilde{\sigma}(\mathbf{\xi})\,\mathrm{sign}\left[\sum\limits_{\nu=1}^{K}n_{\nu}\hat{\eta}^{\nu}\tilde{\mathbf{\sigma}}(\mathbf{\xi})\right]\Bigg{\}}\,, \tag{C.17}\]
_where we set \(\hat{\eta}^{\mu}=(Mr)^{-1}\sum\limits_{a}^{M}\eta^{\mu,a}\)._
**Remark 4**.: _The empirical evidence that, via the early-stopping criterion, we still obtain the correct solution proves a posteriori the validity of Hinton's recipe in the present setting, and it tacitly puts forward statistical mechanics as a reference also to inspect computational shortcuts._
Proof.: The local fields \(h_{i}\) can be rewritten using the definition of \(n_{\mu}\) as
\[h_{i}^{(n)}(\mathbf{\sigma})=\sum\limits_{\mu=1}^{K}n_{\mu}^{(n)}(\mathbf{\sigma})\hat{\eta}_{i}^{\mu}\,, \tag{C.18}\]
in this way the updating rule can be recast as
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\mathrm{sign}\left[\sum\limits_{\mu=1}^{K}n_{\mu}^{(n)}(\mathbf{\sigma})\hat{\eta}_{i}^{\mu}\sigma_{i}^{(n)}\right]. \tag{C.19}\]
Computing the value of the \(n_{\mu}\) order parameters at the \((n+1)\)-th step of the updating process we get
\[n_{\mu}^{(n+1)}(\mathbf{\sigma})=\frac{1}{N(1+\rho)}\sum\limits_{i}^{N}\hat{\eta}_{i}^{\mu}\sigma_{i}^{(n+1)}=\frac{1}{N(1+\rho)}\sum\limits_{i}^{N}\hat{\eta}_{i}^{\mu}\sigma_{i}^{(n)}\mathrm{sign}\left[\,\sum\limits_{\nu=1}^{K}n_{\nu}^{(n)}(\mathbf{\sigma})\hat{\eta}_{i}^{\nu}\sigma_{i}^{(n)}\right]\,. \tag{C.20}\]
If \(\tilde{\mathbf{\sigma}}(\mathbf{\xi})\) is a fixed point of our dynamics, we must have \(\tilde{\mathbf{\sigma}}^{(n+1)}\equiv\tilde{\mathbf{\sigma}}^{(n)}\) and \(n^{(n+1)}(\mathbf{\sigma})\equiv n^{(n)}(\mathbf{\sigma})\), thus (C.20) becomes
\[n_{\mu}(\mathbf{\sigma})\,=\,\frac{1}{N(1+\rho)}\sum_{i}^{N}\hat{\eta}_{i}^{\mu}\tilde{\sigma}_{i}(\mathbf{\xi})\operatorname{sign}\left[\sum_{\nu=1}^{K}n_{\nu}(\mathbf{\sigma})\hat{\eta}_{i}^{\nu}\tilde{\sigma}_{i}(\mathbf{\xi})\right]. \tag{C.21}\]
For large values of \(N\), the arithmetic mean coincides with the theoretical expectation, thus
\[\frac{1}{N}\sum_{i=1}^{N}g(\eta_{i})\xrightarrow[N\to\infty]{}\mathbb{E}_{\eta}\Big{[}g(\eta)\Big{]} \tag{C.22}\]
therefore, (C.21) reads as
\[n_{\mu}\,=\,\frac{1}{(1+\rho)}\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\left\{\hat{\eta}^{\mu}\tilde{\sigma}(\mathbf{\xi})\operatorname{sign}\left[\,\sum_{\nu=1}^{K}n_{\nu}\hat{\eta}^{\nu}\tilde{\sigma}(\mathbf{\xi})\right]\right\}\,. \tag{C.23}\]
where we used \(\mathbb{E}_{\mathbf{\eta}}=\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\).
**Corollary 2**.: _Under the hypothesis of the previous theorem, if the neural configuration coincides with the parallel configuration_
\[\tilde{\mathbf{\sigma}}(\mathbf{\xi})=\mathbf{\sigma}^{*}=\mathbf{\xi}^{1}+\sum_{\nu=2}^{K}\mathbf{\xi}^{\nu}\prod_{\rho=1}^{\nu-1}\delta\left(\mathbf{\xi}^{\rho}\right) \tag{C.24}\]
_then the order parameters \(n_{\mu}(\mathbf{\sigma})\) must satisfy the following self equation_
\[n_{\mu}\,=\,\frac{1}{(1+\rho)}\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\Bigg{\{}\hat{\eta}^{\mu}\left(\xi^{1}+\sum_{\lambda=2}^{K}\xi^{\lambda}\prod_{\rho=1}^{\lambda-1}\delta\left(\xi^{\rho}\right)\right)\,\operatorname{sign}\left[\sum_{\nu=1}^{K}n_{\nu}\hat{\eta}^{\nu}\left(\xi^{1}+\sum_{\lambda=2}^{K}\xi^{\lambda}\prod_{\rho=1}^{\lambda-1}\delta\left(\xi^{\rho}\right)\right)\right]\Bigg{\}}\,. \tag{C.25}\]
Proof.: We only have to replace in (C.17) the explicit form of \(\mathbf{\sigma}^{*}\), and the proof follows.
### C.2 Evaluation of momenta of the effective post-synaptic potential
In this section we describe the computation of the first and second momenta \(\kappa_{1,\mu}^{(1)}\) and \(\kappa_{2,\mu}^{(1)}\) appearing in Sec. C.1; we present only the case \(\mu=1\).
Let us start from \(\kappa_{1,\mu}^{(1)}\):
\[\kappa_{1,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sigma_{i}^{*}h_{i}^{(1)}(\mathbf{\sigma}^{*})\Big{|}_{\xi_{i}^{1}=\pm 1}\right]=\frac{1}{2N}\sum_{j\neq i}^{N}\frac{1}{r_{1}M_{1}^{2}(1+\rho)}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sum_{a,b}^{M_{1},M_{1}}\eta_{j}^{1,b}\left(\xi_{j}^{1}+\sum_{\nu=2}^{K}\xi_{j}^{\nu}\prod_{\rho=1}^{\nu-1}\delta_{\xi_{j}^{\rho},0}\right)\right];\]
since \(\mathbb{E}_{\xi}[\xi_{i}^{\mu}]=0\) the only non-zero terms are the ones with \(\mu=1\):
\[\begin{split}\kappa_{1,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}&\coloneqq\,\frac{1}{2N}\sum_{j\neq i}^{N}\frac{1}{r_{1}M_{1}^{2}(1+\rho)}\mathbb{E}_{\xi}\left[\sum_{a,b}^{M_{1},M_{1}}r_{1}\xi_{j}^{1}\left(\xi_{j}^{1}+\sum_{\nu=2}^{K}\xi_{j}^{\nu}\prod_{\rho=1}^{\nu-1}\delta_{\xi_{j}^{\rho},0}\right)\right]\\ &=\,\frac{1}{2NM_{1}^{2}r_{1}(1+\rho)}\sum_{j\neq i}^{N}\sum_{a,b}^{M_{1},M_{1}}r_{1}=\frac{1}{2(1+\rho)}\end{split} \tag{C.26}\]
where we used \(\mathbb{E}_{(\eta|\xi)}[\eta_{i}^{\mu,a}]=r\xi_{i}^{\mu}\). Moving on, we start the computation of \(\kappa_{2,\mu}^{(1)}\): due to \(\mathbb{E}_{\xi}[\xi_{i}^{\mu}\xi_{i}^{\nu}]=\delta^{\mu\nu}\), the only non-zero terms are:
\[\begin{split}\kappa_{2,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}&\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\left(\sigma_{i}^{*}h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right)^{2}\Big{|}_{\xi_{i}^{1}=\pm 1}\right]\\ &=\frac{1}{4N^{2}(1-d)^{2}}\sum_{\mu=1}^{K}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{1}{M_{\mu}^{4}r_{\mu}^{4}(1+\rho)^{2}}\sum_{k,j\neq i}^{N,N}\left(\sum_{a_{1},a_{2},b_{1},b_{2}}^{M_{\mu}}\eta_{i}^{\mu,a_{1}}\eta_{i}^{\mu,a_{2}}\eta_{j}^{\mu,b_{1}}\eta_{k}^{\mu,b_{2}}\right)\\ &\qquad\times\left(\xi_{j}^{1}+\sum_{\nu_{1}=2}^{K}\xi_{j}^{\nu_{1}}\prod_{\rho_{1}=1}^{\nu_{1}-1}\delta\left(\xi_{j}^{\rho_{1}}\right)\right)\left(\xi_{k}^{1}+\sum_{\nu_{2}=2}^{K}\xi_{k}^{\nu_{2}}\prod_{\rho_{2}=1}^{\nu_{2}-1}\delta\left(\xi_{k}^{\rho_{2}}\right)\right)=A_{\mu=1}+B_{\mu>1}\end{split}\]
namely, we analyze separately the cases \(\mu=1\) (\(A_{\mu=1}\)) and \(\mu>1\) (\(B_{\mu>1}\)).
\[\begin{split}A_{\mu=1}&=\frac{1}{4N^{2}(1-d)^{2}}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{1}{M_{1}^{4}r_{1}^{4}(1+\rho)^{2}}\sum_{k,j\neq i}^{N,N}\left(\sum_{a_{1},a_{2},b_{1},b_{2}}^{M_{1}}\eta_{i}^{1,a_{1}}\eta_{i}^{1,a_{2}}\eta_{j}^{1,b_{1}}\eta_{k}^{1,b_{2}}\right)\left(\xi_{j}^{1}\xi_{k}^{1}\right)\\ &=\frac{1}{4N^{2}(1-d)^{2}}\frac{1}{M_{1}^{4}r_{1}^{4}(1+\rho)^{2}}\mathbb{E}_{(\eta|\xi=\pm 1)}\left[\sum_{a_{1},a_{2}}^{M_{1}}\eta_{i}^{1,a_{1}}\eta_{i}^{1,a_{2}}\right]\sum_{k,j\neq i}^{N,N}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sum_{b_{1},b_{2}}^{M_{1}}\eta_{j}^{1,b_{1}}\eta_{k}^{1,b_{2}}\right]\left(\xi_{j}^{1}\xi_{k}^{1}\right)\\ &=\frac{1}{4}\frac{1}{(1+\rho)}\,;\end{split} \tag{C.27}\]
\[\begin{split}B_{\mu>1}&\coloneqq\frac{1}{4N^{2}(1-d)^{2}}\sum_{\mu>1}^{K}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{1}{M_{\mu}^{4}r_{\mu}^{4}(1+\rho)^{2}}\sum_{k,j\neq i}^{N,N}\left(\sum_{a_{1},a_{2},b_{1},b_{2}}^{M_{\mu}}\eta_{i}^{\mu,a_{1}}\eta_{i}^{\mu,a_{2}}\eta_{j}^{\mu,b_{1}}\eta_{k}^{\mu,b_{2}}\right)\left(\prod_{\rho_{1}=1}^{\mu-1}\delta\left(\xi_{j}^{\rho_{1}}\right)\right)\left(\prod_{\rho_{2}=1}^{\mu-1}\delta\left(\xi_{k}^{\rho_{2}}\right)\right)\\ &=\frac{(1-d)^{-2}}{4N^{2}}\sum_{\mu=2}^{K}\frac{1}{M_{\mu}^{2}(1+\rho)}\sum_{k,j\neq i}^{N,N}\sum_{b_{1},b_{2}}^{M_{\mu}}\mathbb{E}_{\xi}\left[\xi_{j}^{\mu}\xi_{k}^{\mu}\left(\prod_{\rho_{1}=1}^{\mu-1}\delta\left(\xi_{j}^{\rho_{1}}\right)\right)\left(\prod_{\rho_{2}=1}^{\mu-1}\delta\left(\xi_{k}^{\rho_{2}}\right)\right)\right]\\ &=\frac{1-d}{4(1+\rho)}\sum_{\mu=2}^{K}d^{2(\mu-1)}=\frac{1-d}{4(1+\rho)}\frac{d^{2}-d^{2K}}{1-d^{2}}=\frac{1}{4(1+\rho)}\frac{d^{2}-d^{2K}}{1+d}\,.\end{split} \tag{C.28}\]
Putting together Eq. (101) and Eq. (100) we reach the expression of \(\kappa_{2,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}\).
## Appendix D Explicit Calculations and Figures for the cases \(K=2\) and \(K=3\)
In this appendix, we collect the explicit expressions of the self-consistency equations (3.10) and (3.11) (focusing only on the cases \(K=2\) and \(K=3\)) and some figures obtained from their numerical solution.
### \(K=2\)
Fixing \(K=2\) and explicitly performing the average with respect to \(\xi\), (3.10) and (3.11) read as
\[\begin{array}{rcl}\bar{n}_{1}&=&\frac{\bar{m}_{1}}{(1+\rho)}+\frac{\beta^{{}^{\prime}}(1-d)\rho\,\bar{n}_{1}}{(1+\rho)}\bigg{[}1-d\,\mathcal{S}_{2}(\bar{n}_{1},0)-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},-\bar{n}_{2})-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},\bar{n}_{2})\bigg{]}\\ \\ \bar{n}_{2}&=&\frac{\bar{m}_{2}}{(1+\rho)}+\frac{\beta^{{}^{\prime}}(1-d)\rho\,\bar{n}_{2}}{(1+\rho)}\bigg{[}1-d\,\mathcal{S}_{2}(0,\bar{n}_{2})-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},-\bar{n}_{2})-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},\bar{n}_{2})\bigg{]}\\ \\ \bar{m}_{1}&=&\frac{(1-d)^{2}}{2}\bigg{[}\mathcal{T}_{2}(\bar{n}_{1},\bar{n}_{2})+\mathcal{T}_{2}(\bar{n}_{1},-\bar{n}_{2})\bigg{]}+d(1-d)\mathcal{T}_{2}(\bar{n}_{1},0)\\ \\ \bar{m}_{2}&=&\frac{(1-d)^{2}}{2}\bigg{[}\mathcal{T}_{2}(\bar{n}_{1},\bar{n}_{2})-\mathcal{T}_{2}(\bar{n}_{1},-\bar{n}_{2})\bigg{]}+d(1-d)\mathcal{T}_{2}(0,\bar{n}_{2})\end{array}\] (D.1)
where we used
\[\begin{array}{rcl}\mathcal{T}_{2}(x,y)&=&\mathbb{E}_{\lambda}\tanh\left[ \beta^{{}^{\prime}}\left(x+y+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}\Big{)}} \right)\right],\\ \\ \mathcal{S}_{2}(x,y)&=&\mathbb{E}_{\lambda}\tanh^{2}\left[\beta^{{}^{\prime}} \left(x+y+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}\Big{)}}\right)\right].\end{array}\] (D.2)
Solving this set of equations numerically, we construct the plots presented in Fig. 10.
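For concreteness, the sketch below solves (D.1) by damped fixed-point iteration, evaluating the Gaussian averages (D.2) with Gauss-Hermite quadrature. The function and parameter names (`beta_p` for \(\beta^{\prime}\), the damping factor, the initial guess) are our own choices, and the scheme is merely one convenient way to solve the system numerically.

```python
import numpy as np

# Gauss-Hermite quadrature for E_lambda[f(lambda)] with lambda ~ N(0, 1):
# E[f] ~ (1/sqrt(pi)) * sum_k w_k f(sqrt(2) x_k)
nodes, weights = np.polynomial.hermite.hermgauss(61)

def gauss_avg(f):
    return np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

def T2(x, y, beta_p, rho):
    s = np.sqrt(rho * (x**2 + y**2))
    return gauss_avg(lambda lam: np.tanh(beta_p * (x + y + lam * s)))

def S2(x, y, beta_p, rho):
    s = np.sqrt(rho * (x**2 + y**2))
    return gauss_avg(lambda lam: np.tanh(beta_p * (x + y + lam * s))**2)

def solve_K2(beta_p, d, rho, n0=(0.9, 0.9), iters=5000, damp=0.5, tol=1e-12):
    """Damped fixed-point iteration of the K = 2 system (D.1)."""
    n1, n2 = n0
    for _ in range(iters):
        m1 = 0.5*(1-d)**2*(T2(n1, n2, beta_p, rho) + T2(n1, -n2, beta_p, rho)) \
            + d*(1-d)*T2(n1, 0.0, beta_p, rho)
        m2 = 0.5*(1-d)**2*(T2(n1, n2, beta_p, rho) - T2(n1, -n2, beta_p, rho)) \
            + d*(1-d)*T2(0.0, n2, beta_p, rho)
        cross = 0.5*(1-d)*(S2(n1, -n2, beta_p, rho) + S2(n1, n2, beta_p, rho))
        new1 = m1/(1+rho) + beta_p*(1-d)*rho*n1/(1+rho)*(1 - d*S2(n1, 0.0, beta_p, rho) - cross)
        new2 = m2/(1+rho) + beta_p*(1-d)*rho*n2/(1+rho)*(1 - d*S2(0.0, n2, beta_p, rho) - cross)
        if max(abs(new1 - n1), abs(new2 - n2)) < tol:
            break
        n1, n2 = damp*new1 + (1-damp)*n1, damp*new2 + (1-damp)*n2
    return n1, n2, m1, m2

print(solve_K2(beta_p=6.66, d=0.2, rho=0.2))
```

Sweeping \(d\) at fixed \(r\) and \(\beta\) and plotting the resulting \(\bar{m}\) should reproduce the qualitative structure of Fig. 10.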
### \(K=3\)
Moving on to the case \(K=3\), following the same steps as in the previous subsection, we get
\[\bar{n}_{1} = \frac{\bar{m}_{1}}{(1+\rho)}+\frac{\beta^{{}^{\prime}}(1-d)\rho\,\bar{n}_{1}}{(1+\rho)}\bigg{\{}1-d\,\frac{(1-d)}{2}\left[\mathcal{S}_{3}(\bar{n}_{1},\bar{n}_{2},0)+\mathcal{S}_{3}(\bar{n}_{1},0,\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},-\bar{n}_{2},0)+\mathcal{S}_{3}(\bar{n}_{1},0,-\bar{n}_{3})\right]\] \[-d^{2}\mathcal{S}_{3}(\bar{n}_{1},0,0)-\frac{(1-d)^{2}}{4}\left[\mathcal{S}_{3}(\bar{n}_{1},\bar{n}_{2},\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},\bar{n}_{2},-\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},-\bar{n}_{2},\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},-\bar{n}_{2},-\bar{n}_{3})\right]\bigg{\}}\,,\] \[\bar{m}_{1} = \frac{(1-d)^{3}}{4}\bigg{[}\mathcal{T}_{3}(\bar{n}_{1},\bar{n}_{2},\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},\bar{n}_{2},-\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},-\bar{n}_{2},\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},-\bar{n}_{2},-\bar{n}_{3})\bigg{]}\] \[+d\frac{(1-d)^{2}}{2}\bigg{[}\mathcal{T}_{3}(\bar{n}_{1},\bar{n}_{2},0)+\mathcal{T}_{3}(\bar{n}_{1},0,\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},-\bar{n}_{2},0)+\mathcal{T}_{3}(\bar{n}_{1},0,-\bar{n}_{3})\bigg{]}+d^{2}(1-d)\mathcal{T}_{3}(\bar{n}_{1},0,0)\,,\] (D.3)
where we used
\[\begin{array}{rcl}\mathcal{T}_{3}(x,y,z)&=&\mathbb{E}_{\lambda}\tanh\left[ \beta^{{}^{\prime}}\left(x+y+z+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}+z^{2}\Big{)} }\right)\right]\,,\\ \\ \mathcal{S}_{3}(x,y,z)&=&\mathbb{E}_{\lambda}\tanh^{2}\left[\beta^{{}^{\prime}} \left(x+y+z+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}+z^{2}\Big{)}}\right)\right]\,. \end{array}\] (D.4)
In order to lighten the presentation, we report only the expressions of \(\bar{m}_{1}\) and \(\bar{n}_{1}\); the related expressions for \(\bar{m}_{2}(\bar{m}_{3})\) and \(\bar{n}_{2}(\bar{n}_{3})\) can be obtained by making the simple substitutions \(\bar{m}_{1}\longleftrightarrow\bar{m}_{2}(\bar{m}_{3})\) and \(\bar{n}_{1}\longleftrightarrow\bar{n}_{2}(\bar{n}_{3})\) in (D.3). The numerical solution of the previous set of equations is depicted in Fig. 11.
## Appendix E Proofs
### Proof of Theorem 1
In this subsection we prove Theorem 1. In order to do so, we first state the following
**Lemma 1**.: _The \(t\)-derivative of the interpolating pressure is given by_
\[\frac{d\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}}{dt} = \frac{\beta}{2(1-d)}(1+\rho)\sum_{\mu=1}^{K}\mathbb{E}\,\omega_{ t}[n_{\mu}^{2}]-\sum_{\mu=1}^{K}\psi_{\mu}\mathbb{E}\,\omega_{t}[n_{\mu}]. \tag{100}\]
Since the computation is lengthy but straightforward, we omit it.
**Proposition 3**.: _In the low-load regime, in the thermodynamic limit, the distribution of the generic order parameter \(X\) is centered at its expectation value \(\bar{X}\) with vanishing fluctuations. Thus, being
Figure 10: Numerical solution of the system of equations (D.1) for \(K=2\): we plot the behaviour of the magnetization \(\bar{m}\) versus the degree of dilution \(d\) for fixed \(r=0.2\) and different values of \(\beta\) (from right to left \(\beta=1000,6.66,3.33\)) and \(\rho\) (from top to bottom \(\rho=0.8,0.2,0.0\)). We stress that for \(\rho=0.0\) we recover the standard diluted model presented in Fig. 1.
\(\Delta X=X-\bar{X}\), in the thermodynamic limit, the following relation holds_
\[\mathbb{E}\,\omega_{t}[(\Delta X)^{2}]\xrightarrow[N\to+\infty]{}0\,. \tag{100}\]
**Remark 5**.: _We stress that afterwards we use the relations_
\[\mathbb{E}\,\omega_{t}[(n_{\mu}-\bar{n}_{\mu})^{2}]=\mathbb{E}\,\omega_{t}[n_{ \mu}^{2}]-2\,\bar{n}_{\mu}\mathbb{E}\,\omega_{t}[n_{\mu}]+\bar{n}_{\mu}^{2}\,. \tag{101}\]
_which are computed by brute force using Newton's binomial theorem._
_Now, using these relations, if we fix the constants as_
\[\psi_{\mu}=\frac{\beta}{1-d}(1+\rho)\bar{n}_{\mu} \tag{102}\]
_in the thermodynamic limit, due to Proposition 3, the expression of the derivative w.r.t. \(t\) becomes_
\[\frac{d\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}}{dt} = -\frac{\beta}{2(1-d)}(1+\rho)\sum_{\mu=1}^{K}\bar{n}_{\mu}^{2}. \tag{103}\]
Proof.: Let us start from the finite-size \(N\) expression. We apply the Fundamental Theorem of Calculus:
\[\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}=\mathcal{A}^{(sup,unsup)}_{N,K, \beta,d,M,r}(t=1)=\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}(t=0)+\int\limits _{0}^{1}\left.\partial_{s}\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}(s) \right|_{s=t}dt. \tag{104}\]
Figure 11: Numerical solution of the system of equations (D.3) for \(K=3\): we plot the behavior of the magnetization \(\bar{m}\) versus the degree of dilution \(d\) for fixed \(r=0.2\) and different values of \(\beta\) (from left to right \(\beta=1000,6.66,3.33\)) and \(\rho\) (from top to bottom \(\rho=0.8,0.2,0.0\)).
We have already computed the derivative w.r.t. \(t\) in Eq. (102). It only remains to calculate the one-body term:
\[\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(t=0)=\sum_{\{\sigma\}}\exp\Bigg{[} \sum_{i=1}^{N}\left(\sum_{\mu=1}^{K}\frac{\psi_{\mu}}{2(1+\rho)}\hat{\eta}^{\mu }+J\xi^{\mu}\right)\sigma_{i}\Bigg{]}. \tag{103}\]
Using the definition of quenched statistical pressure (100) we have
\[\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J,t=0)=\ln\,\left[2\cosh \left(\sum_{\mu=1}^{K}\frac{\psi_{\mu}}{2(1+\rho)}\hat{\eta}^{\mu}+J\xi^{\mu} \right)\right] \tag{104}\] \[=\mathbb{E}\left\{\ln 2\cosh\left[\frac{\beta}{1-d}\sum_{\mu=1}^{K }\bar{n}_{\mu}\hat{\eta}^{\mu}+J\xi^{\mu}\right]\right\}\]
where \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\). Finally, plugging (104) and (102) into (100), we reach the thesis.
### Proof of Proposition 1
In this subsection we prove Proposition 1.
Proof.: For large data-sets, using the Central Limit Theorem we have
\[\hat{\eta}^{\mu}\sim\xi^{\mu}\left(1+\sqrt{\rho}\;Z_{\mu}\right)\,. \tag{105}\]
where \(Z_{\mu}\) is a standard Gaussian variable \(Z_{\mu}\sim\mathcal{N}(0,1)\). Replacing Eq. (105) in the self-consistency equation for \(\bar{n}\), namely Eq. (101), and applying Stein's lemma9
Footnote 9: This lemma, also known as Wick’s theorem, applies to standard Gaussian variables, say \(J\sim\mathcal{N}(0,1)\), and states that, for a generic function \(f(J)\) for which the two expectations \(\mathbb{E}\left(Jf(J)\right)\) and \(\mathbb{E}\left(\partial_{J}f(J)\right)\) both exist, then
\[\mathbb{E}\left(Jf(J)\right)=\mathbb{E}\left(\frac{\partial f(J)}{\partial J} \right)\,. \tag{106}\]
in order to recover the expression for \(\bar{m}_{\mu}\), we get the large data-set equation for \(\bar{n}_{\mu}\), i.e. Eq. (104).
We will use the relation
\[\mathbb{E}_{\lambda_{\mu}}\left[F\left(a+\sum_{\mu=1}^{K}b_{\mu}\lambda_{\mu}\right)\right]=\mathbb{E}_{Z}\,\left[F\left(a+Z\sqrt{\sum_{\mu=1}^{K}b_{\mu}^{2}}\right)\right]\,, \tag{107}\]
where \(\lambda_{\mu}\) and \(Z\) are i.i.d. Gaussian variables. Doing so, we obtain
\[g(\beta,\boldsymbol{\xi},Z,\bar{n})=\beta^{{}^{\prime}}\sum_{\nu=1}^{K}\bar{n}_{\nu}\xi^{\nu}+\beta^{{}^{\prime}}\sqrt{\rho}\sum_{\nu=1}^{K}Z_{\nu}\bar{n}_{\nu}\xi^{\nu}=\beta^{{}^{\prime}}\left(\sum_{\nu=1}^{K}\bar{n}_{\nu}\xi^{\nu}+Z\sqrt{\rho\sum_{\nu=1}^{K}\bar{n}_{\nu}^{2}\left(\xi^{\nu}\right)^{2}}\right)\,, \tag{108}\]
thus we reach the thesis.
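Both ingredients of this proof are easy to sanity-check numerically. The following minimal Monte Carlo sketch (the sample size and the test function \(\tanh\) are arbitrary choices of ours) verifies Stein's lemma (106) and the Gaussian-collapse identity (107):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal(2_000_000)

# Stein's lemma (106) with f = tanh: E[J f(J)] = E[f'(J)].
lhs = np.mean(J * np.tanh(J))
rhs = np.mean(1.0 - np.tanh(J)**2)     # d/dJ tanh(J) = 1 - tanh^2(J)
print(lhs, rhs)                        # agree up to Monte Carlo error

# Identity (107): sum_mu b_mu lambda_mu collapses to Z sqrt(sum_mu b_mu^2).
a, b = 0.3, np.array([0.5, -1.2, 0.7])
lam = rng.standard_normal((2_000_000, 3))
Z = rng.standard_normal(2_000_000)
print(np.mean(np.tanh(a + lam @ b)),
      np.mean(np.tanh(a + Z * np.sqrt(np.sum(b**2)))))
```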
**Corollary 3**.: _The self consistency equations in the large data-set assumption and null-temperature limit are_
\[\bar{m}_{\mu}\,=\,\mathbb{E}_{\xi}\left\{\operatorname{erf}\left[\left(\sum_{ \nu=1}^{K}\bar{m}_{\nu}\xi^{\nu}\right)\left(2\rho\sum_{\nu=1}^{K}\bar{m}_{\nu }^{2}\left(\xi^{\nu}\right)^{2}\right)^{-1/2}\right]\xi^{\mu}\right\}. \tag{109}\]
Proof.: In order to lighten the notation we rename
\[C=\tanh^{2}\left[g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})\right]\,. \tag{101}\]
We start by assuming that the following limit is finite:
\[\lim_{\beta^{{}^{\prime}}\to\infty}\beta^{{}^{\prime}}(1-C)=D\in\mathbb{R} \tag{102}\]
and we stress that as \(\beta^{{}^{\prime}}\to\infty\) we have \(C\to 1\). As a consequence, the following reparametrization is found to be useful,
\[C=1-\frac{\delta C}{\beta^{{}^{\prime}}}\quad\text{as}\quad\beta^{{}^{\prime} }\to\infty. \tag{103}\]
Therefore, as \(\beta^{{}^{\prime}}\to\infty\), it yields
\[\bar{n}_{\mu}=\frac{\bar{m}_{\mu}}{1+\rho-\rho\,\delta C\,(1-d)} \tag{104}\]
To reach this result, we have also used the relation
\[\mathbb{E}_{z}\text{sign}[A+Bz]=\text{erf}\left[\frac{A}{\sqrt{2}B}\right]\, \tag{105}\]
where \(z\) is a standard Gaussian variable \(\mathcal{N}(0,1)\); we have also used the truncated expression \(\bar{n}_{\mu}=\bar{m}_{\mu}/(1+\rho)\) from the first equation in (104).
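The zero-temperature system (109) can be iterated in the same fixed-point fashion. The sketch below assumes, purely for illustration, i.i.d. diluted pattern entries \(\xi^{\nu}\in\{-1,0,+1\}\) with \(P(\xi^{\nu}=\pm 1)=(1-d)/2\) and \(P(\xi^{\nu}=0)=d\); the actual pattern statistics of the model at hand should be substituted for this choice.

```python
import numpy as np
from itertools import product
from scipy.special import erf

def solve_T0(K, d, rho, iters=500, damp=0.5):
    """Fixed-point iteration of the zero-temperature equations (109)."""
    # Enumerate xi in {-1, 0, +1}^K; i.i.d. diluted entries are assumed here
    # (an illustrative choice): P(xi = +/-1) = (1-d)/2, P(xi = 0) = d.
    configs = np.array(list(product([-1.0, 0.0, 1.0], repeat=K)))
    probs = np.prod(np.where(configs == 0.0, d, (1.0 - d)/2.0), axis=1)
    m = np.full(K, 0.8)
    for _ in range(iters):
        num = configs @ m                              # sum_nu m_nu xi^nu
        den = np.sqrt(2.0 * rho * (configs**2 @ m**2))
        ratio = np.zeros_like(num)
        np.divide(num, den, out=ratio, where=den > 0)  # all-zero xi: 0/0 -> 0
        new_m = (probs[:, None] * erf(ratio)[:, None] * configs).sum(axis=0)
        m = damp*new_m + (1.0 - damp)*m
    return m

print(solve_T0(K=2, d=0.2, rho=0.2))
```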
This research has been supported by Ministero degli Affari Esteri e della Cooperazione Internazionale (MAECI) via the BULBUL grant (Italy-Israel), CUP Project n. F85F21006230001, and has received financial support from the Simons Foundation (grant No. 454949, G. Parisi) and ICSC - Italian Research Center on High Performance Computing, Big Data and Quantum Computing, funded by European Union - NextGenerationEU.
Further, this work has been partly supported by The Alan Turing Institute through the Theory and Methods Challenge Fortnights event _Physics-informed machine learning_, which took place on 16-27 January 2023 at The Alan Turing Institute headquarters.
E.A. acknowledges financial support from Sapienza University of Rome (RM120172B8066CB0).
E.A., A.A. and A.B. acknowledge GNFM-INdAM (Gruppo Nazionale per la Fisica Matematica, Istituto Nazionale d'Alta Matematica); A.A. further acknowledges UniSalento for financial support via PhD-AI, and A.B. further acknowledges the PRIN-2022 project _Statistical Mechanics of Learning Machines: from algorithmic and information-theoretical limits to new biologically inspired paradigms_.
|
2309.02886 | Symmetric-Reciprocal-Match Method for Vector Network Analyzer
Calibration | This paper proposes a new approach, the symmetric-reciprocal-match (SRM)
method, for calibrating vector network analyzers (VNAs). The method involves
using multiple symmetric one-port loads, a two-port reciprocal device, and a
matched load. The load standards consist of two-port symmetric one-port
devices, and at least three unique loads are used. However, the specific
impedances of the loads are not specified. The reciprocal device can be any
transmissive device, although a non-reciprocal device can also be used if only
the one-port error boxes are of interest. The matched load is fully defined and
can be asymmetric. We numerically demonstrated the proposed method's accuracy
with synthetic data and with measurements of coaxial standards using a
commercial short-open-load-reciprocal (SOLR) calibration kit with verification
standards. An advantage of the proposed method is that only the match standard
is defined, whereas the remaining standards are partially defined, either
through symmetry or reciprocity. | Ziad Hatab, Michael Ernst Gadringer, Wolfgang Bösch | 2023-09-06T10:20:57Z | http://arxiv.org/abs/2309.02886v2 | # Symmetric-Reciprocal-Match Method for
###### Abstract
This paper proposes a new approach, the symmetric-reciprocal-match (SRM) method, for calibrating vector network analyzers (VNAs). The method involves using multiple symmetric one-port loads, a two-port reciprocal device, and a matched load. The load standards consist of two-port symmetric one-port devices, and at least three unique loads are used. However, the specific impedances of the loads are not specified. The reciprocal device can be any transmissive device, although a non-reciprocal device can also be used if only the one-port error boxes are of interest. The matched load is fully defined and can be asymmetric. We numerically demonstrated the proposed method's accuracy with synthetic data and with measurements of coaxial standards using a commercial short-open-load-reciprocal (SOLR) calibration kit with verification standards. An advantage of the proposed method is that only the match standard is defined, whereas the remaining standards are partially defined, either through symmetry or reciprocity.
vector network analyzer, calibration, microwave measurement +
Footnote †: Software implementation and measurements are available online: [https://github.com/ZiadHatab/srm-calibration](https://github.com/ZiadHatab/srm-calibration)
## I Introduction
The most commonly used method for calibrating vector network analyzers (VNAs) is the short-open-load-thru (SOLT) method [1], which requires that all four standards be fully characterized or modeled. In the past, many VNAs used a three-sampler architecture with three receivers. To account for the non-driving port's termination mismatches (switch terms), the VNA is modeled with the well-known 12-term model [2]. This model forms the foundation of the SOLT calibration.
Nowadays, modern VNAs use a full-reflectometry architecture that allows for sampling all waves, thus directly measuring the switch terms of a VNA by simply connecting a transmissive device between the ports [3]. This upgraded architecture enabled the use of the simplified error box model of VNAs [4], which has led to many new advanced calibration methods that surpass the accuracy of SOLT [2]. Furthermore, even with the three-sampler VNA architecture, it is possible to indirectly measure the switch terms of the VNA using a set of reciprocal devices, which enable the application of the error box model [5]. A well-known family of calibration methods based on the error box model is the self-calibration methods [2], which do not require full characterization of some of the standards. One of the most used self-calibration methods nowadays is the short-open-load-reciprocal (SOLR) method [6], which is the same as SOLT, but with any transmissive reciprocal device instead of the thru standard. SOLR has proven useful in scenarios where a direct connection is unavailable. However, the drawback of the SOLR method is the requirement of the full definition of the short-open-load (SOL) standards, which bounds the accuracy of SOLR to the SOL standards.
Other self-calibration methods include thru-reflect-line (TRL) and multiline TRL [7, 8, 9, 10], which use line standards of different lengths, thru connection, and symmetric unknown reflect standard. The thru standard in TRL is fully defined. However, there is an implementation that eliminates the requirement of the thru standard for any transmissive device with an additional reflect standard [11]. While multiline TRL is a very accurate calibration method, especially at millimeter-wave frequencies, it cannot be applied at lower frequencies, as it results in using an extremely long line standard. A common replacement for the multiline TRL method for on-wafer application is the line-reflect-match (LRM), thru-match-reflect-reflect (TMRR), and line-reflect-reflect-match (LRRM) methods [12, 13, 14, 15]. These methods use unknown symmetric reflect standards and one known match standard. However, these methods suffer from some impracticality, especially in defining the line standard and shifting the reference plane, as opposed to the TRL method. These methods can also be extended to account for crosstalk [16, 17, 18]. Additionally, due to the requirement of defining the thru/line standard, such methods can be challenging to use in on-wafer measurement scenarios where the probes are orthogonal or at an angle [19].
In this paper, we propose a new approach to self-calibration of VNAs using multiple symmetric one-port loads, a two-port reciprocal device, and a matched load. The multi-load one-port standards are two-port symmetric loads, and at least three unique loads must be used. The values of the loads themselves are not specified. For example, a short, an open, and any finite impedance load would be suitable. The reciprocal device can be any transmissive device. In fact, if we only care about the one-port error boxes of the VNA, then the two-port device can be any transmissive device, even if it is non-reciprocal. Lastly, the matched load is fully defined but can be asymmetric. The match standard can be implemented as part of the symmetric one-port loads to reduce the number of standards. We refer to this calibration method as the symmetric-reciprocal-match (SRM) method. All standards are generally partially defined, except for the match standard. We demonstrate the method using synthetic data of coplanar waveguide (CPW) structures, as well as measurements with commercial SOLR coaxial standards.
A significant benefit of the proposed approach is that all the standards are partially defined, except for the match standard. This is in contrast to LRRM/LRM/TMRR approaches, which necessitate fully defined thru/line standards. As a result, such techniques can be challenging in the case of on-wafer setups where the probes are positioned at an orthogonal angle. Equivalently, the SOLR calibration addresses the problem of the thru/line connection by using any two-port reciprocal device instead but necessitates the specification of the remaining standards. In brief, our SRM technique combines the benefits of LRRM/LRM/TMRR techniques in utilizing undefined symmetric standards, as well as the SOLR technique in utilizing a two-port reciprocal device. This revised definition of the standards enables accurate calibration by limiting the definition to solely the match standard.
The remainder of this article is structured as follows. In Section II, we discuss our SRM method when using a thru standard instead of any reciprocal device, highlighting the method's fundamentals. Afterward, in Section III, we extend the mathematics of the calibration to consider any transmissive reciprocal device. Section IV introduces a special case of the SRM method when considering a fixed distance between measuring ports, which is often the case in on-wafer applications. Lastly, in Section V, we provide numerical analysis using synthetic data and experimental measurements using commercial coaxial \(2.92\,\mathrm{mm}\) calibration and verification standards, and we conclude in Section VI.
## II The Simple Case Using a Thru Standard
In the general case of SRM calibration, no thru standard is required. Any transmissive reciprocal device would suffice. If only the one-port error boxes are desired, any transmissive device would be acceptable. However, the derivation of the generalized SRM calibration is based on creating an artificial thru standard via mathematical reformulation and additional one-port measurements. The handling of the artificial thru standard is explained in more detail in Section III. In this section, we assume a fully defined thru standard to derive the calibration workflow and extend it to the general case in Section III.
To start the derivation, we use the error box model of a two-port VNA, as illustrated in Fig. 1. This model is expressed in T-parameters as follows:
\[\mathbf{M}_{\mathrm{stand}}=\underbrace{k_{a}k_{b}}_{k}\underbrace{\begin{bmatrix}a _{11}&a_{12}\\ a_{21}&1\end{bmatrix}}_{\mathbf{A}}\mathbf{T}_{\mathrm{stand}}\underbrace{\begin{bmatrix} b_{11}&b_{12}\\ b_{21}&1\end{bmatrix}}_{\mathbf{B}}, \tag{1}\]
where \(\mathbf{M}_{\mathrm{stand}}\) and \(\mathbf{T}_{\mathrm{stand}}\) represent the measured and actual T-parameters of the standard, respectively. The matrices \(\mathbf{A}\) and \(\mathbf{B}\) are the one-port error boxes containing the first six error terms, and \(k\) is the seventh error term that describes the transmission error between the ports.
For a thru standard, the measured T-parameters are provided as follows:
\[\mathbf{M}_{\mathrm{thru}}=k\mathbf{A}\mathbf{B}. \tag{2}\]
In the next step, we will focus on measuring one-port standards. For the SRM method, we require symmetric two-port standards made from one-port devices, of which at least three should exhibit unique electrical responses. Examples of such standards include short, open, and impedance loads. It is not necessary to know the exact response of the standards themselves. Fig. 2 provides an illustration of the error box for one-port measurements.
The measured input reflection coefficient seen from each port is given as follows:
\[\Gamma_{a}^{(i)}=\frac{a_{11}\rho^{(i)}+a_{12}}{a_{21}\rho^{(i)}+1};\quad\Gamma _{b}^{(i)}=\frac{b_{11}\rho^{(i)}-b_{21}}{1-b_{12}\rho^{(i)}}, \tag{3}\]
where \(\Gamma_{a}^{(i)}\) and \(\Gamma_{b}^{(i)}\) are the \(i\)th measured reflection coefficients from the left and right ports, respectively. The actual response of the standard, which is assumed to be unknown, is denoted by \(\rho^{(i)}\).
The expression for the input reflection coefficient, as given in (3), is in the form of a Mobius transformation (also known as a bilinear transformation) [20, Chapter 3]. One important property of the Mobius transformation is that it can be described by an equivalent \(2\times 2\) matrix notation. For instance, (4) provides a general Mobius transformation with coefficients \(a,b,c,d\in\mathbb{C}\), along with its corresponding \(2\times 2\) matrix representation.
\[f(z)=\frac{az+b}{cz+d}\quad\longleftrightarrow\quad[f]=\begin{bmatrix}a&b\\ c&d\end{bmatrix} \tag{4}\]
In (4), we use brackets \([\cdot]\) to describe matrices associated with a Mobius transformation. The transformation coefficients are only unique up to a complex scalar multiple. This property of the Mobius transform can be easily shown by multiplying the numerator and denominator with a non-zero complex scalar. In terms of matrix notation, scaling the matrix with a complex scalar still represents the same Mobius transformation. Therefore,
\[[f]\equiv\kappa[f],\quad\kappa\neq 0 \tag{5}\]
Fig. 1: Two-port VNA error box model. Matrices are given as T-parameters.
Fig. 2: Two-port VNA error box model that illustrates the measurement of one-port standards. All matrices are provided as T-parameters. The index \(i\) indicates the measured standard, where \(i=1,2,\ldots,M\), with \(M\geq 3\).
The matrix representation of the Mobius transformation possesses an elegant property in its ability to describe composite Mobius transformations. In essence, when we compose one Mobius transformation with another, we obtain a new Mobius transformation with updated coefficients. This property can be expressed in matrix notation by computing the matrix product of the individual Mobius transformations. To illustrate this concept, we provide an example of the composition of two Mobius transformations \(f_{1}(z)\) and \(f_{2}(z)\), which are defined as follows:
\[f_{1}(z)=\frac{a_{1}z+b_{1}}{c_{1}z+d_{1}}\quad\longleftrightarrow\quad[f_{1 }]=\begin{bmatrix}a_{1}&b_{1}\\ c_{1}&d_{1}\end{bmatrix} \tag{6a}\] \[f_{2}(z)=\frac{a_{2}z+b_{2}}{c_{2}z+d_{2}}\quad\longleftrightarrow \quad[f_{2}]=\begin{bmatrix}a_{2}&b_{2}\\ c_{2}&d_{2}\end{bmatrix} \tag{6b}\]
The composite transformation is given as follows:
\[g(z)=(f_{1}\circ f_{2})(z) =\frac{a_{1}f_{2}(z)+b_{1}}{c_{1}f_{2}(z)+d_{1}} \tag{7}\] \[=\frac{(a_{1}a_{2}+b_{1}c_{2})z+a_{1}b_{2}+b_{1}d_{2}}{(a_{2}c_{1} +c_{2}d_{1})z+b_{2}c_{1}+d_{1}d_{2}}\]
Therefore, the corresponding matrix equivalent of the composite Mobius transformation \(g(z)\) is given as follows:
\[[g]=\begin{bmatrix}a_{1}a_{2}+b_{1}c_{2}&a_{1}b_{2}+b_{1}d_{2}\\ a_{2}c_{1}+c_{2}d_{1}&b_{2}c_{1}+d_{1}d_{2}\end{bmatrix}=[f_{1}][f_{2}] \tag{8}\]
which is the same as multiplying the matrices \([f_{1}]\) and \([f_{2}]\).
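This composition rule is easy to verify numerically. The short sketch below, with arbitrary random complex coefficients, checks both the matrix-product rule (8) and the scale invariance (5):

```python
import numpy as np

def mobius(M, z):
    """Apply the Mobius transformation with coefficient matrix M = [[a, b], [c, d]]."""
    return (M[0, 0]*z + M[0, 1]) / (M[1, 0]*z + M[1, 1])

rng = np.random.default_rng(1)
F1 = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
F2 = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
z = 0.3 - 0.2j

# Composition (8): applying f2 then f1 equals the transform with matrix F1 @ F2.
print(np.isclose(mobius(F1, mobius(F2, z)), mobius(F1 @ F2, z)))   # True

# Scale invariance (5): kappa * F1 represents the same transformation.
print(np.isclose(mobius(3.7j * F1, z), mobius(F1, z)))             # True
```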
Using matrix notation for the Mobius transformation, we can describe the input reflection coefficient measured from the left port as follows:
\[\Gamma_{a}^{(i)}=\frac{a_{11}\rho^{(i)}+a_{12}}{a_{21}\rho^{(i)}+1}\longleftrightarrow [\Gamma_{a}^{(i)}]=\underbrace{\begin{bmatrix}a_{11}&a_{12}\\ a_{21}&1\end{bmatrix}}_{\mathbf{A}} \tag{9}\]
To address the error box on the right side, we perform a similar process as before, but instead of using the measured reflection coefficient, we reformulate in terms of the reflection coefficient \(\rho^{(i)}\) as a function of the measured reflection coefficient \(\Gamma_{b}^{(i)}\), which is given as follows:
\[\rho^{(i)}=\frac{\Gamma_{b}^{(i)}+b_{21}}{b_{12}\Gamma_{b}^{(i)}+b_{11}} \longleftrightarrow[\rho^{(i)}]=\underbrace{\begin{bmatrix}1&b_{21}\\ b_{12}&b_{11}\end{bmatrix}}_{\mathbf{PBP}} \tag{10}\]
where \(\mathbf{P}\) is a \(2\times 2\) permutation matrix defined as
\[\mathbf{P}=\mathbf{P}^{T}=\mathbf{P}^{-1}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}. \tag{11}\]
By composing (10) with (9), we obtain a new Mobius transformation that describes the input reflection coefficient from the left port using measurements of the right port. This relationship can be written as follows:
\[\Gamma_{a}^{(i)}=\frac{h_{11}\Gamma_{b}^{(i)}+h_{12}}{h_{21}\Gamma_{b}^{(i)}+h _{22}}\longleftrightarrow[\Gamma_{a}^{(i)}]=\mathbf{H}=\begin{bmatrix}h_{11}&h_{ 12}\\ h_{21}&h_{22}\end{bmatrix} \tag{12}\]
Here, we use the variable \(\mathbf{H}\) to describe the Mobius transformation in (12) and differentiate it from the Mobius transformation in (9) to avoid confusion. It is important to note that both transformations are different, as they have distinct input parameters.
Due to the composite property of Mobius transformations, the coefficients of the transformation can be expressed as follows:
\[\mathbf{H}=\nu\mathbf{A}\mathbf{P}\mathbf{B}\mathbf{P},\qquad\forall\,\nu\neq 0. \tag{13}\]
It is important to note that the constant \(\nu\) is included because the coefficients of a Mobius transformation can only be defined up to a non-zero complex-valued scalar constant.
By solving for the coefficients \(h_{ij}\), we can determine (13). This equation is later used for establishing the calibration procedure by combining it with the thru standard. Since the coefficients \(h_{ij}\) are defined by the Mobius transformation in (12), which is based on the measurements of the symmetric one-port standards, we can rewrite the Mobius transformation as a linear system of equations in terms of its coefficients. Assuming that \(M\geq 3\) one-port standards were measured, the coefficients \(h_{ij}\) can be described as follows:
\[\underbrace{\begin{bmatrix}-\Gamma_{b}^{(1)}&-1&\Gamma_{b}^{(1)}\Gamma_{a}^{(1)}&\Gamma_{a}^{(1)}\\ -\Gamma_{b}^{(2)}&-1&\Gamma_{b}^{(2)}\Gamma_{a}^{(2)}&\Gamma_{a}^{(2)}\\ \vdots&\vdots&\vdots&\vdots\\ -\Gamma_{b}^{(M)}&-1&\Gamma_{b}^{(M)}\Gamma_{a}^{(M)}&\Gamma_{a}^{(M)}\end{bmatrix}}_{\mathbf{G}}\underbrace{\begin{bmatrix}h_{11}\\ h_{12}\\ h_{21}\\ h_{22}\end{bmatrix}}_{\mathbf{h}}=\mathbf{0} \tag{14}\]
The solution for the vector \(\mathbf{h}\) is found in the nullspace of \(\mathbf{G}\), as the system matrix \(\mathbf{G}\) has a nontrivial nullspace due to the equality to zero in (14). The nullspace has dimension larger than one only if \(\mathrm{rank}(\mathbf{G})<3\), which can only happen if we do not use at least three unique one-port standards.
While any vector in the nullspace of \(\mathbf{G}\) solves (14), we can optimally estimate the nullspace of \(\mathbf{G}\) in the presence of disturbances by computing its singular value decomposition (SVD) and using the right singular vector that corresponds to the smallest singular value [21]. As \(\mathbf{G}\) has four columns, it has four singular values and right singular vectors. We decompose the matrix \(\mathbf{G}\) using the SVD as follows:
\[\mathbf{G}=\sum_{i=1}^{4}s_{i}\mathbf{u}_{i}\mathbf{v}_{i}^{H} \tag{15}\]
where \(s_{i}\) is the \(i\)th singular value, while \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\) are the \(i\)th left and right singular vectors, respectively. The conventional ordering of the singular values is in decreasing order. Therefore, the smallest singular value is \(s_{4}\). Hence, the solution for \(\mathbf{h}\) is given by the fourth right singular vector as follows:
\[\mathbf{h}=\mathbf{v}_{4} \tag{16}\]
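A minimal NumPy sketch of this estimation step, assuming the paired one-port measurements at a single frequency point are stored as arrays (function and variable names are ours):

```python
import numpy as np

def solve_h(gamma_a, gamma_b):
    """Estimate the Mobius coefficients h of (12) from M >= 3 paired one-port
    measurements, Eqs. (14)-(16): the solution is the right singular vector
    of G belonging to the smallest singular value."""
    gamma_a = np.asarray(gamma_a, dtype=complex)
    gamma_b = np.asarray(gamma_b, dtype=complex)
    G = np.column_stack([-gamma_b, -np.ones_like(gamma_b),
                         gamma_b*gamma_a, gamma_a])
    _, _, vh = np.linalg.svd(G)
    h = vh[-1].conj()          # v4: right singular vector, smallest singular value
    return h.reshape(2, 2)     # H = [[h11, h12], [h21, h22]], up to a scalar
```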
Now that we have solved for \(\mathbf{h}\), and hence \(\mathbf{H}\) in (13), we can combine the measurements of the thru standards with the results of \(\mathbf{H}\) to form an eigenvalue problem regarding the error box coefficients. The combined result for the left error box is defined as follows:
\[\mathbf{M}_{\mathrm{thru}}\mathbf{P}\mathbf{H}^{-1}=\frac{k}{\nu}\mathbf{A}\mathbf{P}\mathbf{A}^{-1} \tag{17}\]
Although (17) is not strictly in the canonical form for an eigenvalue decomposition, as the middle matrix is not diagonal, it can still be decomposed because the middle matrix is a constant permutation matrix. If we apply the eigendecomposition to (17), we obtain the following decomposition:
\[\boldsymbol{M}_{\mathrm{thru}}\boldsymbol{P}\boldsymbol{H}^{-1}=\frac{k}{\nu} \boldsymbol{A}\boldsymbol{P}\boldsymbol{A}^{-1}=\boldsymbol{W}_{a}\boldsymbol {\Lambda}\boldsymbol{W}_{a}^{-1}, \tag{18}\]
where the matrix \(\boldsymbol{W}_{a}\) corresponds to the eigenvectors, and the matrix \(\boldsymbol{\Lambda}\) corresponds to the eigenvalues. Both are calculated as follows:
\[\boldsymbol{W}_{a} =\begin{bmatrix}w_{11}^{(a)}&w_{12}^{(a)}\\ w_{21}^{(a)}&w_{22}^{(a)}\end{bmatrix}=\begin{bmatrix}\frac{a_{11}+a_{12}}{a_ {21}+1}&\frac{-a_{11}+a_{12}}{-a_{21}+1}\\ 1&1\end{bmatrix} \tag{19a}\] \[\boldsymbol{\Lambda} =\begin{bmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{bmatrix}=\begin{bmatrix}\frac{k}{\nu}&0\\ 0&-\frac{k}{\nu}\end{bmatrix} \tag{19b}\]
Generally, the order of the eigenvectors and eigenvalues is not unique. To ensure the correct order, we need to know the value of \(k/\nu\). However, this term is still unknown at this stage. After solving for the error terms using both possible solutions, the sorting is done through trial and error. For instance, once the error terms have been solved, we could use one of the one-port standards as a metric to determine the correct order.
We can solve the eigenvalue problem for matrix \(\boldsymbol{B}\) by reversing the multiplication order of the matrices in (17). This gives us the following equation:
\[\left(\boldsymbol{P}\boldsymbol{H}^{-1}\boldsymbol{M}_{\mathrm{ thru}}\right)^{T}=\frac{k}{\nu}\boldsymbol{B}^{T}\boldsymbol{P}\boldsymbol{B}^{-T}= \boldsymbol{W}_{b}\boldsymbol{\Lambda}\boldsymbol{W}_{b}^{-1} \tag{20}\]
Using the transpose operation is optional, but it allows us to derive the eigenvectors in a similar order as with the left error box. As a result, the eigenvectors and eigenvalues are given as follows:
\[\boldsymbol{W}_{b} =\begin{bmatrix}w_{11}^{(b)}&w_{12}^{(b)}\\ w_{21}^{(b)}&w_{22}^{(b)}\end{bmatrix}=\begin{bmatrix}\frac{b_{11}+b_{12}}{b_ {12}+1}&\frac{-b_{11}+b_{12}}{-b_{12}+1}\\ 1&1\end{bmatrix} \tag{21a}\] \[\boldsymbol{\Lambda} =\begin{bmatrix}\lambda_{1}&0\\ 0&\lambda_{2}\end{bmatrix}=\begin{bmatrix}\frac{k}{\nu}&0\\ 0&-\frac{k}{\nu}\end{bmatrix} \tag{21b}\]
Finally, we need an additional equation for each port to calculate the error terms from each error box. This equation comes from the match standard, which defines the reference impedance of the calibration. In general, the match standard does not have to be the same at each port. However, since we are most likely to use an impedance standard as part of the symmetric one-port devices, it makes sense to reuse the match standards. For each port, the reflection coefficient of a match standard is given as follows:
\[\rho_{a}^{(m)}=\frac{Z_{a}^{(m)}-Z_{a}^{(ref)}}{Z_{a}^{(m)}+Z_{a}^{(ref)}}; \quad\rho_{b}^{(m)}=\frac{Z_{b}^{(m)}-Z_{b}^{(ref)}}{Z_{b}^{(m)}+Z_{b}^{(ref)}} \tag{22}\]
where \(Z_{a}^{(m)}\) and \(Z_{b}^{(m)}\) represent the complex impedance definition of the match standard from each port. The user sets the values of \(Z_{a}^{(ref)}\) and \(Z_{b}^{(ref)}\) to specify the reference impedance, for example, \(50\,\Omega\).
By utilizing knowledge of the match standard and the equation that describes the input reflection coefficient, as given in (3), we can combine this result with the eigenvectors to form a linear system of equations for each port. The following is for the left port:
\[\begin{bmatrix}-1&-1&w_{11}^{(a)}&w_{11}^{(a)}\\ 1&-1&-w_{12}^{(a)}&w_{12}^{(a)}\\ -\rho_{a}^{(m)}&-1&\Gamma_{a}^{(m)}\rho_{a}^{(m)}&\Gamma_{a}^{(m)}\end{bmatrix} \begin{bmatrix}a_{11}\\ a_{12}\\ a_{21}\\ 1\end{bmatrix}=\boldsymbol{0} \tag{23}\]
The system of equations for the right port can be obtained in a similar way, resulting in the following system of equations:
\[\begin{bmatrix}-1&-1&w_{11}^{(b)}&w_{11}^{(b)}\\ 1&-1&-w_{12}^{(b)}&w_{12}^{(b)}\\ -\rho_{b}^{(m)}&1&-\Gamma_{b}^{(m)}\rho_{b}^{(m)}&\Gamma_{b}^{(m)}\end{bmatrix} \begin{bmatrix}b_{11}\\ b_{21}\\ b_{12}\\ 1\end{bmatrix}=\boldsymbol{0} \tag{24}\]
The error terms are solved by finding the nullspace of the system matrix. However, since the nullspace is only unique up to a scalar factor, we normalize it by the last element to make it equal to 1. The system matrix can be extended by an arbitrary number of defined impedance standards to improve the solution. It is important to note that we obtain two systems of equations for each port since the order of the eigenvectors is unknown. As a result, we solve for both possible orderings and choose the answer that results in a calibrated measurement closest to a known estimate, such as that of a reflect standard.
An interesting observation is the structure of (23) and (24), where the first two rows in the system matrix obtained from the eigenvectors resemble measurements of ideal short and open standards. In general, the expressions in (23) and (24) are identical to those of a one-port SOL calibration when assuming ideal short and open standards. Thus, we were able to replicate measurements of ideal open and short standards by using symmetric undefined one-port devices and a thru standard.
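For reference, a sketch of this final one-port step for the left port, in the notation of (19a) and (23); here `w11` and `w12` denote the first components of the normalized eigenvectors, and the names are ours:

```python
import numpy as np

def left_error_terms(w11, w12, rho_m, gamma_m):
    """Solve (23) for (a11, a12, a21): nullspace of the system matrix via SVD,
    normalized so that the last entry of the solution vector equals 1."""
    X = np.array([[-1.0, -1.0, w11, w11],
                  [1.0, -1.0, -w12, w12],
                  [-rho_m, -1.0, gamma_m*rho_m, gamma_m]], dtype=complex)
    _, _, vh = np.linalg.svd(X)
    x = vh[-1].conj()
    x = x / x[-1]              # enforce the normalization of Eq. (23)
    return x[0], x[1], x[2]    # a11, a12, a21
```

The same routine, applied to the system matrix of (24), returns \(b_{11}\), \(b_{21}\), and \(b_{12}\); running it for both eigenvector orderings implements the trial-and-error selection described above.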
The final error term that needs to be solved is the transmission error term \(k\). Since we are working with a thru standard, we can directly extract \(k\) by multiplying the inverse of the one-port error boxes by the measurements of the thru standard. In Section III, we introduce a different approach for computing \(k\) using any transmissive reciprocal standard, as done in SOLR calibration [6].
## III Generalization without a Thru Standard
In the previous section, we explained how to calculate the error terms using at least three symmetric one-port standards, a thru standard, and a match standard. The thru standard can cause difficulties, as it is not always possible to physically achieve such a standard.
The equations derived in the previous section can be used without changes if we obtain an equation similar to that of a thru standard, as given in (2). Therefore, this section aims to derive what we will refer to as a virtual thru standard by using additional one-port standards.
The necessary standards, excluding the match standard, for the generalized SRM calibration are shown in Fig. 3.
The network standard is an unknown transmissive two-port standard. This standard does not need to be reciprocal if only the one-port error terms are of interest. The additional network-load standard uses the same two-port network standard and the same one-port symmetric standards. As mentioned in the previous section, we require at least \(M\geq 3\) one-port symmetric standards. Hence, we also need a corresponding network-load standard for every symmetric one-port load standard. Generally, we only need the network-load standard from one port, which could be either port.
Based on the network standard, the following measurement is available:
\[\mathbf{M}_{\mathrm{net}}=k\mathbf{A}\underbrace{\begin{bmatrix}-\det(\mathbf{S})&\frac {S_{11}}{S_{21}}\\ \frac{-S_{22}}{S_{21}}&\frac{1}{S_{21}}\end{bmatrix}}_{\mathbf{N}}\mathbf{B} \tag{25}\]
where \(\det\left(\mathbf{S}\right)=S_{11}S_{22}-S_{12}S_{21}\).
An expression similar to the matrix \(\mathbf{H}\) in (13) can be obtained using the network-load standards from the left port and the load standards from the right port. This results in an expression similar to (13), but with \(\mathbf{A}\) replaced by \(\mathbf{A}\mathbf{N}\) and with an adjustment to the scaling factor. The scaling factor is unknown and need not be equal to the constant in (13). We can also achieve the same result by considering the network-load standards from the right port and the symmetric load standards from the left port. As a result, combining the network-load standards with the symmetric load standards, we obtain the following result for each port, depending on where the network-load was implemented:
\[\mathbf{F}_{a} =\eta\mathbf{A}\mathbf{N}\mathbf{P}\mathbf{B}\mathbf{P}, \forall\,\eta\neq 0, \tag{26a}\] \[\mathbf{F}_{b} =\zeta\mathbf{A}\mathbf{P}\mathbf{N}\mathbf{B}\mathbf{P}, \forall\,\zeta\neq 0 \tag{26b}\]
Using the results of \(\mathbf{M}_{\mathrm{net}}\), \(\mathbf{H}\), and \(\mathbf{F}\) from (25), (13), and (26), respectively, we can create a virtual thru standard by combining them in the following manner:
\[\mathbf{M}_{\mathrm{thru}} =\mathbf{H}\mathbf{F}_{a}^{-1}\mathbf{M}_{\mathrm{net}}=\frac{\nu}{\eta}k\mathbf{ A}\mathbf{B} \tag{27a}\] \[\mathbf{M}_{\mathrm{thru}} =\mathbf{M}_{\mathrm{net}}\mathbf{P}\mathbf{F}_{b}^{-1}\mathbf{H}\mathbf{P}=\frac{\nu }{\zeta}k\mathbf{A}\mathbf{B} \tag{27b}\]
Therefore, we can obtain a thru measurement without measuring a thru standard using the results of (27). We simply use the results from the previous section and substitute (27) in place of the thru measurements. The only difference is in the eigenvalues, which become \(\pm k/\eta\) or \(\pm k/\zeta\). However, this change does not affect anything, as \(\nu\), \(\eta\), and \(\zeta\) result from the normalization choice of the Mobius transformation and are assumed unknown in any case.
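A minimal sketch of the virtual-thru construction (27), assuming the measured and estimated matrices are available as \(2\times 2\) NumPy arrays (function and argument names are ours):

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

def virtual_thru(M_net, H, F, network_load_port='left'):
    """Build the virtual thru measurement of Eq. (27) from the two-port network
    measurement and the Mobius coefficient matrices H and F_a (or F_b)."""
    if network_load_port == 'left':               # Eq. (27a), using F_a
        return H @ np.linalg.inv(F) @ M_net
    return M_net @ P @ np.linalg.inv(F) @ H @ P   # Eq. (27b), using F_b
```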
To complete the two-port calibration, we must solve for the transmission error term \(k\). We can use the same method as in SOLR calibration [6] by calculating \(k\) through the determinant of the one-port corrected measurement of the network standard, given that it is reciprocal (i.e., \(S_{21}=S_{12}\)). Assuming the network standard is indeed reciprocal, we can solve for \(k\) by first applying the one-port error boxes to the measurement of the network standard as follows:
\[\mathbf{A}^{-1}\mathbf{M}_{\mathrm{net}}\mathbf{B}^{-1}=k\mathbf{N} \tag{28}\]
Afterward, by taking the determinant from both sides, we obtain the following:
\[\det\left(\mathbf{A}^{-1}\mathbf{M}_{\mathrm{net}}\mathbf{B}^{-1}\right)=k^{2}\underbrace {\det\left(\mathbf{N}\right)}_{=1} \tag{29}\]
Hence, \(k\) is solved as follows:
\[k=\pm\sqrt{\det\left(\mathbf{A}^{-1}\mathbf{M}_{\mathrm{net}}\mathbf{B}^{-1}\right)} \tag{30}\]
where the selection of the appropriate sign is determined by comparing it to a known estimate of the network.
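A sketch of this last step, Eqs. (28)-(30); testing the sign against a rough a-priori estimate `N_est` of the network (our naming) is one possible way to resolve the ambiguity:

```python
import numpy as np

def solve_k(A, B, M_net, N_est):
    """Transmission error term k from Eqs. (28)-(30) for a reciprocal network."""
    kN = np.linalg.inv(A) @ M_net @ np.linalg.inv(B)   # equals k*N, Eq. (28)
    k = np.sqrt(np.linalg.det(kN))                     # det(N) = 1, Eq. (29)
    # Resolve the +/- ambiguity of Eq. (30) against the network estimate.
    if np.linalg.norm(kN/(-k) - N_est) < np.linalg.norm(kN/k - N_est):
        k = -k
    return k
```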
## IV Special Layout for On-wafer Application
The presented SRM calibration method applies to any measurement setup where the standards can be implemented. However, a particular case for on-wafer calibration arises when considering that the distance between the probes must remain constant. Semi-automatic probe station users often request this requirement, where only the chuck platform is motorized. For these measurement setups, the standards must be implemented with a constant distance between the probes to perform the calibration automatically.
Considering the standards depicted in Fig. 3, we can see that the right probe would need to be moved to the right to measure the network-load standard. The network standard already dictates the distance between the probes, and cascading another
Fig. 3: Two-port VNA error box model illustrating the standards used to create a virtual thru standard. All matrices are provided as T-parameters. The index \(i\) indicates the measured standard, where \(i=1,2,\ldots,M\), with \(M\geq 3\).
standard would naturally increase the spacing, requiring probe movement.
In planar circuit calibration, as in on-wafer measurement setups, we can take advantage of choosing the network standard to be a symmetric transmissive network. Hence, we can split the network into two cascaded, flipped asymmetric networks. With this notation, we can use half of the network to define the network-load standard. An illustration of coplanar waveguide (CPW) standards is depicted in Fig. 4.
For any symmetric network (i.e., \(S_{11}=S_{22}\) and \(S_{12}=S_{21}\)), we can divide its T-parameters into two cascaded networks that are identical and flipped [22]. This network can be expressed as follows:
\[\mathbf{N}=\mathbf{R}\mathbf{P}\mathbf{R}^{-1}\mathbf{P} \tag{31}\]
where \(\mathbf{P}\) represents the permutation matrix, as defined in (11), and \(\mathbf{R}\) is the half-asymmetric part of the network standard.
By substituting the new definition of the network standard from (31) into the expressions (25), (13), and (26), we arrive at the following expressions:
\[\mathbf{M}_{\mathrm{net}} =k\mathbf{A}\mathbf{R}\mathbf{P}\mathbf{R}^{-1}\mathbf{P}\mathbf{B} \tag{32a}\] \[\mathbf{F}_{a} =\eta\mathbf{A}\mathbf{R}\mathbf{P}\mathbf{B}\mathbf{P},\qquad\forall\,\eta\neq 0\] (32b) \[\mathbf{F}_{b} =\zeta\mathbf{A}\mathbf{R}^{-1}\mathbf{P}\mathbf{B}\mathbf{P},\qquad\forall\,\zeta\neq 0. \tag{32c}\]
Therefore, by combining the results of the above expressions with \(\mathbf{H}\) from (13), we create a virtual thru standard as follows:
\[\mathbf{M}_{\mathrm{thru}} =\mathbf{H}\mathbf{F}_{a}^{-1}\mathbf{M}_{\mathrm{net}}\mathbf{P}\mathbf{H}^{-1}\mathbf{F} _{a}\mathbf{P}=k\mathbf{A}\mathbf{B}, \tag{33a}\] \[\mathbf{M}_{\mathrm{thru}} =\mathbf{F}_{b}\mathbf{H}^{-1}\mathbf{M}_{\mathrm{net}}\mathbf{P}\mathbf{F}_{b}^{-1} \mathbf{H}\mathbf{P}=k\mathbf{A}\mathbf{B}. \tag{33b}\]
With the virtual thru standard established, the remaining calibration process follows the same procedure discussed in the previous section.
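It is straightforward to check numerically that (31) produces a symmetric network for any invertible half-network \(\mathbf{R}\). The sketch below converts the resulting T-parameters back to S-parameters using the convention of (25) and confirms \(S_{11}=S_{22}\) and \(S_{12}=S_{21}\):

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

def t_to_s(T):
    """S-parameters from the T-parameter convention of Eq. (25)."""
    s21 = 1.0 / T[1, 1]
    s11 = T[0, 1] / T[1, 1]
    s22 = -T[1, 0] / T[1, 1]
    s12 = T[0, 0] - T[0, 1]*T[1, 0]/T[1, 1]
    return np.array([[s11, s12], [s21, s22]])

rng = np.random.default_rng(2)
R = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
S = t_to_s(R @ P @ np.linalg.inv(R) @ P)          # Eq. (31)
print(np.isclose(S[0, 1], S[1, 0]), np.isclose(S[0, 0], S[1, 1]))  # True True
```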
One elegant application using half-network standards is the use of angled calibration. This method involves positioning the probes at an angle rather than facing each other. Traditional calibration methods such as TRL, LRM, and LRRM do not allow this type of calibration, whereas SOLR is often used for such scenarios [19]. Fig. 5 illustrates a potential implementation of the network and half-network standards at a \(90^{\circ}\) angle.
## V Experiments
This section discusses two experiments. The first experiment involves numerical analysis using synthetic data to demonstrate different aspects of SRM calibration. It includes a demonstration of the SRM method using network-load standards with a full network (as discussed in Section III) and with a half network (as discussed in Section IV). In the second experiment, we present measurements using SOLR coaxial standards and compare the SRM method against SOLR calibration using characterized verification standards with defined uncertainties.
### _Numerical Analysis_
The procedure for the numerical analysis involves creating synthetic data of CPW standards using the model developed in [23, 24, 25]. To emulate an on-wafer setup accurately, we utilize error boxes from an actual on-wafer setup that were extracted using multiline TRL calibration on an impedance standard substrate (ISS). Further details on the measurement setup can be found in [10], where the accuracy of the CPW model was tested against the measurements. The measurement data set is available via [26]. In this numerical setting, the aim is to generate SRM standards based on the CPW model and embed them within the error boxes of the actual VNA setup. A block diagram summarizing this numerical experiment is depicted in Fig. 6.
Regarding the geometric parameters of the CPW structure used for simulation, we employed the following dimensions, which are based on the actual measured ISS: signal width of \(49.1\,\mathrm{\mu m}\), ground width of \(273.3\,\mathrm{\mu m}\), conductor spacing of \(25.5\,\mathrm{\mu m}\), and conductor thickness of \(4.9\,\mathrm{\mu m}\). The substrate is made of lossless Alumina with a dielectric constant of \(9.9\).
Fig. 4: Illustration of CPW structures implementing the proposed half-network approach of SRM calibration. The match standard is optional if the symmetric impedance standard is reused as the match standard.
Fig. 5: Illustration of CPW structures implementing the half network-load standards in an orthogonal orientation. The symmetric one-port standards are not shown, as they do not pose any mechanical challenge in orthogonal orientation.
Fig. 6: Block diagram illustration of the numerical simulation concept to generate realistic synthetic data.
The conductor is made of gold with relative conductivity to copper of \(70\%\), where the conductivity of copper is \(58\,\mathrm{MS/m}\).
For the SRM standards, we implemented match, short, and open standards as non-ideal standards, as shown in Fig. 7. To create the network-load standards, we used a \(4\,\mathrm{mm}\) CPW line as the reciprocal standard, which is combined with the non-ideal match, short, and open standards. Additionally, as discussed in Section IV, we created half network-load standards using half of the reciprocal standard, i.e., a \(2\,\mathrm{mm}\) CPW line. The CPW standards are similar to the illustration in Fig. 4 for the half network-load standards, except that the match is reused from the symmetric standards.
We used \(Z^{ref}=50\,\mathrm{\SIUnitSymbolOhm}\) as the reference impedance of the calibration at both ports. In the SRM calibration procedure, none of the standards is specified except for the match, which enables the definition of the reference impedance.
To verify the accuracy of the calibration, we included a stepped impedance line as the DUT, which uses the same CPW structure with the only exception of a signal width equal to \(15\,\mathrm{\SIUnitSymbolMicro m}\). The data has been processed in Python with the help of the package _scikit-rf_[27]. Fig. 8 shows the DUT before and after embedding in the error boxes.
To verify the numerical accuracy of the calibration, we define an error metric as the magnitude of the error vector of the calibrated response to the actual response given by
\[\text{Error}_{ij}\ \left(\mathrm{dB}\right)=20\log_{10}\left|S_{ij}^{\mathrm{ cal}}-S_{ij}^{\mathrm{true}}\right| \tag{34}\]
where \(S_{ij}^{\mathrm{cal}}\) represents the calibrated value and \(S_{ij}^{\mathrm{true}}\) is the corresponding true value.
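For reference, a direct implementation of this metric, vectorized over frequency points:

```python
import numpy as np

def error_db(s_cal, s_true):
    """Error metric of Eq. (34): magnitude of the error vector in dB."""
    return 20.0 * np.log10(np.abs(np.asarray(s_cal) - np.asarray(s_true)))
```

With double-precision arithmetic, the floor of this metric is set by machine precision, roughly \(20\log_{10}(2^{-52})\approx-313\,\mathrm{dB}\).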
Applying the SRM method using both full-network and half-network variants, we observe in Fig. 9 that both methods yield errors approaching zero, constrained only by the numerical precision of the software.
### _Coaxial Measurements_
The measurement involves comparing the proposed SRM method with a SOLR calibration using a commercial SOLR coaxial calibration kit with a \(2.92\,\mathrm{mm}\) interface [28]. The calibration results are compared to fully characterized verification standards with defined uncertainty bounds. The VNA used for the measurement is a ZVA from Rohde&Schwarz (R&S), and the used calibration kit is the ZN-Z229 \(2.92\,\mathrm{mm}\) calibration kit from R&S. The standards used from the kit are short, open, and match standards with female interfaces, along with two adapters, one female-female and one female-male of equal length. This data is used to conduct the SOLR calibration. The adapter standard is assumed to be unknown during the SOLR calibration process.
For the implementation of SRM standards, the symmetrical standards are directly measured by connecting the three one-port devices at both ports: short, open, and match. The female-female adapter is used to represent the reciprocal network. For the network-load standard, the symmetrical one-port devices are connected to the female-male adapter and measured at the left port. In all steps, the standards are assumed unknown, except for the match standard, which is only defined in the final step of the calibration via (23) and (24). An example that illustrates the measurement of the standards is shown in Fig. 10.
The verification kit utilized for the comparison is the ZV-Z429 \(2.92\,\mathrm{mm}\) verification kit from R&S. The kit contains a
Fig. 8: DUT S-parameter response before and after embedding within the error boxes.
Fig. 10: Example photos of measured coaxial standards. (a) load standard, (b) adapter (network), and (c) load connected with an adapter (network-load).
Fig. 7: Models used to simulate non-ideal load standards: (a) \(50\,\mathrm{\SIUnitSymbolOhm}\) match standard with \(L_{0}=5\,\mathrm{pH},C_{0}=0.5\,\mathrm{fF}\), (b) short standard with \(L_{0}=10\,\mathrm{pH},C_{0}=0.5\,\mathrm{fF}\), and (c) open standard with \(C_{0}=10\,\mathrm{fF},L_{0}=0.5\,\mathrm{pH}\). All standards are offset by a \(200\,\mathrm{\SIUnitSymbolOhm}\) CPW line segment.
Fig. 9: The error of the calibrated DUT using the SRM method, once with the full-network approach and once with the half-network approach.
mismatch standard and an offset short standard with female interfaces. These verification standards have been previously characterized by the manufacturer, and their S-parameters are provided with uncertainty bounds.
The results from calibrating the mismatch and offset short verification kit using both the SOLR and SRM calibration methods are depicted in Fig. 11. The plots reveal that both calibration methods produced similar outcomes, with errors relative to the reference data of the verification kit remaining below \(-30\,\mathrm{dB}\). To facilitate visual comparison, we opted to plot the group delay instead of the phase. In both the SOLR and SRM calibrations, the group delay overlaps with the reference data for both the mismatch and the offset short. However, we observe a small discrepancy in the magnitude response of the offset short standard above \(15\,\mathrm{GHz}\), where ripples appear. Nevertheless, this falls within the uncertainty bounds of the magnitude response of the offset short.
A possible cause for the variation in the magnitude of the calibrated offset short with the SRM method is the pin gap of the connectors, as the SRM method involves more measurements using the network-load standards. We have summarized the pin gap after mating for the different standards in Table I. The table shows that the adapter standard used to create the network-load standard has the most significant pin gap distance (i.e., \(54.61\,\mathrm{\mu m}\)). This ripple is also noticeable when analyzing the difference between the error terms of the SOLR and SRM calibrations, as shown in Fig. 12. It is evident that both ports exhibit ripple in the source match term, which is most likely caused by the pin gap, as the source match error term describes the reflection at the calibration plane, where the pin gap has the most effect [29].
A final comparison is made between the calibrated female-female adapter of both calibration methods. In both SOLR and SRM methods, the adapter was assumed to be unknown but reciprocal during the calibration process. The reference S-parameters of the adapter were provided by the manufacturer and used to establish the error metric. However, no uncertainty bounds were available. Fig. 13 depicts the calibrated adapter derived from both SOLR and SRM methods. These measurement results are compared to the reference S-parameters of the adapter. Both calibration procedures deliver comparable results with similar errors.
Although SOLR and SRM delivered similar results in this experimental example, it is important to note that for the SOLR method, all SOL standards must be characterized beforehand, whereas for the SRM method only the match standard must be characterized.
Fig. 11: Comparison of calibrated mismatch and offset short verification kits using SOLR and SRM methods. The uncertainty bounds are of the reference measurement and reported as \(95\%\) expanded uncertainty.
Fig. 12: The magnitude of the error vector of the VNA’s error terms obtained from SOLR and SRM calibration methods.
Fig. 13: Comparison of the calibrated female-female adapter using SOLR and SRM methods.
## VI Conclusion
This article presents a new VNA calibration method based on partially defined standards. The proposed SRM method uses one-port symmetric standards, a two-port reciprocal device, a combination of the reciprocal device with the one-port devices, and a match standard. Among all standards, only the match standard must be characterized; it defines the calibration's reference impedance.
We have extended our proposed method to the particular case of an on-wafer setup, where the probes are fixed in distance. To do this, we restricted the two-port reciprocal device to be symmetric, allowing us to use half of it to define the network-load standards.
To demonstrate the SRM method, we performed numerical analysis using CPW synthetic data based on an actual on-wafer measurement setup. Additionally, we have shown the SRM method using measurements based on commercial \(2.92\,\mathrm{mm}\) coaxial standards, indicating that the method is compatible with commercial SOLR standards where only the match standard is specified. Overall, the proposed SRM method offers greater flexibility in standard definition, potentially decreasing errors associated with inadequate calibration standard specifications.
## Acknowledgment
The financial support by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology, and Development is gratefully acknowledged.
|
2303.06463 | Power efficient ReLU design for neuromorphic computing using spin Hall
effect | We demonstrate a magnetic tunnel junction injected with spin Hall current to
exhibit linear rotation of magnetization of the free-ferromagnet using only the
spin current. Using the linear resistance change of the MTJ, we devise a
circuit for the rectified linear activation (ReLU) function of the artificial
neuron. We explore the role of different spin Hall effect (SHE) heavy metal
layers on the power consumption of the ReLU circuit. We benchmark the power
consumption of the ReLU circuit with different SHE layers by defining a new
parameter called the spin Hall power factor. It combines the spin Hall angle,
resistivity, and thickness of the heavy metal layer, which translates to the
power consumption of the different SHE layers during spin-orbit
switching/rotation of the free FM. We employ a hybrid spintronics-CMOS
simulation framework that couples Keldysh non-equilibrium Green's function
formalism with Landau-Lifshitz-Gilbert-Slonzewski equations and the HSPICE
circuit simulator to account for diverse physics of spin-transport and the CMOS
elements in our proposed ReLU design. We also demonstrate the robustness of the
proposed ReLU circuit against thermal noise and non-trivial power-error
trade-off that enables the use of an unstable free-ferromagnet for
energy-efficient design. Using the proposed circuit, we evaluate the
performance of the convolutional neural network for MNIST datasets and
demonstrate comparable classification accuracies to the ideal ReLU with an
energy consumption of 75 $pJ$ per sample. | Venkatesh Vadde, Bhaskaran Muralidharan, Abhishek Sharma | 2023-03-11T17:27:22Z | http://arxiv.org/abs/2303.06463v1 | # Power efficient ReLU design for neuromorphic computing using spin Hall effect
###### Abstract
We demonstrate a magnetic tunnel junction injected with spin Hall current to exhibit linear rotation of magnetization of the free-ferromagnet using only the spin current. Using the linear resistance change of the MTJ, we devise a circuit for the rectified linear activation (ReLU) function of the artificial neuron. We explore the role of different spin Hall effect (SHE) heavy metal layers on the power consumption of the ReLU circuit. We benchmark the power consumption of the ReLU circuit with different SHE layers by defining a new parameter called the spin Hall power factor. It combines the spin Hall angle, resistivity, and thickness of the heavy metal layer, which translates to the power consumption of the different SHE layers during spin-orbit switching/rotation of the free FM. We employ a hybrid spintronics-CMOS simulation framework that couples Keldysh non-equilibrium Green's function formalism with Landau-Lifshitz-Gilbert-Slonzewski equations and the HSPICE circuit simulator to account for diverse physics of spin-transport and the CMOS elements in our proposed ReLU design. We also demonstrate the robustness of the proposed ReLU circuit against thermal noise and non-trivial power-error trade-off that enables the use of an unstable free-ferromagnet for energy-efficient design. Using the proposed circuit, we evaluate the performance of the convolutional neural network for MNIST datasets and demonstrate comparable classification accuracies to the ideal ReLU with an energy consumption of 75 \(pJ\) per sample.
## I Introduction
Artificial neural networks (ANNs) are widely used by the machine learning and data science communities to solve complex problems. ANNs are inspired by biological brains, which intertwine memory and computing to solve diverse problems while consuming little energy [1, 2]. Modern von Neumann computers, which separate memory and computing, are not well suited to hardware implementations of neural networks [3]. Neural networks contain highly interconnected perceptrons, which define the mathematical model of a biological neuron as a sum of weighted inputs passed through an activation function [4].
Activation functions are central to learning in neural networks [5]. The activation function introduces non-linearity to the network and enables it to learn complex data structures and differentiate between outputs. Traditionally, sigmoid and tanh activation functions have been widely utilized, but these standard functions limit the network's ability to learn since they saturate when the input is very high or very low [6, 7]. The sigmoid and tanh functions also face the vanishing gradient problem, where the gradient information used to train the network becomes almost zero for deep networks, limiting a deep network's learning capacity [6]. Glorot et al. [7] showed that the rectified linear unit (ReLU) activation function can improve the learning speed of various neural networks. The ReLU function also overcomes the vanishing gradient and saturation problems that the tanh and sigmoid functions face, and it often produces better results than these traditional functions in neural networks [6, 7]. The ReLU has thus become a default activation function for various neural networks [5, 8, 9].
The ReLU function is described as
\[f(x)=\begin{cases}0&\text{if }x\leq 0\\ x&\text{if }x>0\end{cases}\]
CMOS implementations of the activation function have been explored in a few works [10, 11, 12], but these realizations require additional interconnect circuits to interface with the synaptic and max-pooling layers [13] of the neural network. CMOS implementations are also limited by area and energy requirements [3, 14]. On the other hand, spintronics provides a wide range of devices that can be engineered to exhibit non-volatility, plasticity, and oscillatory and stochastic behavior [15, 16, 17, 18, 19, 20]. These properties are well suited to in-memory computing, enabling neuromorphic computing and taking advantage of the paradigm "let physics do the computing" [21]. In this paper, we demonstrate an MTJ-based design to emulate the ReLU function, which can be easily connected to cross-bar-based synaptic layers [22, 23] and max-pooling layers [13].
Current-induced spin-orbit torques (SOTs) originating from the spin Hall effect [24, 25, 26, 27] in heavy metal (HM)/ferromagnet (FM) hetero-structures have recently emerged as an energy-efficient means of manipulating magnetization at the nanoscale. The efficiency of converting charge current to spin current via the SHE is quantified by the spin Hall angle (\(\theta_{SH}\)). There have been consistent efforts [28, 29, 30, 31, 32, 33, 34] to increase \(\theta_{SH}\), but this has come at the cost of increased resistivity, which increases the power consumption of the HM. We define the spin Hall power factor (\(\epsilon_{SHE}\)), which directly relates to the power consumption of the free-FM switching process, accounting for \(\theta_{SH}\), \(\rho\), and the thickness (\(t\)) of the HM. The \(\epsilon_{SHE}\) defined in this work can be used to compare the power consumption of different SHE layers for free-FM switching/rotation via spin-orbit torque.
This paper is organized as follows. In Sec. II, we give the details on the design of the ReLU circuit and introduce our
spin Hall power factor. In Sec. III, we describe our simulation platform where we couple Keldysh non-equilibrium Green's function formalism with Landau-Lifshitz-Gilbert-Slonzewski equations and the HSPICE circuit simulator to account for diverse physics of spin-transport and the CMOS elements in our proposed ReLU design. In Sec. IV, we present the results of our ReLU design and the performance of the proposed circuit against the thermal stability factor. Here we also show the result of our investigation into various heavy metals. We show that our design is resistant to thermal noise and that there exists a non-trivial power-error trade-off that leads to the energy-efficient circuit design using unstable free-ferromagnets. In Sec. V we explore the potential application of our ReLU design using convolutional neural networks and show that our network achieves practically the same classification accuracy as ideal ReLU implementation. We conclude in Sec. VI.
## II Design
### _ReLU circuit_
MTJs are traditionally used as binary memories; here we show that an MTJ can be designed to have linear functionality. This is achieved when a spin current whose polarization is orthogonal to the anisotropy direction is applied to the free FM. In this work, we inject a \(\hat{y}\)-polarized spin current into the perpendicular magnetic anisotropy (PMA) FM (CoFeB) in order to produce a linear rotation in the \(\hat{x}\)-component of the magnetization, as shown in Fig. 1a.
The linear rotation in magnetization is translated into a change in resistance via the TMR effect of the MTJ. The injected current (\(I_{bias}\)) converts the resistance change into a voltage change across the MTJ, which drives the CMOS inverter to obtain the ReLU functionality, as shown in Fig. 1b. The CMOS inverter operates in its linear region to invert and amplify the voltage change across the MTJ, which yields the ReLU output shown in Fig. 3b. The current source \(I_{bias}\) can be replaced with a resistor to obtain the ReLU functionality at the cost of a decreased output (\(V_{out}\)) swing.
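To make the signal chain concrete, the snippet below is a minimal behavioural sketch of the three stages (linear rotation \(\rightarrow\) TMR resistance \(\rightarrow\) biased inverter). All numerical values (`R_P`, `R_AP`, `I_BIAS`, `GAIN`, `V_REF`) are illustrative assumptions, not device parameters reported in this work.

```python
import numpy as np

# Illustrative parameters -- assumptions, not values from this work.
R_P, R_AP = 2.0e3, 4.0e3   # parallel / antiparallel MTJ resistance (ohm)
I_BIAS = 50e-6             # bias current from the current source (A)
GAIN, V_REF = -10.0, 0.2   # linearized inverter gain and reference voltage (V)

def m_x(i_norm):
    """Linear magnetization rotation: none for i <= 0, saturation at i >= 1."""
    return np.clip(i_norm, 0.0, 1.0)

def v_mtj(i_norm):
    """TMR maps m_x linearly to resistance; I_bias converts it to a voltage."""
    r = R_AP - (R_AP - R_P) * m_x(i_norm)
    return I_BIAS * r

def relu_output(i_norm):
    """The inverter, biased in its linear region, inverts and amplifies v_mtj."""
    return np.maximum(0.0, GAIN * (v_mtj(i_norm) - V_REF))

print(relu_output(np.array([-0.5, 0.0, 0.5, 1.0])))  # -> [0.  0.  0.5 1. ]
```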
### _Spin Hall power factor_
SHE-driven MTJs are being explored extensively for low-energy spintronic applications. The charge-to-spin conversion via the spin Hall effect exhibited by the heavy metal is utilized to switch the free-FM layer in SHE-driven MTJs [29, 35]. The power consumption of switching can be minimized using a large charge-to-spin conversion factor (Eq. 2). There have been consistent efforts [28, 29, 30, 31, 32, 33, 34] to increase the spin Hall angle (\(\theta\)) through heavy-metal engineering for an enhanced charge-to-spin conversion factor, but an increase in \(\theta\) usually comes with an increase in the resistivity of the heavy metal, resulting in large power consumption. Traditionally, the spin Hall conductivity [34], \(\sigma_{SH}=\theta/\rho\), has been used to characterize the SHE; although it includes the effects of resistivity and spin Hall angle, it lacks an analytical link to the power consumption of the SHE layer. The spin Hall angle and resistivity also depend on the thickness of the heavy metal [31, 32], compelling us to define a parameter that can unequivocally benchmark various heavy metals for SHE switching power consumption.
The charge to spin conversion [24, 25, 36] of the SHE layer and the polarization and direction [24, 25, 26, 27] of the generated spin current are given by
\[\theta_{SH}=\frac{J_{s}}{J_{c}} \tag{1}\]
\[I_{s}=\theta_{SH}\frac{L}{t}I_{c} \tag{2}\]
\[\hat{I}_{s}=\hat{I}_{c}\times\sigma \tag{3}\]
Here, \(I_{s}\) is the spin current generated, \(\theta_{SH}\) is the spin Hall angle of the heavy metal, L is the length of the heavy metal, t is the thickness of the heavy metal, and \(I_{c}\) is the charge current injected. \(\hat{I}_{s}\) is the direction of generated spin current flow, \(\hat{I}_{c}\) is the direction of input charge current, and \(\sigma\) is the polarization of the generated spin current. From Eq. 3, injection of charge current to heavy metal in \(\hat{x}-\) direction results in y-polarized spin current injection to the free-FM (z-direction) on top of the HM layer.
The resistance (R) of the heavy metal is given by
\[R=\rho\frac{L}{Wt} \tag{4}\]
Here, \(\rho\) and W are the resistivity and width of the heavy metal respectively.
The power consumed by the heavy metal is given by \(P_{HM}=I_{c}^{2}R\). Here, \(I_{c}\) can be written as
\[I_{c}=\frac{I_{s}}{\sqrt{RV/d}}\frac{\sqrt{\rho t}}{\theta_{SH}} \tag{5}\]
where V, d are the volume and thickness of the free-FM layer. The power consumed by the heavy metal is given by
\[P_{HM}=\frac{I_{s}^{2}}{V/d}\frac{\rho t}{\theta_{SH}^{2}} \tag{6}\]
Here the \(I_{s}\) represents the spin current needed for switching the ferromagnet. For a given free-FM layer \(I_{s}\), V and d are
Fig. 1: Design schematics. (a) The MTJ device is stacked on top of the heavy metal layer. Charge current is injected into the HM layer along \(\hat{x}\), which injects \(\hat{y}\)-polarized spin current into the free-FM of the MTJ. (b). Circuit design for ReLU functionality. The current source \(I_{bias}\) converts the change in resistance to a change in voltage that is connected to the CMOS inverter.
constants, so from the above derivation we define a spin Hall power factor \(\epsilon_{SHE}=\frac{\sqrt{\rho t}}{\theta_{SH}}\) that can be used to compare different heavy metals. The material with the lowest \(\epsilon_{SHE}\) consumes the least power.
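As a quick numerical illustration, the snippet below evaluates \(\epsilon_{SHE}\) and the heavy-metal power of Eq. (6); the material and geometry parameters are placeholder assumptions, not the measured values of Tab. II.

```python
import numpy as np

def epsilon_she(rho, t, theta):
    """Spin Hall power factor: sqrt(rho * t) / theta."""
    return np.sqrt(rho * t) / theta

def p_hm(i_s, rho, t, theta, V, d):
    """Heavy-metal power for a required spin current i_s (Eq. 6)."""
    return (i_s**2 / (V / d)) * rho * t / theta**2

# Hypothetical heavy metals (rho in ohm*m, t in m, theta dimensionless).
hm_a = dict(rho=20e-8, t=4e-9, theta=0.07)    # Pt-like numbers, assumed
hm_b = dict(rho=200e-8, t=6e-9, theta=0.30)   # beta-W-like numbers, assumed

for name, hm in [("A", hm_a), ("B", hm_b)]:
    print(name, "eps_SHE =", epsilon_she(**hm))

# Power for an assumed spin current and free-FM geometry (V/d = FM area):
print("P_HM(A) =", p_hm(400e-6, **hm_a, V=7.5e-24, d=1.5e-9), "W")
# P_HM is proportional to eps_SHE**2, so the metal with the smaller
# eps_SHE dissipates less power for the same required I_s.
```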
In our proposed circuit, \(440\,\mu A\) is required to achieve an output voltage of \(0.35\,V\) for a free FM with a thermal stability factor of 45. The proposed factor is not limited to this work and can also be used for SHE-driven FM switching mechanisms. Some HMs such as Pt [28] also affect the damping factor (\(\alpha\)) of the free layer, leading to an increase in the required spin current (\(I_{s}\)). In such cases, \(\epsilon_{SHE}\) needs to be multiplied by the change in the damping factor (\(\frac{\alpha_{old}}{\alpha_{new}}\)). In our proposed ReLU circuit, the increased \(\alpha\) has a negligible effect on the spin current required for the linear rotation.
## III Simulation Methods
Figure 2 shows a schematic overview of the hybrid spintronics-CMOS simulation framework. The MTJ and current source (\(I_{bias}\)) parameters are given to the NEGF simulator as shown in Fig. 2. The NEGF simulator is self-consistently coupled with \(I_{bias}\), since the device resistance depends on the MTJ angle and the voltage across the MTJ. The NEGF simulator produces the device resistance vs. MTJ angle relation, which is coupled to the HSPICE circuit simulator via Verilog-A. The circuit simulator simulates the entire circuit, including the magnetization dynamics [38, 39], and models the CMOS inverter pair based on a 16 nm predictive technology model (PTM) [40].
### _Quantum Transport_
We use the Keldysh NEGF technique [41, 42, 43, 17] to describe the transport through MTJ that has MgO sandwiched between free and fixed CoFeB FM layers. The NEGF formalism is given by
\[G(E)=[EI-H-\Sigma]^{-1},\tag{7}\]
\[A(E)=i[G-G^{\dagger}],\tag{8}\]
\[\Gamma_{T,B}(E)=i([\Sigma_{T,B}(E)]-[\Sigma_{T,B}(E)]^{\dagger}),\tag{9}\]
\[\Sigma^{in}(E)=[\Gamma_{T}(E)]f_{T}(E)+[\Gamma_{B}(E)]f_{B}(E),\tag{10}\]
\[G^{n}=\int dE\,[G(E)][\Sigma^{in}(E)][G(E)]^{\dagger},\tag{11}\]
\[\Sigma=\Sigma_{T}+\Sigma_{B}.\tag{12}\]
Here \([H]\) is the device Hamiltonian, \([H]=[H_{0}]+[U]\), comprising device tight-binding matrix \([H_{0}]\) and the Coulomb charging matrix \([U]\), and \([I]\) is the identity matrix, \(E\) is the energy variable. The charging matrix \([U]\) is calculated self-consistently using Poisson's equation. \(G(E)\) is the Green's function matrix, \(\Gamma_{T,B},f_{T,B},\Sigma_{T,B}\) are the broadening matrix, the Fermi function, and the self-energy matrices for the top (fixed) and bottom (free) FM layers respectively. \(A\) is the spectral function, \(\Sigma^{in}\) is the in-scattering function, and \(G^{n}\) is the correlation matrix.
The quantum transport segment culminates in the calculation of the current operator (\(I_{op}\)), which represents the charge current between two lattice points \(i\) and \(i+1\) and is given by
\[I_{op}=\frac{i}{\hbar}(H_{i,i+1}G_{i+1,i}^{n}-H_{i+1,i}G_{i,i+1}^{n}) \tag{13}\]
The current operator \(I_{op}\) is a \(2\times 2\) matrix in the spin space of the lattice point. Using this, the charge current can be evaluated as
\[I=q\int Real[Trace(\hat{I}_{op})]dE, \tag{14}\]
where q is the quantum of electronic charge.
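As a toy illustration of Eqs. (7)-(9), the snippet below evaluates the Green's function and the resulting Fisher-Lee transmission for a single-band 1-D tight-binding chain; the actual simulation uses a spin-resolved MTJ Hamiltonian with a self-consistently computed charging matrix \(U\), which is omitted here.

```python
import numpy as np

# Single-band 1-D chain: onsite 0, hopping -t, band E in [-2t, 2t].
N, t, E = 8, 1.0, 0.5                                   # sites, hopping, energy
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))             # device Hamiltonian

def surface_sigma(E, t):
    """Retarded surface self-energy of a semi-infinite 1-D lead (in-band E)."""
    return E / 2 - 1j * t * np.sqrt(1 - (E / (2 * t))**2)

Sig_T = np.zeros((N, N), complex); Sig_T[0, 0] = surface_sigma(E, t)
Sig_B = np.zeros((N, N), complex); Sig_B[-1, -1] = surface_sigma(E, t)

G = np.linalg.inv(E * np.eye(N) - H - Sig_T - Sig_B)    # Eq. (7)
Gam_T = 1j * (Sig_T - Sig_T.conj().T)                   # Eq. (9), top lead
Gam_B = 1j * (Sig_B - Sig_B.conj().T)                   # Eq. (9), bottom lead
T = np.trace(Gam_T @ G @ Gam_B @ G.conj().T).real       # Fisher-Lee transmission
print(T)  # -> 1.0 for a perfect chain at an in-band energy
```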
### _Current injected MTJ_
In the ReLU circuit, we employ the current source \(I_{bias}\) to translate the change in MTJ resistance into a voltage variation. The MTJ resistance depends not just on the free-FM magnetization but also on the voltage across the MTJ [13]. The
Fig. 2: Hybrid NEGF-CMOS simulation platform setup. (a) The NEGF is self-consistently coupled with \(I_{bias}\) to calculate the resistance of the MTJ. (b) The MTJ resistance is coupled to the HSPICE circuit simulator using VerilogA. The LLGS equation is coupled with other parts of the circuit to calculate free-FM magnetization.
MTJ voltage itself depends on the resistance and the current source \(I_{bias}\), so the device resistance and voltage must be found self-consistently. Figure 2(a) shows the algorithm for this self-consistent calculation of the resistance while accounting for its voltage and \(I_{bias}\) dependence. The self-consistent loop is run until the MTJ current equals the applied \(I_{bias}\).
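The sketch below illustrates the fixed-point structure of this loop; `r_of` is a made-up smooth resistance model standing in for the NEGF-computed resistance table, not the device model of this work.

```python
import numpy as np

# Self-consistent loop of Fig. 2(a): R depends on V, and V = I_bias * R.
def r_of(theta, v):
    r0 = 2e3 + 1e3 * (1 - np.cos(theta)) / 2   # angle dependence (assumed)
    return r0 / (1 + 0.3 * abs(v))             # voltage roll-off of TMR (assumed)

def solve_operating_point(theta, i_bias, tol=1e-9, max_iter=100):
    v = 0.0
    for _ in range(max_iter):
        r = r_of(theta, v)
        v_new = i_bias * r                     # Ohm's law across the MTJ
        if abs(v_new - v) < tol:               # converged: I(MTJ) == I_bias
            return r, v_new
        v = v_new
    raise RuntimeError("self-consistent loop did not converge")

print(solve_operating_point(np.pi / 2, 100e-6))  # (resistance, voltage)
```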
### _Magnetization dynamics_
The LLGS equation [44, 45] is used to describe the magnetization dynamics of the free-FM. The LLGS equation is given by
\[(\frac{1+\alpha^{2}}{\gamma H_{k}})\frac{d\hat{m}}{dt}=-\hat{m} \times\vec{h}_{eff}-\alpha\hat{m}\times\hat{m}\times\vec{h}_{eff}\\ -\hat{m}\times\hat{m}\times\vec{i}_{s}+\alpha\hat{m}\times\vec{i}_ {s}, \tag{15}\]
where \(\hat{m}\) is the unit vector along the magnetization of the free magnet, \(\gamma\) is the gyromagnetic ratio, \(\alpha\) is the Gilbert damping parameter, \(\vec{h}_{eff}=\frac{\vec{H}_{eff}}{H_{k}}\) is the reduced effective field, and \(\vec{i}_{s}=\frac{\hbar\vec{I}_{s}}{2qM_{s}VH_{k}}\) is the normalized spin current. The term \(\vec{H}_{eff}\) includes the contributions of the anisotropy field (\(H_{k}\)) and the thermal noise (\(H_{th}\)). The thermal noise is given by \(\langle H_{th}^{2}\rangle=\frac{2\alpha k_{B}T}{\gamma M_{s}V}\), where \(\langle\cdot\rangle\) denotes the ensemble average [46].
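As a minimal illustration of Eq. (15), the sketch below integrates the LLGS equation with explicit Euler steps in reduced units, for a \(\hat{y}\)-polarized spin current and a PMA easy axis along \(\hat{z}\); all parameter values are assumptions for illustration, and thermal noise is omitted.

```python
import numpy as np

alpha, gamma_Hk = 0.01, 1.0          # damping; gamma*H_k sets the time scale
i_s = 0.3 * np.array([0.0, 1.0, 0.0])  # normalized y-polarized spin current
m = np.array([0.0, 0.0, 1.0])        # initial magnetization along +z
dt, steps = 1e-3, 20000

for _ in range(steps):
    h_eff = np.array([0.0, 0.0, m[2]])               # uniaxial PMA field, m_z * z-hat
    dm = (-np.cross(m, h_eff)
          - alpha * np.cross(m, np.cross(m, h_eff))  # Gilbert damping torque
          - np.cross(m, np.cross(m, i_s))            # anti-damping SOT
          + alpha * np.cross(m, i_s)) * (gamma_Hk / (1 + alpha**2))
    m = m + dt * dm
    m /= np.linalg.norm(m)           # renormalize |m| = 1

print(m)  # m relaxes toward a tilted steady state; the tilt grows with |i_s|
```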
## IV Results
We show in Fig. 3a the linear rotation of the magnetization of the free FM layer of the MTJ. The linear rotation is achieved by injecting a \(\hat{y}\)-polarized spin current into the free FM layer; the spin current is generated by applying a charge current to the HM layer along \(\hat{x}\). The TMR effect of the MTJ translates the linear magnetization change into a linear change in resistance, as also shown in Fig. 3a. The resulting linear variation of the MTJ resistance is employed in the circuit design (Fig. 1) to realize the ReLU output shown in Fig. 3b. The output closely emulates the ReLU activation function for normalized inputs of less than 1. The parameters used in this design are given in Tab. I.
We evaluate the role of different HMs in our proposed ReLU design. The ReLU circuit's performance is assessed against the thermal stability factor (\(\Delta=\frac{H_{k}M_{s}V}{2k_{B}T}\)) of the free FM layer. The \(\Delta\) factor not only captures the stability of the free FM against thermal noise but also determines the spin current required for MTJ switching. We vary the \(\Delta\) factor of the free FM by changing the anisotropy field while keeping the dimensions of the free FM fixed. A decrease in the \(\Delta\) factor reduces the spin current needed for linear rotation, diminishing the HM's input charge current and power consumption.
The heavy metals used in this paper for the ReLU design are given in Tab. II, along with their respective parameters such as the spin Hall angle \(\theta\), resistivity \(\rho\), and thickness \(t\), taken from experimental works [28, 29, 30, 31, 32, 33, 34]. Using these parameters, the length \(L\) and width \(W\) are calculated such that the HM offers a resistance of 50 \(\Omega\). The spin Hall power factor (\(\epsilon_{SHE}\)) and the normalizing current \(I_{0}\) (the current required to achieve maximum magnetization rotation with a ReLU output of 0.35 V) are also shown in the table. It can be inferred from Tab. II that the current \(I_{0}\) tracks \(\epsilon_{SHE}\). We can see from Fig. 4a and Tab. II that the average static power consumption of
Fig. 3: (a) The free-FM magnetization(\(m_{x}\)) and the MTJ resistance as the normalized input (\(I_{in}/I_{0}\)) is varied with an initial magnetization oriented in the +z direction. \(I_{in}\) is applied along \(\hat{x}\). (b) The output of the ReLU circuit with varied normalized input. \(I_{0}=380\mu A\) and \(\Delta=45\) with \(Au_{0.25}Pt_{0.75}\) heavy metal.
Fig. 4: (a) Power consumption of the ReLU circuit for different HMs with varying \(\Delta\), with the y-axis in log scale. (b) Power consumption of the entire ReLU circuit and of the HM layer alone, using \(Au_{0.25}Pt_{0.75}\).
the ReLU circuit for different HMs also follows \(\epsilon_{SHE}\). The static power consumption decreases with \(\Delta\), as a lower \(\Delta\) reduces the spin current required for the ReLU functionality. We show in Fig. 4b the contribution of the HM power consumption to that of the entire ReLU circuit with \(Au_{0.25}Pt_{0.75}\) as the SHE layer. Along with the HM, the MTJ consumes a fixed amount of power for translating the magnetization changes into voltage changes, and is hence not affected by changes in \(\Delta\). The MTJ sensing power dominates the total ReLU power at lower values of \(\Delta\).
We show in Fig. 5 the static power consumption and the average absolute percentage error of the ReLU circuit for different HMs. The average absolute error increases as \(\Delta\) is decreased, since the contribution of the thermal noise (\(\langle H_{th}^{2}\rangle=\frac{2\alpha k_{B}T}{\gamma M_{s}V}\)) to the effective magnetic field (\(H_{eff}\)) increases. The \(H_{eff}\) includes the effects of both the thermal noise and the anisotropy field: as \(\Delta\) is reduced, \(\langle H_{th}^{2}\rangle\) stays constant but \(H_{k}\) decreases. The effect of the thermal noise on the output is estimated using 100 Monte Carlo [47] simulations with a normalized input (\(I_{in}/I_{0}\)) of 0.5. All the HMs show a decrease in power consumption and an increase in error as \(\Delta\) is reduced, presenting an opportunity to optimize the circuit to consume less power while still obtaining reliable results. For \(Au_{0.25}Pt_{0.75}\), the static power consumption is 1.37 \(\mu W\) while the absolute error percentage is \(2.98\%\) for \(\Delta=20\), as shown in Fig. 5d. These results suggest that an unstable (\(\Delta<40\)) free-FM-based MTJ can be used to obtain reliable results from the ReLU circuit.
## V Application: Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are a class of artificial neural networks that produce excellent results on problems involving image data. Figure 6 shows the architecture of the CNN used for classifying the MNIST and fashion MNIST datasets. Here we use our developed ReLU circuit instead of the ideal ReLU in the feature extraction stage of the CNN to train the network.
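A minimal Keras version of this experiment is sketched below. The `circuit_relu` surrogate (a ReLU saturating at `V_MAX`) only mimics the qualitative shape of Fig. 3b; the paper trains with the simulated circuit response itself, and `V_MAX` is an assumed placeholder.

```python
import tensorflow as tf

V_MAX = 1.0  # assumed saturation level mimicking the finite output swing

def circuit_relu(x):
    # Surrogate for the MTJ ReLU of Fig. 3b: linear for 0 < x < 1, clipped above.
    return tf.minimum(tf.nn.relu(x), V_MAX)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation=circuit_relu,
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0
model.fit(x_tr, y_tr, epochs=1, validation_data=(x_te, y_te))
```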
Fig. 5: Static power consumption of the ReLU circuit averaged over the entire input range, and the average absolute percentage error for a normalized input of 0.5 with (a) Pt, (b) \(\beta-\)Ta, (c) \(\alpha+\beta-\)W, (d) \(Au_{0.25}Pt_{0.75}\), (e) \(\beta-\)W, (f) W(O), and (g) \(W_{0.88}Ta_{0.12}\) as SHE layer.
Fig. 6: Schematic of CNN architecture used for training in TensorFlow for the image classification task. The architecture has a \(3\times 3\) convolution layer followed by the ReLU activation function, max-pooling layers, and finally the fully connected layer.
We show in Figs. 7a \(\&\) 7b the accuracy and loss of the network during training and testing for the MNIST and fashion MNIST datasets, respectively. We observe an accuracy of \(98.76\%\) on the test data for the MNIST dataset and an accuracy of \(91.3\%\) for the fashion MNIST dataset. The accuracies for the full software implementation (with ideal ReLU) are \(98.86\%\) and \(90.41\%\) for the MNIST and fashion MNIST datasets, respectively. The accuracies using the non-ideal ReLU (our developed ReLU circuit) are close to those of the ideal ReLU, which demonstrates the robustness of our ReLU circuit for neuromorphic applications.
Figure 8 shows the energy consumption of the ReLU implementation in our CNN architecture for different SHE layers, along with their spin Hall power factors; here the free FMs have a thermal stability factor of \(20\). We see that the energy consumption of the ReLU implementation follows the spin Hall power factor. Our results suggest that \(75\,pJ\) of energy is consumed by the ReLU implementation when testing a single sample in our CNN architecture with \(Au_{0.25}Pt_{0.75}\) as the heavy metal and a free FM with \(\Delta=20\).
## VI Conclusion
In this paper, we showcased the linear rotation of the magnetization of a free FM and proposed a circuit design that effectively emulates the ReLU function, a fundamental component of deep neural networks. We introduced a new metric, the spin Hall power factor, to unequivocally quantify the power consumption of SHE layers. Our simulation results not only confirm the validity of this factor but also demonstrate its potential to significantly impact the design of SHE-driven devices and circuits. We deployed our simulation framework, which combines the current-injected MTJ with NEGF, LLGS, and the HSPICE circuit simulator, enabling us to design and analyze the proposed ReLU circuit with varying HMs.
We demonstrated the existence of a non-trivial power-error trade-off that enables the use of an unstable free FM for energy-efficient ReLU design. We showed that the most energy-efficient realization of the proposed ReLU circuit consumes 1.37 \(\mu W\) of static power with a low error rate of 2.98% using the HM \(Au_{0.25}Pt_{0.75}\) at \(\Delta=20\). Furthermore, we demonstrated the potential of our ReLU design in CNNs, producing classification accuracies close to the software ReLU implementation with an energy consumption of 75 \(pJ\) per sample.
## Acknowledgements
The author AS acknowledges the support of ISIRD phase-1 project of IIT Ropar. The author BM wishes to acknowledge the Science and Engineering Board (SERB), Government of India, for funding under the MATRICS grant (Grant No. MTR/2021/000388).
## Conflict of Interest
The authors have no conflicts to disclose.
## Data availability
Data is available on request from the authors.
|
2302.09479 | Delving into the Adversarial Robustness of Federated Learning | In Federated Learning (FL), models are as fragile as centrally trained models
against adversarial examples. However, the adversarial robustness of federated
learning remains largely unexplored. This paper casts light on the challenge of
adversarial robustness of federated learning. To facilitate a better
understanding of the adversarial vulnerability of the existing FL methods, we
conduct comprehensive robustness evaluations on various attacks and adversarial
training methods. Moreover, we reveal the negative impacts induced by directly
adopting adversarial training in FL, which seriously hurts the test accuracy,
especially in non-IID settings. In this work, we propose a novel algorithm
called Decision Boundary based Federated Adversarial Training (DBFAT), which
consists of two components (local re-weighting and global regularization) to
improve both accuracy and robustness of FL systems. Extensive experiments on
multiple datasets demonstrate that DBFAT consistently outperforms other
baselines under both IID and non-IID settings. | Jie Zhang, Bo Li, Chen Chen, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chao Wu | 2023-02-19T04:54:25Z | http://arxiv.org/abs/2302.09479v1 | # Delving into the Adversarial Robustness of Federated Learning
###### Abstract
In Federated Learning (FL), models are as fragile as centrally trained models against adversarial examples. However, the adversarial robustness of federated learning remains largely unexplored. This paper casts light on the challenge of adversarial robustness of federated learning. To facilitate a better understanding of the adversarial vulnerability of the existing FL methods, we conduct comprehensive robustness evaluations on various attacks and adversarial training methods. Moreover, we reveal the negative impacts induced by directly adopting adversarial training in FL, which seriously hurts the test accuracy, especially in non-IID settings. In this work, we propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components (local re-weighting and global regularization) to improve both **accuracy** and **robustness** of FL systems. Extensive experiments on multiple datasets demonstrate that DBFAT consistently outperforms other baselines under both IID and non-IID settings.
## Introduction
Nowadays, end devices are generating massive amounts of potentially sensitive user data, raising practical concerns over security and privacy. Federated Learning (FL) [14] emerges as a privacy-aware learning paradigm that allows multiple clients to collaboratively train neural networks without revealing their raw data. Recently, FL has attracted increasing attention from different areas, including medical image analysis [13, 15], recommender systems [11, 14], natural language processing [17, 18], etc.
Prior studies have demonstrated that neural networks are vulnerable to evasion attacks by adversarial examples [11] during inference time. The goal of inference-time adversarial attack [11, 12, 13] is to damage the global model by adding a carefully generated imperceptible perturbation on the test examples. As shown in Table 1, federated models are as fragile to adversarial examples as centrally trained models (i.e. zero accuracy under PGD-40 attack [1]). Hence, it is also important to consider how to defend against adversarial attacks in federated learning.
There are several works that aim to deal with adversarial attacks in FL [15, 18], i.e., federated adversarial training (FAT) [16, 17, 18, 19]. However, these works propose to conduct adversarial training (AT) on a proportion of clients while conducting plain training on the remaining clients. [18] investigated the impact of local training rounds in FAT. Nevertheless, these methods all ignore the issue that the clean accuracy of federated adversarial training is very low.
To further show the problems of federated adversarial training, we first begin with a comparison between plainly trained models and AT-trained [1] models in both the IID (Independent and Identically Distributed) and non-IID FL settings, measured by the clean accuracy \(A_{cln}\) and robust accuracy \(A_{rob}\), respectively. We show the test accuracy of plain training and adversarial training (AT) on the CIFAR10 dataset under both IID and non-IID FL settings in Fig. 1 (left sub-figure). We summarize some valuable observations as follows: 1) Compared with plainly trained models, AT-trained models achieve a lower accuracy, which indicates that directly adopting adversarial training in FL can hurt \(A_{cln}\); 2) \(A_{cln}\) drops heavily for both plainly trained and AT-trained models under the non-IID distribution, which is exactly the challenge that typical federated learning with heterogeneous data encounters [19]; 3) The performance of AT-trained models with a non-IID data distribution decreases significantly compared with the IID data distribution. Motivated by these observations, we focus on improving both the adversarial robustness and the clean accuracy of adversarial training in FL, i.e., we aim to increase \(A_{cln}\) while keeping \(A_{rob}\) as high as possible.
To achieve this goal, in this paper, we investigate the impact of decision boundary, which can greatly influence the performance of the model in FAT. Specifically, 1) we apply adversarial training with a re-weighting strategy in local update to get a better \(A_{rob}\). Our method takes the limited data of each client into account, those samples that are close to/far from the decision boundary are assigned larger/smaller weight. 2) Moreover, since the global model in FL has a
more accurate decision boundary through model aggregation, we take advantage of the logits from the global model and introduce a new regularization term to increase \(A_{cln}\). This regularization term aims to alleviate the accuracy reduction across distributed clients.
We conclude our major contributions as follows:
* We conduct systematic studies on the adversarial robustness of FL, and provide valuable observations from extensive experiments.
* We reveal the negative impacts of adopting adversarial training in FL, and then propose an effective algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which utilizes local re-weighting and global regularization to improve both the accuracy and robustness of FL systems.
* Extensive experiments on multiple datasets demonstrate that our proposed DBFAT consistently outperforms other baselines under both IID and non-IID settings. We present the performance of our method in Fig. 1 (right sub-figure), which indicates the improvement in both robustness and accuracy of adversarial training in FL.
## Related Works
**Federated Learning.** Following the success of DNNs in various tasks [10, 11, 2, 23], FL has attracted increasing attention. A recent survey has pointed out that existing FL systems are vulnerable to various attacks that aim to compromise either data privacy or system robustness [13]. In particular, robustness attacks can be broadly classified into training-time attacks (data poisoning and model poisoning) and inference-time attacks (evasion attacks, i.e., using adversarial examples to attack the global model during the inference phase). In FL, the architectural design, distributed nature, and data constraints can introduce new threats and failures [15].
**Adversarial Attacks.** White-box attacks have access to the full details of the threat model, including its parameters and architecture. Goodfellow et al. [1] introduced the Fast Gradient Sign Method (FGSM) to generate adversarial examples, which uses a single-step first-order approximation to perform gradient ascent. Kurakin et al. [11] iteratively applied FGSM with a small step size to develop a significantly stronger multi-step variant, called Iterative FGSM (I-FGSM). Based on these findings, more powerful attacks have been proposed in recent years, including MIM [14], PGD [13], CW [12], and AA [1].
**Adversarial Training.** Adversarial training has been one of the most effective defense strategies against adversarial attacks. Madry et al. [13] formulated adversarial training as a min-max problem using empirical risk minimization under the PGD attack. Kannan et al. [11] presented adversarial logit pairing (ALP), a method that encourages logits for pairs of
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline Type & Metric & MNIST & FMNIST & ImageNet-12 & CIFAR10 & CIFAR100 & Tiny-ImageNet \\ \hline \multirow{2}{*}{Centralized} & \(A_{cln}\) & 99.42 & 92.47 & 78.96 & 94.26 & 86.93 & 57.93 \\ & \(A_{rob}\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \multirow{2}{*}{Federated} & \(A_{cln}\) & 99.01 & 88.51 & 71.65 & 85.81 & 81.28 & 49.79 \\ & \(A_{rob}\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: The accuracy (%) is tested under the PGD-40 attack [13]. For MNIST, FMNIST, CIFAR10, ImageNet-12, CIFAR100, and Tiny-ImageNet, the perturbation bounds are \(\{0.3,32/255,0.031,0.031,0.031,0.031\}\), respectively. \(A_{cln}\) and \(A_{rob}\) refer to clean accuracy and robust accuracy.
Figure 1: **Left:** Test accuracy reduces for plainly trained model and adversarially trained model under non-IID data. Meanwhile, adversarial training hurts the performance. **Right:** Evaluations on CIFAR10 for both accuracy and robustness, including several state-of-the-art defense methods combined with FL. Our method outperforms existing baselines on both metric dimensions.
examples to be similar, to improve robust accuracy. To quantify the trade-off between accuracy and robustness, Zhang et al. [22] introduced a TRADES loss to achieve a tight upper bound on the gap between clean and robust error. Based on the margin theory and soft-labeled data augmentation, Ding et al. [20] proposed Max-Margin Adversarial (MMA) training and Lee et al. [14] introduced Adversarial Vertex mixup (AVmixup).
**Federated Adversarial Training.** In terms of adversarial robustness, Zizzo et al. [22] investigated the effectiveness of the federated adversarial training protocol for idealized federated settings, and showed the performance of their models in a traditional centralized setting and a distributed FL scenario. Zhou et al. [20] decomposed the aggregation error of the central server into bias and variance. However, all these methods sacrifice clean accuracy (compared to plainly trained models) to gain robustness. In addition, certified defense [14] against adversarial examples in FL is another interesting direction, which we leave to future work.
## Adversarial Robustness of FL
In this section, we briefly define the goal of federated adversarial training. Then we conduct a systematic study on some popular federated learning algorithms with the combination of various adversarial training methods and evaluate their robustness under several attacks. Besides, we further reveal the challenges of adversarial training in non-IID FL.
### Problem Definition
In typical federated learning, training data are distributed across all \(K\) clients, and a central server manages model aggregation and communication with the clients. In general, federated learning attempts to minimize the following objective:
\[\min_{w}f(w)=\sum_{k=1}^{K}\frac{n_{k}}{n}F_{k}(w). \tag{1}\]
Here, the global objective is a sum of local objectives weighted by the local data sizes \(n_{k}\), where \(n\) is the total data size across all clients participating in a communication round. Moreover, each local objective measures the empirical risk over a possibly different data distribution \(D_{k}\), which can be expressed as:
\[F_{k}(w):=\mathbb{E}_{x_{k}\sim\mathcal{D}_{k}}\left[f_{k}\left(w;x_{k}\right) \right]. \tag{2}\]
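For intuition, the sketch below shows the server-side step implied by Eq. (1): a FedAvg-style weighted average of client parameters, with clients represented as flat parameter vectors for brevity (an illustrative simplification, not the paper's full training pipeline).

```python
import numpy as np

# FedAvg-style aggregation of Eq. (1): average client updates weighted by
# their local data sizes n_k / n.
def fedavg(client_weights, client_sizes):
    n = sum(client_sizes)
    return sum((n_k / n) * w_k for w_k, n_k in zip(client_weights, client_sizes))

clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 300, 600]
print(fedavg(clients, sizes))   # -> [1.0, 0.8]
```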
Let \(x\) denote the original image, \(x^{adv}\) denote the corresponding adversarial example, and \(\delta\) denote the perturbation added on the original image, then \(x^{adv}=x+\delta\). To generate powerful adversarial examples, we attempt to maximize the loss \(L(x+\delta;w)\), where \(L\) is the loss function for local update.
To improve the robustness of the neural networks, many adversarial defense methods have been proposed. Among them, adversarial training [1] is one of the most prevailing and effective algorithms. Combined with adversarial training, the local objective becomes solving the following min-max optimization problem:
\[F_{k}(w)=\min\mathbb{E}_{x_{k}\sim\mathcal{D}_{k}}\left[\max_{\|x^{adv}-x\|_{ \infty}\leq\delta}L(w,x^{adv},y)\right]. \tag{3}\]
The inner maximization problem aims to find effective adversarial examples that achieve a high loss, while the outer optimization updates local models to minimize training loss.
In this work, we conduct a systematic study on several state-of-the-art FL algorithms, including FedAvg [1], FedProx [10], FedNova [20] and Scaffold [15], and explore their combinations with AT methods to defend against adversarial attacks. We report detailed results in Table 2, where robustness is averaged over four popular attacks: FGSM [12], MIM [13], PGD [16], and CW [10]. Besides, we implement several prevailing adversarial training methods, including PGD_AT [16], TRADES [22], ALP [14], MMA [20] and AVMixup [14]. We observe that no federated adversarial learning algorithm outperforms all the others in all cases. Moreover, the clean accuracy drops heavily under the non-IID distribution. As such, we are motivated to develop a more effective method. Given the similar performance of these FL methods observed in Table 2, we design our method based on FedAvg - a representative algorithm in FL.
### Adversarial Traning with non-IID Data
Federated learning faces the statistical challenge in real-world scenarios. The IID data makes the stochastic gradient as an unbiased estimate of the full gradient [10]. However, the clients are typically highly heterogeneous with various kinds of non-IID settings, such as
\begin{table}
\begin{tabular}{c|cc|cc|cc|cc|cc|cc|cc|cc} \hline \hline Type & \multicolumn{8}{c|}{IID} & \multicolumn{8}{c}{Non-IID} \\ \hline Methods & \multicolumn{2}{c|}{FedAvg} & \multicolumn{2}{c|}{FedProx} & \multicolumn{2}{c|}{FedNova} & \multicolumn{2}{c|}{Scaffold} & \multicolumn{2}{c|}{FedAvg} & \multicolumn{2}{c|}{FedProx} & \multicolumn{2}{c|}{FedNova} & \multicolumn{2}{c}{Scaffold} \\ \hline Performance & \(A_{cln}\) & \(A_{rob}\) & \(A_{cln}\) & \(A_{rob}\) & \(A_{cln}\) & \(A_{rob}\) & \(A_{cln}\) & \(A_{rob}\) & \(A_{cln}\) & \(A_{rob}\) & \(A_{cln}\) & \(A_{rob}\) & \(A_{cln}\) & \(A_{rob}\) & \(A_{cln}\) & \(A_{rob}\) \\ \hline PGD-AT & 57.99 & 31.95 & 58.17 & 32.06 & 58.45 & 31.74 & 56.84 & 29.26 & 46.84 & 26.79 & 48.03 & 27.46 & 46.95 & 26.54 & 42.44 & 27.19 \\ ALP & 62.81 & 31.84 & 62.88 & 31.20 & 62.91 & 31.79 & 60.30 & 29.58 & 56.16 & **28.78** & 55.79 & **29.06** & 55.80 & **29.18** & 48.29 & 26.56 \\ TRADES & 64.94 & **32.93** & 64.29 & 32.97 & 64.46 & 33.29 & 63.14 & **33.58** & 60.94 & 27.06 & 61.05 & 27.94 & 60.34 & 28.78 & 59.53 & 27.78 \\ MMA & 65.14 & 30.29 & 63.65 & 31.29 & **65.27** & 29.31 & 64.28 & 32.98 & 59.69 & 28.64 & 60.17 & 28.09 & 61.03 & 28.47 & 61.53 & 28.13 \\ AVMixup & **66.14** & 32.27 & **65.12** & **33.19** & 65.14 & **33.75** & **65.11** & 33.32 & **61.17** & 28.56 & **61.47** & 28.34 & **62.04** & 28.12 & **61.91** & **28.81** \\ \hline \hline \end{tabular}
\end{table}
Table 2: An empirical study of the adversarial robustness of FL, measured for various combinations of defense methods and FL algorithms. We report the clean accuracy and robust accuracy, respectively. Best results are in bold.
label skewness and feature skewness (Li et al., 2021). According to previous studies (Wang et al., 2020; Karimireddy et al., 2020), the non-IID data settings can degrade the effectiveness of the deployed model.
Similarly, due to the non-IID data, the performance of AT may vary widely across clients. To better understand the challenge of adversarial training with non-IID data, we examine the performance of both clean accuracy and robustness on a randomly selected client and report the results in Fig. 2. Observed from Fig. 2, we can find that: 1) \(A_{cln}\) on the plainly trained model drops from majority classes to minority classes, which is exactly what traditional imbalanced learning attempts to solve; 2) A similar decreasing tendency reasonably occurs in \(A_{rob}\). It is obvious that adopting adversarial training in federated learning with non-IID data is more challenging.
According to the above observations, we conjecture that AT-trained local models with imbalanced data lead to a more biased decision boundary than plainly trained ones. Since adversarial examples need a larger number of epochs to reach near-zero error (Zhang et al., 2021), it is harder to fit adversarial examples than clean data. Moreover, for a local client, imbalanced clean data generates imbalanced adversarial examples, making training more difficult and enlarging the accuracy gap, which reduces both accuracy and robustness. In Fig. 3, we also show the differences between plain training and adversarial training in federated settings. Compared with plainly trained models, the aggregation of adversarially trained models can enlarge the accuracy gap, which results in poor consistency between different clients. To overcome this problem, we propose a novel method that utilizes local re-weighting and global regularization to improve both the accuracy and robustness of FL systems.
## Methodology
The generalization performance of a neural network is closely related to its decision boundary. However, models trained in the federated setting are biased compared with the centrally trained models. This is mainly caused by heterogeneous data and objective inconsistency between clients (Kairouz, 2021). Moreover, a highly skewed data distribution can lead to an extremely biased boundary (Wang et al., 2020). We tackle this problem in two ways: 1) locally, we take full advantage of the limited data on the distributed client; 2) globally, we utilize the information obtained from the global model to alleviate the biases between clients.
Subsequently, we propose a simple yet effective approach called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components. For local training, we re-weight adversarial examples to improve robustness, while for global aggregation, we utilize the global model to regularize the accuracy for a lower boundary error \(A_{bdy}\). We show the training process of DBFAT in the supplementary material and illustrate an example of the decision boundary of our approach in Fig. 4.
### Re-weighting with Limited Data
Adversarial examples have the ability to approximately measure the distances from original inputs to a classifier's decision boundary (Heo et al., 2018), which can be calculated by the least number of steps that iterative attack (e.g. PGD attack (Madry et al., 2017)) needs in order to find its misclassified adversarial variant. To better utilize limited adversarial examples, we attempt to re-weight the adversarial examples to guide adversarial training. For clean examples that
Figure 4: **Left panel: Decision boundary of plainly trained model. Middle panel: Decision boundary of AT-trained model. Right panel: Decision boundary of DBFAT-trained model. We use the dotted line to represent the boundary of the clean model, and solid line to represent the boundary of the robust model. The size of the shape represents the value of the weight. Those samples that are close to/far from boundary are assigned larger/smaller weight. The decision boundary of DBFAT-trained model (see the right sub-figure) can achieve a higher \(A_{rob}\) and meanwhile maintain \(A_{cln}\).**
Figure 3: Plain training and adversarial training under non-IID setting. Compared with plainly trained situation, the aggregation of adversarially trained models can lead to a more biased model which enlarges accuracy gap. Consequently, it results in poor consistency between different clients.
Figure 2: Test accuracy on a randomly selected client.
are close to the decision boundary, we assign larger weights, while examples that are far from the boundary are assigned smaller weights.
In this paper, we use PGD-\(S\) to approximately measure the geometric distance to the decision boundary, where \(S\) denotes the maximum number of iterations. We generate adversarial examples as follows [14]:
\[x^{adv}\leftarrow\Pi_{\mathcal{B}[x,e]}\left(x^{adv}+\alpha\cdot\mathrm{sign}( \nabla_{x^{adv}}\ell(x^{adv},y))\right). \tag{4}\]
Here \(\Pi_{\mathcal{B}[x,\epsilon]}\) is the projection function that projects the adversarial data back into the \(\epsilon\)-ball centered at the natural data, \(\alpha\) is the step size, and \(\epsilon\) is the perturbation bound.
We find the minimum step \(d\) such that after \(d\) steps of PGD, the adversarial variant is misclassified by the network, i.e., \(\arg\max_{c}f^{(c)}(x^{adv})\neq y\), where \(f^{(c)}(x^{adv})\) is the logit of the \(c\)-th label.
In this way, given a mini-batch samples \(\{(x_{i},y_{i})\}_{i=1}^{m}\), then the weight list \(\rho\) can be formulated as :
\[\rho\gets 1-\{\frac{d_{i}}{\sum_{i=1}^{m}d_{i}}\}. \tag{5}\]
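A minimal sketch of this boundary-distance weighting is given below, assuming a `torch` classifier `f` and recording, for each sample, the first PGD step at which it is misclassified; samples never fooled keep \(d_i=S\), an assumption the paper does not spell out. The hyper-parameters mirror the CIFAR10 setting (\(\epsilon=0.031\), \(\alpha=0.007\)).

```python
import torch
import torch.nn.functional as F

def pgd_weights(f, x, y, eps=0.031, alpha=0.007, S=10):
    """Run PGD-S (Eq. 4), record per-sample misclassification step d_i,
    and return the adversarial batch plus the weights rho of Eq. (5)."""
    x_adv = x.clone().detach()
    d = torch.full((x.size(0),), float(S))
    for step in range(1, S + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(f(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
            fooled = f(x_adv).argmax(1) != y
            # Record the first step at which each sample becomes misclassified.
            d = torch.where((d == S) & fooled, torch.full_like(d, step), d)
    rho = 1.0 - d / d.sum()                                   # Eq. (5)
    return x_adv.detach(), rho
```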
### Regularization with Global Model
Early work [16, 10] claims that there exists a trade-off between accuracy and robustness: standard adversarial training can hurt accuracy. To achieve a lower boundary error \(A_{bdy}\), we take advantage of the logits from the global model \(f^{glo}\), which is obtained after aggregation. In particular, in federated learning, the global model carries the information obtained from the averaged parameters of the distributed clients.
Let \(f^{loc}\) denote the adversarially trained model at each local client, \(f^{glo}\) has the most desirable classifier boundary for natural data. Then we can modify the local objective mentioned in Equation 3 as below:
\[\min\underbrace{\ell_{ce}(\rho\cdot f^{loc}(x^{adv}),y)}_{\text{for robustness}}+\beta\cdot\underbrace{\ell_{kl}(f^{loc}(x^{adv}),f^{glo}(x))}_{ \text{for accuracy regularization}}. \tag{6}\]
Where \(\ell_{ce}\) denotes the cross-entropy loss to improve the robustness, and \(\ell_{kl}\) is the KL divergence loss to constrain the logits of global model and local model. Here, \(\ell_{kl}\) appears as an additional regularization term, which is designed to reduce the boundary error \(A_{bdy}=A_{cln}-A_{rob}\). Additionally, \(\rho\) is the weight calculated by Equation 5, \(\beta\) is the parameter to be tuned.
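Putting the two components together, a minimal sketch of the local objective of Eq. (6) could look as follows; `f_glo` is the frozen global model received at the start of the round, and `rho` comes from the PGD-based weighting sketched above.

```python
import torch
import torch.nn.functional as F

def dbfat_loss(f_loc, f_glo, x, x_adv, y, rho, beta=1.0):
    """Boundary-weighted CE on adversarial examples plus a KL term pulling
    f_loc(x_adv) toward the global model's prediction on clean inputs."""
    logits_adv = f_loc(x_adv)
    ce = F.cross_entropy(logits_adv, y, reduction="none")   # per-sample CE
    with torch.no_grad():
        p_glo = F.softmax(f_glo(x), dim=1)                  # global teacher
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1), p_glo,
                  reduction="batchmean")                    # regularization term
    return (rho * ce).mean() + beta * kl
```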
To show the difference between our DBFAT and existing defense methods, we list the loss functions of different adversarial training methods in Table 3.
## Experimental Results
### Experimental Setup
Following previous work on FL [10], we distribute the training data among 100 clients in both IID and non-IID fashion. In each communication round, we randomly select 10 clients and average their model parameters. All experiments are conducted on 8 Tesla V100 GPUs. More details can be found in the supplemental material.
**Datasets.** In this section, we show that DBFAT improves robust generalization while maintaining high accuracy, with extensive experiments on benchmark CV datasets, including MNIST [10], FashionMNIST [12] (FMNIST), CIFAR10 [15], CIFAR100 [16], Tiny-ImageNet [14], and ImageNet-12 [6]. ImageNet-12 is generated via [13] and consists of 12 classes; we resize the original \(224\times 224\times 3\) images to \(64\times 64\times 3\) for fast training.
**Data partitioning.** In the federated learning setup, we evaluate all algorithms on two types of non-IID data partitioning: **Dirichlet sampled data** and **Sharding**. For Dirichlet sampled data, each local client is allocated a proportion of the samples of each label according to a Dirichlet distribution [10]. Specifically, we follow the setting in [21]: for each label \(c\), we sample \(p_{c}\sim\mathrm{Dir}_{J}(0.5)\) and allocate a \(p_{c,j}\) proportion of the whole dataset of label \(c\) to client \(j\). In this setting, some clients may have no examples at all for a subset of classes. For Sharding [10], each client owns data samples from a fixed number of labels. Let \(K\) be the total number of clients and \(q\) the number of labels assigned to each client. We divide the dataset by label into \(K*q\) shards, so the number of samples in each shard is \(\frac{n}{K\cdot q}\). We denote this distribution as shards_\(q\), where \(q\) controls the level of difficulty: a smaller \(q\) yields a more unbalanced partition. An example of these partitioning strategies is shown in Fig. 5, in which we visualize the IID and non-IID distributions (Dirichlet sampled with \(p_{c}\sim\mathrm{Dir}_{J}(0.5)\) and Sharding with shards_5) on five randomly selected clients.
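A minimal sketch of the two partitioners is shown below, assuming only a vector of integer labels; the seed, client count, and label count are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_partition(labels, n_clients, alpha=0.5):
    """For each label c, split its indices across clients with p_c ~ Dir(alpha)."""
    parts = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        p = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for j, chunk in enumerate(np.split(idx, cuts)):
            parts[j].extend(chunk.tolist())
    return parts

def shard_partition(labels, n_clients, q):
    """Sort by label, cut into K*q shards, give q random shards to each client."""
    idx = np.argsort(labels, kind="stable")
    shards = np.array_split(idx, n_clients * q)
    order = rng.permutation(n_clients * q)
    return [np.concatenate([shards[s] for s in order[j * q:(j + 1) * q]])
            for j in range(n_clients)]

labels = rng.integers(0, 10, size=5000)
print([len(p) for p in dirichlet_partition(labels, 5)])  # uneven client sizes
```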
\begin{table}
\begin{tabular}{c c} \hline \hline Defense & Loss Function \\ \hline PGD-AT & \(\mathrm{CE}\left(f\left(x^{adv}\right),y\right)\) \\ ALP & \(\mathrm{CE}\left(f\left(x^{adv}\right),y\right)+\beta\cdot\left\|f\left(x^{adv}\right)-f\left(x\right)\right\|_{2}^{2}\) \\ TRADES & \(\mathrm{CE}\left(f\left(x\right),y\right)+\beta\cdot\mathrm{KL}\left(f\left(x^{adv}\right)\|f\left(x\right)\right)\) \\ MMA & \(\mathrm{CE}\left(f\left(x^{adv}\right),y\right)\cdot\mathbb{1}\left(f(x)=y\right)+\mathrm{CE}\left(f(x),y\right)\cdot\mathbb{1}\left(f(x)\neq y\right)\) \\ AVMixup & \(\mathrm{CE}\left(f\left(x^{adv}\right),y^{adv}\right)\) \\
**DBFAT (ours)** & \(\rho\cdot\mathrm{CE}(f(x^{adv}),y)+\beta\cdot\mathrm{KL}\left(f\left(x^{adv}\right)\|f^{glo}(x)\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Loss functions of the different adversarial training methods.
Figure 5: Visualizations of IID and non-IID distribution (Dirichlet sampled and Sharding) across 5 clients on CIFAR10 dataset. Shards_5 is a type of non-IID setting, in which each client has five categories of data [10]. From left to right: client ID number #1-5.
**MNIST and FMNIST setup.** We use a simple CNN with two convolutional layers followed by two fully connected layers. Following the setting used in [11], for MNIST we set the perturbation bound \(\epsilon=0.3\) and step size \(\alpha=0.01\), and apply adversarial attacks for 20 iterations. For FMNIST, we set the perturbation bound \(\epsilon=32/255\) and step size \(\alpha=0.031\); we adversarially train the network for 10 steps and apply adversarial attacks for 20 iterations. Due to the simplicity of MNIST and FMNIST, we mainly use non-IID data (Sharding), which is harder to train on.
**CIFAR10, CIFAR100, Tiny-ImageNet and ImageNet-12 setup.** We apply a larger CNN architecture and follow the setting used in [1], i.e., we set the perturbation bound \(\epsilon=0.031\) and step size \(\alpha=0.007\). To evaluate robustness, we conduct extensive experiments with various data partitionings.
**Baselines.** For attack methods, we perform five popular attacks, including FGSM [13], MIM [14], PGD [15], CW [16] and AA [17]; we further use Square [1] as a black-box attack. To investigate the effectiveness of existing FL algorithms, we implement FedAvg [15], FedProx [11], FedNova [14] and Scaffold [16]. To defend against adversarial attacks, we implement five of the most prevailing methods: PGD_AT [15], TRADES [14], ALP [16], MMA [14] and AVMixup [18]. We compare the performance of our DBFAT with these defense methods combined with the FL methods.
### Convergence For Local Training
To show the convergence rate of DBFAT, we use the Dirichlet sampled CIFAR10 dataset, where each client owns 500 samples from 5 classes. Fig. 6 (left sub-figure) shows the impact of the number of local epochs \(E\) during adversarial training. For a very small number of epochs (e.g., \(E=2\)), convergence is extremely slow, which may incur more communication rounds. Conversely, a large number of epochs (e.g., \(E=20\)) also leads to slow convergence, as the model may overfit the local data. Considering both communication cost and convergence, we set \(E=5\) in our experiments, which maintains proper communication efficiency and fast convergence.
### Effectiveness of Our Method
We verify the effectiveness of our method compared with several adversarial training techniques on Dirichlet sampled CIFAR10. The evaluation of model robustness is averaged over four attacks using the same setting for a fair comparison, and all defense methods are combined with FedAvg.
To show the differences between DBFAT and the above-mentioned defense methods, we report the training curves on
\begin{table}
\begin{tabular}{c|c|cccccc|cccccc} \hline Type & & \multicolumn{6}{c|}{IID} & \multicolumn{6}{c}{Non-IID} \\ \hline Dataset & Method & Clean & FGSM & MIM & PGD-20 & CW & AA & Clean & FGSM & MIM & PGD-20 & CW & AA \\ \hline \multirow{6}{*}{MNIST} & Plain & 99.01 & 28.35 & 8.65 & 5.29 & 3.84 & 3.02 & 98.45 & 11.78 & 14.06 & 8.44 & 9.51 & 7.45 \\ & PGD\_AT & 98.52 & 76.01 & 60.18 & 54.50 & 55.23 & 50.43 & 97.82 & 67.58 & 52.89 & 48.03 & 47.43 & 43.75 \\ & ALP & 98.46 & 57.37 & 55.61 & 48.74 & 51.17 & 44.25 & 97.92 & 46.49 & 51.01 & 46.41 & 46.24 & 41.95 \\ & TRADES & 97.89 & 76.79 & 63.29 & 58.25 & 57.24 & 53.72 & 92.03 & 48.45 & 51.56 & 47.21 & 45.81 & 42.36 \\ & AVMixup & 98.63 & 61.41 & 53.34 & 42.33 & 46.95 & 37.78 & 97.47 & 56.50 & 51.86 & 46.28 & 44.46 & 41.84 \\ & Ours & **98.86** & **78.06** & **70.97** & **68.39** & **63.09** & **59.39** & **97.95** & **68.54** & **54.18** & **50.33** & **49.12** & **44.32** \\ \hline \multirow{6}{*}{FMNIST} & Plain & 88.50 & 17.89 & 3.55 & 2.57 & 0.40 & 0.17 & 84.60 & 17.86 & 3.25 & 2.93 & 3.05 & 1.40 \\ & PGD\_AT & 76.05 & 68.53 & 65.24 & 65.40 & 64.26 & 60.89 & 72.93 & 60.11 & 54.42 & 54.33 & 52.19 & 49.88 \\ & ALP & 75.99 & 67.31 & 63.66 & 63.79 & 61.55 & 59.19 & 75.34 & 57.67 & 53.37 & 55.11 & 51.12 & 51.04 \\ & TRADES & 78.13 & 59.33 & 52.65 & 52.78 & 51.44 & 48.78 & 74.93 & 56.53 & 44.01 & 44.01 & 31.80 & 39.61 \\ & AVMixup & 79.34 & 61.22 & 54.93 & 54.67 & 49.48 & 50.07 & 72.06 & 56.26 & 49.21 & 49.72 & 47.99 & 45.15 \\ & Ours & **81.49** & **69.23** & **66.22** & **66.24** & **65.71** & **61.49** & **76.19** & **63.11** & **56.45** & **58.31** & **56.96** & **53.91** \\ \hline \multirow{6}{*}{CIFAR10} & Plain & 78.80 & 6.87 & 1.15 & 1.06 & 1.30 & 1.23 & 61.10 & 7.58 & 2.94 & 2.67 & 2.87 & 1.28 \\ & PGD\_AT & 58.75 & 30.62 & 27.23 & 26.11 & 28.47 & 22.09 & 15.27 & 13.27 & 13.00 & 13.00 & 12.99 & 8.63 \\ & ALP & 63.23 & 29.42 & 26.75 & 28.49 & 28.13 & 23.97 & 32.91 & 21.41 & 20.26 & 20.19 & 17.74 & 15.83 \\ & TRADES & 68.58 & 31.53 & 25.92 & 25.49 & 23.07 & 20.89 & 46.30 & 24.81 & 22.20 & 22.05 & 19.59 & 17.85 \\ & AVMixup & 70.28 & 29.51 & 26.22 & 26.34 & 24.07 & 22.25 & 48.23 & 25.29 & 21.42 & 24.25 & 20.25 & 19.43 \\ & Ours & **72.21** & **31.47** & **28.57** & **29.03** & **29.31** & **24.25** & **52.24** & **27.03** & **24.12** & **27.02** & **22.13** & **21.20** \\ \hline \end{tabular}
\end{table}
Table 4: Accuracy and adversarial robustness (%) on MNIST, FMNIST and CIFAR10 under both IID and non-IID distributions. An empirical study of FedAvg combined with several defense methods; more detailed comparisons are reported in the supplementary (Section B). Our method significantly outperforms the other baselines.
Figure 6: **Left: Convergence rate for different local epochs. Right: Training curves of FedAvg combined with different AT methods.**
non-IID CIFAR10 dataset in the right sub-figure of Fig. 6. Fig. 6 confirms that our DBFAT achieves the highest clean accuracy. We speculate that this benefit is due to the regularization term and re-weighting strategy introduced in Equation 6. It is worth mentioning that in the training curves, the model trained with PGD_AT performs very poorly. It indicates that standard AT may not be a suitable choice for adversarial robustness in FL, as it only uses cross-entropy loss with adversarial examples, but ignores the negative impact on clean accuracy. We further report the results on various datasets under both IID and non-IID settings in Table 4, which indicates that DBFAT significantly outperforms other methods in terms of both accuracy and robustness.
**Performance on large datasets.** In Table 5, we show the accuracy and robustness of each method on larger datasets (CIFAR100, Tiny-ImageNet, and ImageNet-12). All results are tested under the PGD-20 attack [1], AutoAttack [10], and the Square attack [1] in non-IID settings. From the results reported in Table 5, we find that our method still outperforms the other baselines in terms of both clean accuracy and robustness. Notably, our method achieves the highest accuracy and robustness of 61.38% and 22.08% under AutoAttack, respectively. This demonstrates that our method can also improve the accuracy and robustness of models on large datasets. |
2302.08767 | Compositionality of planar perfect matchings | We exhibit a strong connection between the matchgate formalism introduced by
Valiant and the ZW-calculus of Coecke and Kissinger. This connection provides a
natural compositional framework for matchgate theory as well as a direct
combinatorial interpretation of the diagrams of ZW-calculus through the perfect
matchings of their underlying graphs.
We identify a precise fragment of ZW-calculus, the planar W-calculus, that we
prove to be complete and universal for matchgates, that are linear maps
satisfying the matchgate identities. Computing scalars of the planar W-calculus
corresponds to counting perfect matchings of planar graphs, and so can be
carried out in polynomial time using the FKT algorithm, making the planar
W-calculus an efficiently simulable fragment of the ZW-calculus, in a similar
way that the Clifford fragment is for ZX-calculus. This work opens new
directions for the investigation of the combinatorial properties of ZW-calculus
as well as the study of perfect matching counting through compositional
diagrammatical techniques. | Titouan Carette, Etienne Moutot, Thomas Perez, Renaud Vilmart | 2023-02-17T09:04:35Z | http://arxiv.org/abs/2302.08767v1 | # Compositionality of planar perfect matchings
###### Abstract
We exhibit a strong connection between the matchgate formalism introduced by Valiant and the ZW-calculus of Coecke and Kissinger. This connection provides a natural compositional framework for matchgate theory as well as a direct combinatorial interpretation of the diagrams of ZW-calculus through the perfect matchings of their underlying graphs.
We identify a precise fragment of ZW-calculus, the planar W-calculus, that we prove to be complete and universal for matchgates, that are linear maps satisfying the matchgate identities. Computing scalars of the planar W-calculus corresponds to counting perfect matchings of planar graphs, and so can be carried out in polynomial time using the FKT algorithm, making the planar W-calculus an efficiently simulable fragment of the ZW-calculus, in a similar way that the Clifford fragment is for ZX-calculus. This work opens new directions for the investigation of the combinatorial properties of ZW-calculus as well as the study of perfect matching counting through compositional diagrammatical techniques.
## 1 Introduction
A quantum computation mapping \(n\) qubits to \(m\) qubits corresponds to an isometric linear map \(\mathbb{C}^{2^{n}}\rightarrow\mathbb{C}^{2^{m}}\). Due to the exponential size of their matrix representation, those linear maps are traditionally depicted as quantum circuits, an assemblage of elementary quantum gates similar to the more common boolean circuits. Given a quantum circuit \(n\to m\), evaluating a coefficient of the corresponding \(2^{m}\times 2^{n}\) matrix (i.e. evaluating the circuit with a given input) typically requires exponential time. However, there are some specific classes of quantum circuits, or fragments, that can be classically simulated in polynomial time. Examples are the Clifford fragment (as asserted by the Gottesman-Knill theorem) as well as the fragment that will particularly interest us in this paper, the nearest-neighbour matchgates [24]. Investigating those tractable fragments allows a better understanding of the computational advantage of quantum computing. The reference for all elementary results on quantum circuits is [19].
Taking the diagrammatical circuit representation seriously led to developing graphical languages for quantum computing [8]. Those languages are equational theories described by elementary gates and local identities between diagrams. Such languages come with an
interpretation into linear maps. A language is said to be universal for a class of linear maps if any linear map in the class is the interpretation of a diagram in the language. A language is said to be complete if two diagrams with the same interpretation are equivalent up to the equational theory, meaning that one can be rewritten into the other using the local rewriting rules of the equational theory. In general, completeness is the most challenging property to prove.
The first quantum graphical language to appear was the ZX-calculus in 2008 [8]. It was rapidly shown to be universal for all linear maps. However, providing a complete set of rewriting rules took another ten years (see [26] for a history of completeness) and first required a translation through another language, the ZW-calculus [18].
The ZW-calculus was introduced in [9] as a graphical representation of the two kinds of tripartite entanglement for qubits, namely the GHZ-states and W-states. It then appeared that this calculus had very nice algebraic properties allowing the internal encoding of arithmetical operations. Those properties allowed the ZW-calculus to be the first proven universal and complete language for linear maps [13]. Despite this historical importance, the ZW-calculus gathered less attention than other languages, seen as more connected to quantum computing. Still, we must mention interesting connections with fermionic quantum computing [12], and recent works importing some ZW-calculus primitives into ZX-calculus to exploit their algebraic properties [20, 28]. In this paper, we show that ZW-calculus has very strong connections with a specific family of quantum circuits: the matchgates.
Matchgates were introduced in 2002 by Valiant [24]. They are linear maps defined by counting the perfect matchings of a graph from which some vertices are removed depending on the inputs. This underlying combinatorial structure allows the corresponding quantum circuits to be classically simulated using the polynomial-time FKT algorithm for perfect matching counting [1, 22]. The theory of matchgates was then developed further into the concept of holographic algorithms [25]. While some connections between graphical languages and holographic algorithms have been investigated [3], we are not aware of any diagrammatical approach to the original concept of matchgates before the present work, except for a mention in [12].
The main contribution of this paper is the introduction of a fragment of the ZW-calculus, that we call planar W-calculus. We show that this language is universal and complete for the planar matchgate fragment of quantum computation. The completeness proof relies on designing a normal form and a rewriting strategy to reach it. We also define a pro of matchgate computations by showing the compositionality of the matchgate identities introduced in [5]. The combinatorial characterisation of matchgate computations then directly follows from the correspondence with the graphical language. Hence one can see this paper as a reformulation of matchgate theory in a compositional framework.
The paper is structured as follows. Section 2 introduces our graphical primitives, their interpretation as linear maps and their combinatorial properties: the interpretation of a diagram can be deduced by counting the number of perfect matching of the underlying weighted graph. We present the generators and elementary rewrite rules of the language as well as an essential syntactic sugar: the fermionic swap that emulates the swap gate, which is not part of our language. Section 3 introduces the normal form and proves the completeness of the language. In Section 4, we properly define a pro of matchgates characterised as the linear maps satisfying the matchgate identities. We show that our language is universal for matchgates, **i.e.**, that the interpretation of a diagram is always a matchgate and that all matchgates correspond to a diagram. Finally, in Section 5, we sketch future directions of research suggested by the connection we identified between ZW-calculus and perfect matching counting.
## 2 Perfect Matchings and Planar W-Calculus
We define our fragment of the ZW-calculus, the _planar W-calculus_, by defining its diagrams. Any diagram with \(n\) inputs and \(m\) outputs \(D:n\to m\) is interpreted as a linear map \([\![D]\!]:\mathbb{C}^{2^{n}}\to\mathbb{C}^{2^{m}}\) inductively as follows:
In particular, note that we do _not_ use the usual swap diagram, hence the name _planar_. We do have, however, the so-called cup and cap, satisfying the "snake equations":
In the following, with \(D:1\to 1\), we may use the following notation: \(D^{\otimes\vec{b}}\) when \(\vec{b}\) is a bitstring, to represent \(D^{b_{1}}\otimes...\otimes D^{b_{n}}\) with \(D^{0}=id_{1}\) and \(D^{1}=D\). We call a diagram \(D\) a scalar if it has no input and no output, i.e. \(D:0\to 0\). In the category-theoretic terminology, such a collection of diagrams defines a pro, a strict monoidal category whose monoid of objects is generated by a unique element, and not a prop, which requires the category to be symmetric, _i.e._ to have swap diagrams. Furthermore, the presence of the cups and caps makes the category a compact-closed pro. We define \(\mathbf{Qubit}\) to be the prop whose \(n\to m\) morphisms are linear maps \(\mathbb{C}^{2^{n}}\to\mathbb{C}^{2^{m}}\). Hence \([\![\cdot]\!]:\mathbf{pW}\to\mathbf{Qubit}\) is a pro morphism.
We add the following two generators: the black spider and the binary white spider, whose interpretations are detailed in the next subsections.
### Black Spider
To manipulate binary words \(\alpha\in\{0,1\}^{n}\) and \(\beta\in\{0,1\}^{m}\), we will denote \(\alpha\oplus\beta\in\{0,1\}^{n}\) the bitwise XOR (if \(n=m\)), \(\alpha\cdot\beta\in\{0,1\}^{n+m}\) the concatenation, \(|\alpha|\in\{0,...,n\}\) the Hamming weight, _i.e._, the number of ones in the word \(\alpha\), and \(|\alpha|_{2}\in\{0,1\}\) the parity of this weight, \(0\) if even and \(1\) if odd. The _black spider_ (or black node) is given by the following interpretation:
In other words, the black spider gives output \(1\) if and only if exactly one of its legs (inputs or outputs) has value \(|1\rangle\) and all the others \(|0\rangle\). As inputs and outputs behave exactly the same, one can use cups and caps in order to transform inputs into outputs and vice-versa:
Moreover, as the input order does not matter, one can bend the wires and move black spiders around without altering the resulting linear map: we say that the black nodes are _flexsymmetric_ [7]. Flexsymmetry of the black spider allows us to see diagrams as graphs with fixed input and output edges. Fixing the input and output edges, any graph isomorphism preserves the semantics.
With this graphical interpretation in mind, one can understand the interpretation of a scalar diagram, composed of only black spiders, as counting the number of perfect matchings of the underlying graph. To see this, one can use the interpretation of a single edge, which is simply the identity \(|0\rangle\!\langle 0|+|1\rangle\!\langle 1|\). This interpretation gives a useful insight into the diagrams: given an edge, one can partition the set of perfect matchings between those that contain this edge and those that don't:
In the case where the graph is an actual graph, without half edges, the resulting map is a scalar (no inputs or outputs). One can show by induction that this scalar corresponds to the number of ways of choosing a set of edges such that each vertex is covered by exactly one edge, in other words _the number of perfect matchings_ of the graph.
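As a sanity check of this combinatorial reading, here is a small brute-force sketch in Python (our own illustration, with graphs given as edge lists) that computes the total weight of the perfect matchings, the value a scalar diagram evaluates to:

```python
from itertools import combinations

def perfect_matching_weight(n_vertices, edges):
    """Total weight of the perfect matchings of a graph.
    `edges` is a list of (u, v, weight) triples; with unit weights this
    is the number of perfect matchings, i.e. the interpretation of the
    scalar diagram whose black nodes are the vertices."""
    if n_vertices % 2:
        return 0
    total = 0
    for subset in combinations(edges, n_vertices // 2):
        covered = [x for (u, v, _) in subset for x in (u, v)]
        if len(set(covered)) == n_vertices:   # each vertex covered exactly once
            w = 1
            for (_, _, weight) in subset:
                w *= weight
            total += w
    return total

# the 4-cycle has exactly two perfect matchings
print(perfect_matching_weight(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]))  # 2
```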
### Binary White Spider
The last generator of the planar W-calculus is the _binary white spider_, given, for any \(r\in\mathbb{C}\), by:
which corresponds to the usual binary white spider with weight \(r\) of the ZW-calculus. This binary spider corresponds to having a weight \(r\) on an edge of the graph. When \(r\in\mathbb{N}\), the interpretation is straightforward: the white spider can be replaced by \(r\) edges:
And in particular,
Let us interpret the white spiders as weights on the edges of a planar graph \(G\) with black spiders as its vertices. Consider the same graph \(G^{\prime}\) without weights, and a perfect matching \(P\) of \(G^{\prime}\). If an edge \(e\) of \(P\) has weight \(r\in\mathbb{N}\) in \(G\), then it can be replaced by \(r\) parallel edges. In other words, the single perfect matching \(P\) is replaced by \(r\) perfect matchings when \(e\) has weight \(r\). By doing this for every edge, one can see that each perfect matching of \(G^{\prime}\) corresponds to a perfect matching of \(G\) with a _weight_ that is the product of its edge weights, instead of weight \(1\) in \(G^{\prime}\). For \(r\in\mathbb{C}\), one cannot replace a white spider by a given number of edges, but the interpretation is the same: the edge contributes to the perfect matchings that contain it with a _weight_ \(r\).
_Example 2.1_.: (a scalar diagram and the perfect matchings of its underlying graph; figure omitted)
Diagrams generated by the black and binary white nodes, within the framework described at the beginning of the section, are called \(\mathrm{p}\!\mathbf{W}\)-diagrams.
### The FKT Algorithm
In general, counting the number of perfect matchings in a graph is an #P-complete problem [23]. However, for planar graphs the same problem turns out to be surprisingly easy, as
Fisher, Temperley and Kasteleyn showed that it is in P [14, 22]. The main idea behind the algorithm is that for planar graphs, it is possible to find a good orientation of the edges (called a Pfaffian orientation) in polynomial time such that the number of perfect matchings is the Pfaffian of the adjacency matrix \(A\) (actually of its skew-symmetric version, called the Tutte matrix) of the oriented graph. A result due to Cayley then shows that the Pfaffian is the square root of the determinant of \(A\).
Note that one can find such an orientation for any planar graph, even weighted with complex weights, and the equality \(\mathrm{pf}(A)=\sqrt{\det(A)}\) still holds. Therefore, computing the total _weight_ of perfect matchings in a complex-weighted planar graph is in P.
**Proposition 2.2**.: _Let \(D\) be a scalar \(\mathrm{p}\mathbf{W}\)-diagram. Then \(\llbracket D\rrbracket\) is computable in polynomial time in the number of black nodes._
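As a small illustration (our own toy example), the \(4\)-cycle with the orientation \(1\to 2\), \(2\to 3\), \(3\to 4\), \(1\to 4\) has an odd number of edges pointing clockwise around its bounded face, so this orientation is Pfaffian, and the square root of the determinant of its skew-symmetric adjacency matrix recovers its two perfect matchings:

```python
import numpy as np

# skew-symmetric (Tutte) matrix of the oriented 4-cycle
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (0, 3)]:
    A[i, j], A[j, i] = 1.0, -1.0

print(np.sqrt(np.linalg.det(A)))   # 2.0 = number of perfect matchings
```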
### Fermionic Swap
The usual ZW-calculus does have another generator that we did not explicitly include in our fragment, called the _fermionic swap_:
\[\llbracket\,\mathcal{F}\,\rrbracket:=\sum_{x,y\in\{0,1\}}(-1)^{xy}\,|yx\rangle\!\langle xy|\]
However, it turns out that the fermionic swap is just syntactic sugar, and it is actually in our fragment:
(diagram omitted: the decomposition of the fermionic swap into black and white spiders)
Notice that the previous equation also appears in [6] to relate planar and non-planar matchgates. It is very useful to treat this piece of diagram as a generator of its own, especially as a particular kind of swap, which shares a lot of (but not all) properties of the symmetric braiding of props. In particular:
(diagrammatic equation omitted)

where \(|D|\) is the number of black nodes in the diagram \(D\).
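A quick numerical check (ours, with \(|xy\rangle\) placed at index \(2x+y\) in the computational basis) confirms two of these shared properties: the fermionic swap is involutive like the ordinary swap, but differs from it by a sign on \(|11\rangle\):

```python
import numpy as np

F = np.zeros((4, 4))
for x in (0, 1):
    for y in (0, 1):
        F[2 * y + x, 2 * x + y] = (-1) ** (x * y)   # |xy> -> (-1)^{xy} |yx>

SWAP = np.eye(4)[[0, 2, 1, 3]]                      # the ordinary swap

print(np.allclose(F @ F, np.eye(4)))   # True: involutive
print((F - SWAP).nonzero())            # only the |11><11| entry differs
```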
## 3 Completeness
The planar W-calculus is introduced with an equational theory, given in Figure 1, relating together diagrams with the same semantics. We write \(\mathrm{p}\mathbf{W}\vdash D_{1}=D_{2}\) when one can turn diagram \(D_{1}\) into diagram \(D_{2}\) by applying the equations of Figure 1 locally.
**Proposition 3.1**.: _The equational theory of Figure 1 preserves the semantics:_
\[\mathrm{p}\mathbf{W}\vdash D_{1}=D_{2}\quad\implies\quad\llbracket D_{1} \rrbracket=\llbracket D_{2}\rrbracket\]
In the following, we will show that the converse also holds, that is, that whenever two diagrams have the same semantics, they can be turned into one another using the equational theory. Intuitively, this implies that the equational theory completely captures the interaction of generators with one another in the fragment.
To show this result, we give a notion of normal form, which we call W-graph-state with X-gates (WGS-X for short), then a refinement of that normal form (reduced WGS-X form) which can be shown to be unique, and we give a rewrite strategy (derivable from the equational theory) to turn any \(\mathfrak{pW}\)-diagram into this form.
### Normal Form
The first step we take towards defining a normal form is a simplification, making use of the compact structure of the underlying pro, where we relate maps and states:
**Proposition 3.2**.: _There is an isomorphism between \(\mathfrak{pW}(n,m)\) and \(\mathfrak{pW}(0,n+m)\), obtained by bending all input wires into output wires using the compact structure (diagram omitted)._
This isomorphism allows us to consider only states rather than maps in the following.
Then, we define W-graph-states, by first defining ordered weighted graphs:
**Definition 3.1** (Ordered \(R\)-Weighted Graph).: \(G=(V,E,w)\) is called an ordered \(R\)-weigthed graph if:
* \(V\) is a set endowed with a total order \(\prec\) (or equivalently a sequence)
* \(E\subset V\times V\) is such that \((u,v)\in E\implies u\prec v\)
* \(w:E\to R\setminus\{0\}\) maps each edge to its weight
**Definition 3.2** (W-Graph-State).: Let \(G=(V,E,w)\) be an ordered weighted graph. Then, \(\operatorname{WGS}(G)\) is defined as the \(\mathfrak{pW}\)-diagram where:
* Each vertex in \(V\) gives a W-spider linked to an output through an additional \(\bullet\) (the order on \(V\) gives the order of the outputs)
* Each (weighted) edge \((u,v)\) gives a white dot with parameter \(w((u,v))\) linked to the W-spiders obtained from \(u\) and \(v\)
* All wire crossings in \(\mathrm{WGS}(G)\) are fermionic swaps
* No output wire crosses another wire
* There are no self-intersecting wires

Figure 1: Axioms of the planar W-calculus.
When an edge has weight \(1\) we may ignore the white dot and represent the edge as a simple wire. Notice that there are several ways to build \(\mathrm{WGS}(G)\), but all of them are equivalent thanks to the axioms on the fermionic swap, together with the provable identities in Lemmas 3.3 and 3.4:
**Lemma 3.3**.: _(a diagrammatic identity; figure omitted)_

**Lemma 3.4**.: _(a diagrammatic identity; figure omitted)_

**Definition 3.3** (WGS-X form).: We say that a \(\mathrm{p}\mathbf{W}\)-state \(D\) on \(n\) qubits is in:
* **WGS-X form** if there exist \(s\in\mathbb{C}\), \(G=([1,n],E,w)\) an ordered graph, and \(\vec{b}\in\{0,1\}^{n}\) such that \(D=s\cdot X^{\otimes\vec{b}}\circ\mathrm{WGS}(G)\), where \(X\) is the NOT gate.
* **pseudo-WGS-X form** if it is in WGS-X form with potentially vertices linked to several outputs, additional white dots of weight \(r\neq 0\) on wires that do not correspond to edges in the graph, and potentially fermionic swaps between outputs.
* **reduced WGS-X form** (rWGS-X) if it is in WGS-X form and:
\[\forall i,\ (b_{i}=0\implies\nexists j,\ (i,j)\in E)\]
i.e. \(b_{i}=0\) is only possible if vertex \(i\) has no neighbour on its right.
_Example 3.5_.: (a WGS-X form built from a small weighted graph; figure omitted)
### Rewrite Strategy
We define in this section a rewrite strategy, derived from the equational theory, that terminates in a normal form (WGS-X). Doing this naively is made difficult by the potential presence of fermionic swaps \(\mathcal{F}\) wherever we are looking for patterns to rewrite. Thankfully, the last 5 equations in Figure 1, together with the above Lemmas 3.3 and 3.4, essentially tell us that we can treat those as usual swaps, with the only catch that removing self-loops or moving wires past black nodes adds a \(-1\) weight to the wires.
In the upcoming rewrite strategy, we will hence only specify the patterns without potential fermionic swaps inside. Should there be some present, it is understood that they will be moved out of the pattern, before the rewrite occurs. The rules necessary for the rewrite strategy are given in Figure 2.
**Proposition 3.6**.: _The rewrite rules of Figure 2 are derivable from the equational theory of Figure 1 and hence are sound._
For the rewrite strategy to terminate, we need to distinguish between different types of nodes:
Figure 3: Rules for reduced WGS-X form, together with rule \((*)\) when the leftmost black node is a type-0 boundary node.
Figure 2: Rewrite rules. All these rules except \((*)\) are supposed to apply when any of the white nodes is replaced by the identity (i.e. when its weight is 1). Rule \((*)\) can only be applied if at least one of the black nodes is internal, and if none of the other rules applies.
**Definition 3.4** (Boundary Node / Internal Node).: A node is a _boundary node of type 1_ if it is linked directly to an output. A node is a _boundary node of type 0_ if it is connected to a binary boundary node of type 1.
We say that a black node of \(D\) is _internal_ if it is not a boundary node.
The rewrite strategy is then laid out as follows:
**Definition 3.5** (Rewrite Strategy).: The rewrite strategy is defined in 3 steps:
1. Apply the rewrites of Figure 2 in any order, following their constraints, until none applies anymore. The diagram ends up in pseudo-WGS-X form.
2. First, whenever a type-1 boundary node is linked to \(n>1\) outputs directly, apply the dedicated rule (diagram omitted) to the \(n-1\) rightmost such outputs (the top black node then becomes a type-0 boundary node, the bottom one a type-1 boundary node). Then, push all potential fermionic swaps between outputs inside the graph part. Finally, move boundary weights up into the edges of the WGS. The diagram ends up in WGS-X form.
3. Whenever a type-0 vertex in the graph has a right neighbour, depending on the arity of the nodes, apply rule \((*)\) or one of the rules of Fig. 3 between the two nodes (and apply any other possible rule before going on).
A claim was made in Definition 3.5 about the form of the diagram at the end of each step. Those claims are going to be proven in the following (Proposition 3.7). At the same time, we are going to show that the rewrite terminates.
**Proposition 3.7** (Termination in rWGS-X form).: _The rewrite strategy terminates in polynomial time. Moreover, after Step 1 of the rewrite, the diagram is indeed in pseudo-WGS-X form, after Step 2, it is in WGS-X form, and after Step 3, it is in rWGS-X form._
An important operation on WGS-X states that has a simple graphical interpretation is the following:
**Lemma 3.8**.: _For any diagram \(D\) in WGS-X form \((s,G,\vec{b})\), applying a certain unary generator (diagram omitted) on the \(i\)th output can be turned into the WGS-X form \((s,G\setminus\{i\},\vec{b}\setminus b_{i})\), where \(G\setminus\{i\}\) is defined as the graph \(G\) from which vertex \(i\) is removed (together with all edges linked to \(i\) and their weights), and similarly \(\vec{b}\setminus b_{i}\) is defined as the sequence \(\vec{b}\) from which the \(i\)th element is removed._
This allows us to prove the following:
**Lemma 3.9**.: _For any diagram \(D\) in WGS-X form \((s,G,\vec{b})\):_
\[\llbracket D\rrbracket=0\iff s=0\]
We may then prove that 0-diagrams can be put in a very well-defined form:
**Lemma 3.10**.: _Let \(D\) be a WGS-X state such that \(\llbracket D\rrbracket=0\). Then \(D\) can be put in the WGS-X form \((0,G=([1,n],\emptyset,\_),\vec{0})\), i.e.:_
\[\mathrm{p}\mathbf{W}\vdash D=0\cdot\mathrm{WGS}\big{(}([1,n],\emptyset,\_)\big{)}\]
We are now able to prove the completeness of the equational theory.
**Theorem 3.11**.: _Let \(D_{1}\) and \(D_{2}\) be two \(\mathrm{pW}\)-diagrams. Then:_
\[\llbracket D_{1}\rrbracket=\llbracket D_{2}\rrbracket\iff\mathrm{pW}\vdash D_{1 }=D_{2}\]
This last theorem, together with the fact that rewriting to rWGS-X form is polynomial (Proposition 3.7), makes deciding whether two \(\mathrm{p}\mathbf{W}\)-diagrams are semantically equivalent a problem in P.
## 4 Matchgates
This section aims at characterising exactly the linear maps that \(\mathrm{p}\mathbf{W}\)-diagrams represent.
### Matchgate Identities
Valiant first introduced matchgate identities to characterise \(2\to 2\) matchgates, a family of linear maps described in a combinatorial way [24]. In [5], the matchgate identities have been extended to characterise matchgates of any size. In the literature, there is a close link between matchgate identities and the Grassmann-Plücker identities applied to Pfaffians. This is not the case here, as the diagrammatic techniques allow us to directly link matchgate identities to matchings without the intermediary of the Pfaffian. We can fully recover the connection with Pfaffians through the Fisher-Kasteleyn-Temperley algorithm for counting perfect matchings [1, 22]; more details on this are outlined in Section 5. Many of the proofs of this section are inspired by the very useful clarification of matchgate theory presented in [6]. Notice that contrary to the literature that differentiates between matchgrids, matchcircuits or matchnets, we will only use the term matchgate for any linear map satisfying the matchgate identities.
Recall that for binary words \(\alpha\in\{0,1\}^{n}\) and \(\beta\in\{0,1\}^{m}\), \(\alpha\oplus\beta\in\{0,1\}^{n}\) is the bitwise XOR (if \(n=m\)), \(\alpha\cdot\beta\in\{0,1\}^{n+m}\) the concatenation, \(|\alpha|\in\{0,...,n\}\) the Hamming weight, _i.e._, the number of ones in the word \(\alpha\), and \(|\alpha|_{2}\in\{0,1\}\) the parity of this weight, \(0\) if even and \(1\) if odd.
**Definition 4.1** (Matchgate Identities).: A tensor \(\Gamma\in\mathbb{C}^{2^{n}}\) satisfies the **matchgate identities** (MGIs) if for all \(\alpha,\beta\in\{0,1\}^{n}\):
\[\sum_{k=1}^{|\alpha\oplus\beta|}(-1)^{k}\Gamma_{\alpha\oplus e_{p_{k}}}\Gamma_ {\beta\oplus e_{p_{k}}}=0\]
Where \(e_{p_{k}}\in\{0,1\}^{n}\) is the binary word which is zero everywhere except in position \(p_{k}\), which is the \(k\)th position in the set \(\{p_{1},...,p_{|\alpha\oplus\beta|}\}\subseteq\{1,...,n\}\) of positions in which the words \(\alpha\) and \(\beta\) differs.
The matchgate identities are not linear, so the set of matchgates is not a subspace of the vector space \(\mathbb{C}^{2^{n}}\) but an algebraic variety [5]. In general, those identities are not algebraically independent, _i.e._ are not all strictly necessary to describe match-tensors.
Indeed, there are numerous symmetries in those identities. For example, the case \(\alpha=\beta\) directly gives empty sums and exchanging \(\alpha\) and \(\beta\) gives the same identity. Interestingly, one can replace half of the identities with a parity condition.
**Proposition 4.1** (Parity condition [6]).: _If \(\Gamma\) satisfies the matchgate identities then it satisfies the **parity condition**: for all \(\alpha,\beta\in\{0,1\}^{n}\), \(|\alpha|_{2}\neq|\beta|_{2}\Rightarrow\Gamma_{\alpha}\Gamma_{\beta}=0\)._
The parity condition splits match-tensors into two groups, the one with odd parity, such that \(|\alpha|\) even implies \(\Gamma_{\alpha}=0\), and the one of even parity, such that \(|\alpha|\) odd implies \(\Gamma_{\alpha}=0\). In particular, the parity condition directly implies that all terms in identities with \(|\alpha|_{2}\neq|\beta|_{2}\) are zero. Notice that the parity condition is not sufficient. We still need matchgate identities in general.
However, the parity condition is sufficient for \(n\leq 3\), but no longer for \(n=4\), the original case considered by Valiant [24]. In particular, for \(n=0\), the matchgate identities are trivially true; hence they are satisfied by all scalars (processes \(0\to 0\)).
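For small \(n\) the identities are easy to check by brute force. The sketch below (our own; the index conventions are an assumption) tests two rank-4 tensors: the nested-cups pattern \(\sum_{a,b}|abba\rangle\), which comes from a planar diagram and passes, and the crossed pattern \(\sum_{a,b}|abab\rangle\), which corresponds (up to wire ordering) to the state form of the swap and fails:

```python
from itertools import product

def satisfies_mgi(gamma, n, tol=1e-9):
    """Brute-force check of the matchgate identities (Definition 4.1);
    `gamma` maps n-bit tuples to scalars."""
    flip = lambda w, p: tuple(b ^ (i == p) for i, b in enumerate(w))
    for alpha in product((0, 1), repeat=n):
        for beta in product((0, 1), repeat=n):
            positions = [i for i in range(n) if alpha[i] != beta[i]]
            s = sum((-1) ** k * gamma[flip(alpha, p)] * gamma[flip(beta, p)]
                    for k, p in enumerate(positions, start=1))
            if abs(s) > tol:
                return False
    return True

zero = {w: 0.0 for w in product((0, 1), repeat=4)}
nested, crossed = dict(zero), dict(zero)
for a, b in product((0, 1), repeat=2):
    nested[(a, b, b, a)] = 1.0    # nested cups, a planar diagram
    crossed[(a, b, a, b)] = 1.0   # swap-like pattern

print(satisfies_mgi(nested, 4))   # True
print(satisfies_mgi(crossed, 4))  # False, e.g. at alpha=1000, beta=0111
```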
### The Pro of Matchgates
We will now use the matchgates to define a pro. So far, matchgate identities have been used to characterise vectors seen as tensors, without consideration of inputs and outputs. To apply them to linear maps \(f:n\to m\), we will use the state form: \([f]:0\to n+m\) described in Proposition 3.2. It allows us to define matchgates.
**Definition 4.2** (Matchgates).: A **matchgate** is a linear map \(f:\mathbb{C}^{2^{n}}\to\mathbb{C}^{2^{m}}\) such that \([f]\) satisfies the matchgate identities.
We would like to define a sub-pro of **Qubit** whose processes are matchgates, however, there are a few properties to check before that. We start by showing stability by the tensor product.
**Lemma 4.2**.: _Given two linear maps \(f:a\to b\) and \(g:c\to d\) whose state forms \([f]\in\mathbb{C}^{2^{a+b}}\) and \([g]\in\mathbb{C}^{2^{c+d}}\) satisfy the matchgate identities, then \([f\otimes g]\in\mathbb{C}^{2^{a+c+b+d}}\) satisfies the matchgate identities._
The next thing to check is stability by composition; this follows from the following result:
**Lemma 4.3**.: _If \(\Gamma\in\mathbb{C}^{2^{n+2}}\) satisfies the matchgate identities, then the tensor obtained by contracting two consecutive indices satisfies the matchgate identities._
Notice that the consecutive indices assumption is essential here. Without it, we could easily construct the swap gate, which does not satisfy the matchgate identities. Being able to contract consecutive indices is enough to show stability by composition. The idea is to iterate contraction on consecutive indices until we obtain enough cups to use the snake equation, pictorially:
Now that we have stability by tensor and composition, it only remains to show that the identities are matchgates. \(id_{0}\) is a scalar, so directly a matchgate. The state form of \(id_{1}\) is the cap, which is a matchgate as it satisfies the parity condition (sufficient for \(n=2\)). The fact that all \(id_{n}\) are matchgates follows from stability by the tensor product. We can now state the main theorem of this subsection.
**Theorem 4.4** (Match).: _The matchgates form a pro **Match**, which is a sub-pro of **Qubit**._
Notice that **Match** is compact closed since the cup and the cap are both matchgates. Hence we can also use process/state duality in **Match** without any worry. As expected, all \(\mathrm{p}\mathbf{W}\)-diagrams represent matchgates.

**Lemma 4.5**.: _The functor \(\llbracket\_\rrbracket:\mathrm{p}\mathbf{W}\rightarrow\textbf{Qubit}\) factorises through **Match**, i.e., the interpretations of \(\mathrm{p}\mathbf{W}\)-diagrams are matchgates._
Proof.: We have to prove that the interpretation of any \(\mathrm{pW}\) diagram is a matchgate. To do so, as matchgates are stable by composition and tensor product we only have to check that the interpretations of the generators are matchgates. The state forms of the generators have at most three outputs (\(n\)-ary spiders can be decomposed into binary and ternary spiders), so it is sufficient to check the parity condition, which is indeed satisfied by the interpretations of the generators.
### Universality
Now that we proved that all \(\mathrm{pW}\)-diagrams represent matchgates, it remains to show that all matchgates can be represented by a \(\mathrm{pW}\) diagram, in other words, that \(\mathrm{pW}\) is universal for **Match**. This will require a few additional properties of matchgates, adapting some results of [6].
**Lemma 4.6**.: _If \(\Gamma\) satisfies the matchgate identities and \(\Gamma_{\mathbf{0}}=1\), where \(\mathbf{0}\) is the binary word consisting only of \(0\)s, then it is uniquely determined by its coefficients \(\Gamma_{\alpha}\) where \(|\alpha|=2\)._
Proof.: If \(|\alpha|=0\) then we already know that \(\Gamma_{\alpha}=1\) and the parity condition implies that \(\Gamma_{\alpha}=0\) if \(|\alpha|=1\). We show that for all \(\alpha\) with \(3\leq|\alpha|\), we can express \(\Gamma_{\alpha}\) from coefficients \(\Gamma_{\beta}\) where all \(\beta\)s have strictly smaller Hamming weights. Let \(i\) be the first position where \(\alpha\) and \(\mathbf{0}\) differ, the matchgate identity corresponding to \(\alpha\oplus e_{i}\) and \(\mathbf{0}\oplus e_{i}\) is:
\[\sum_{k=1}^{|\alpha|}(-1)^{k}\Gamma_{\alpha\oplus e_{i}\oplus e_{p_{k}}} \Gamma_{e_{i}\oplus e_{p_{k}}}=0\]
Here the \(p_{k}\) are exactly the position where \(\alpha\) is \(1\), in particular \(i=p_{1}\) so:
\[\Gamma_{\alpha}=\Gamma_{\alpha}\Gamma_{\mathbf{0}}=\sum_{k=2}^{|\alpha|}(-1)^{ k}\Gamma_{\alpha\oplus e_{i}\oplus e_{p_{k}}}\Gamma_{e_{i}\oplus e_{p_{k}}}\]
For \(k\geq 2\), we have \(|e_{i}\oplus e_{p_{k}}|=2\) and \(|\alpha\oplus e_{i}\oplus e_{p_{k}}|=|\alpha|-2\), so \(\Gamma_{\alpha}\) is completely determined by coefficients corresponding to strictly smaller Hamming weights. It follows that all \(\Gamma_{\alpha}\) can be expressed from the \(\Gamma_{\beta}\)s with \(|\beta|=2\).
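The recursion in this proof can be run directly. The sketch below (ours; `pairs` holds the weight-2 coefficients indexed by the positions of their two ones) rebuilds a full tensor from this data, here recovering \(\Gamma_{1111}=1\) for the nested-cups tensor \(\sum_{a,b}|abba\rangle\):

```python
from itertools import product

def reconstruct(pairs, n):
    """Rebuild a matchgate tensor with Gamma_{0...0} = 1 from its weight-2
    coefficients, following the recursion in the proof of Lemma 4.6."""
    word = lambda ones: tuple(1 if i in ones else 0 for i in range(n))
    cache = {}
    def g(w):
        if w in cache:
            return cache[w]
        ones = [i for i, bit in enumerate(w) if bit]
        if not ones:
            val = 1.0
        elif len(ones) % 2 == 1:
            val = 0.0                       # parity condition
        elif len(ones) == 2:
            val = pairs.get(tuple(ones), 0.0)
        else:
            i = ones[0]                     # first position where w is 1
            val = sum((-1) ** k
                      * g(word(set(ones) - {i, ones[k - 1]}))
                      * g(word({i, ones[k - 1]}))
                      for k in range(2, len(ones) + 1))
        cache[w] = val
        return val
    return {w: g(w) for w in product((0, 1), repeat=n)}

# weight-2 data of the nested-cups tensor sum_{a,b} |a b b a>
gamma = reconstruct({(1, 2): 1.0, (0, 3): 1.0}, 4)
print(gamma[(1, 1, 1, 1)])   # 1.0, matching the full tensor
```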
We will now be able to reuse the normal form from Section 3 to construct diagrams representing any matchgate.
**Lemma 4.7** (Universality).: \(\mathrm{pW}\) _is universal for **Match**._
Proof.: Relying on process/state duality, we only consider states \(0\to n\). Given \(\Gamma\) satisfying the matchgate identities, we will construct a \(\mathrm{p}\mathbf{W}\) diagram \(D\) such that \(\llbracket D\rrbracket=\Gamma\). We start by considering the case where \(\Gamma_{\mathbf{0}}=1\). Then we construct a weighted graph \(G\) on \(n\) vertices, setting the weight of the edge \((i,j)\) to \(\Gamma_{e_{i}\oplus e_{j}}\). We then take \(D\) to be the diagram in graph form corresponding to \(G\). By construction we then have \(\llbracket D\rrbracket_{\mathbf{0}}=1\) and \(\llbracket D\rrbracket_{e_{i}\oplus e_{j}}=\Gamma_{e_{i}\oplus e_{j}}\) for all \(i\neq j\). Furthermore, by Lemma 4.5, \(\llbracket D\rrbracket\) is a matchgate, so by Lemma 4.6, \(\llbracket D\rrbracket=\Gamma\).
Now if \(\Gamma_{\mathbf{0}}\neq 1\): First if \(\Gamma_{\mathbf{0}}\neq 0\) then \(\Gamma^{\prime}=\frac{1}{\Gamma_{\mathbf{0}}}\Gamma\) is of the right form so we can obtain \(D\) by adding a floating edge of weight \(\Gamma_{\mathbf{0}}\) to the diagram \(D^{\prime}\) representing \(\Gamma^{\prime}\). The last case is \(\Gamma_{\mathbf{0}}=0\), then if \(\Gamma=0\) we can represent \(\Gamma\) by any diagram and a floating black node, else let \(\beta\) be such that \(\Gamma_{\beta}\neq 0\), then \(\Gamma^{\prime}\) defined as \(\Gamma^{\prime}_{\alpha}=\Gamma_{\alpha\oplus\beta}\) satisfies \(\Gamma^{\prime}_{\mathbf{0}}\neq 0\) and there is a diagram \(D^{\prime}\) representing \(\Gamma^{\prime}\). A diagram \(D\) representing \(\Gamma\) is then obtained by plugging binary black nodes to the outputs of \(D^{\prime}\) corresponding to the positions where \(\beta\) is \(1\).
Notice that since **Match** is a sub-pro of **Qubit**, the completeness proof of Section 3 still holds in **Match**. It provides us with a universal and complete graphical language for matchgates.
**Theorem 4.8**.: \(\mathrm{p}\mathbf{W}\) _is universal and complete for **Match**._
## 5 Further Work
The proper definition and axiomatisation of the p**W**-calculus pave the way to diverse investigations of the connection between combinatorics and quantum computing. We briefly outline in this last section some very promising directions that are the subjects of ongoing research.
### New Simulation Techniques for Quantum Circuits
The identification of a fragment of the ZX-calculus exactly corresponding to the efficiently simulable Clifford fragment [4] allowed the design of new rewrite-based simulation techniques for quantum circuits, introduced in [15]. Those algorithms have a parametrised complexity which is polynomial in the number of Clifford gates but exponential in the number of \(T\)-gates (a gate outside of the Clifford fragment sufficient to reach approximate universality).
Similarly, we have identified an efficiently simulable fragment of ZW-calculus: the \(\mathrm{p}\mathbf{W}\)-calculus, exactly corresponding to matchgates. Adding the swap gate to \(\mathrm{p}\mathbf{W}\) we obtain another fragment of ZW which is exactly the fermionic ZW-calculus introduced in [12]. This calculus is universal for **Qubit** modulo an encoding trick: the dual-rail encoding. Equivalently, LFM (the fermionic ZW-calculus) is ZW where white nodes are constrained to have even arities, so adding arity-one white nodes (corresponding to preparing \(|+\rangle\) states) is enough to recover the full ZW-calculus, which is universal for **Qubit**. This situation suggests the possibility of designing rewrite-based simulation algorithms with complexities parametrised by the number of swap gates and/or \(|+\rangle\) preparations. It would lead to a brand new kind of quantum simulation technique exploiting the combinatorial structure of matchgates and directly connected to classical perfect matching counting algorithms.
### Combinatorial Interpretation of Full ZW-Calculus
In Section 2, we provided a combinatorial interpretation of p**W**-diagrams _via_ perfect matchings in planar graphs. This combinatorial approach directly extends to LFM-calculus _via_ perfect matchings in arbitrary graphs (which is #P-complete). Furthermore, we can also extend the interpretation to the full ZW-calculus, where white nodes can have arbitrary arities. To do so, we must consider hypergraph matchings, _i.e._, subsets of hyperedges covering each vertex exactly once. The arbitrary arity white nodes here play the role of hyperedges, and the black nodes, the role of vertices. Thus, the interpretation of ZW-scalars is the number of hypergraph matchings of the hypergraph underlying the diagram. Notice that hypergraph matching is also presented as the set cover problem in the literature. The full ZW-calculus
could offer new perspectives on set cover in the same way that \(\mathfrak{pW}\) does for perfect matchings. In particular, some reduction results appear to have very clear diagrammatical proofs.
Aside from perfect matchings, it seems that graphical languages can encode other counting problems on graphs or hypergraphs. Designing such languages could shed a new tensorial/diagrammatical light on the corresponding combinatorial problems. Those approaches are reminiscent of the recent ZH-based algorithm for #SAT introduced in [16] and related works linking graphical languages and counting complexity [10, 11]. Conversely, the question of applying similar combinatorial interpretations to other graphical languages such as ZX-calculus [8] or ZH-calculus [2] is also worth investigating.
### Towards a Diagrammatic Approach of Perfect Matching Counting
In Section 2, we used the Fisher-Kasteleyn-Temperley algorithm as a black box to compute the interpretation of \(\mathrm{p}\mathbf{W}\)-scalars in polynomial time. However, it seems possible to achieve the same result with purely diagrammatical techniques. In fact, applying the rewriting strategy described in Section 3 to a scalar reduces it to a normal form from which we can directly read the interpretation. It seems very probable that this requires only a polynomial number of rewrites.
This provides a way to count perfect matchings without referring to Pfaffian computation, and conversely, it gives a new algorithm to compute Pfaffians based on rewriting.
The FKT algorithm only applies to a specific class of graphs, called Pfaffian graphs, **i.e.**, the graphs admitting a Pfaffian orientation. In particular, all planar graphs are Pfaffian [14]. It seems that Pfaffian orientations are directly connected to the behaviour of fermionic swaps, whose lack of naturality introduces \(-1\) weights on the edges. More generally, all graphs not containing \(K_{3,3}\) are Pfaffian [17, 27] (we recall that planar graphs are precisely the graphs containing neither \(K_{3,3}\) nor \(K_{5}\) as minors). Moreover, there also exists a polynomial-time algorithm for \(K_{5}\)-minor-free graphs [21] based on graph decomposition. A large amount of work remains in re-expressing these different variations in diagrammatic terms and in understanding how our rewriting rules could encode the minor constraints.
Formalising and implementing those different algorithms is the object of ongoing work. The main difficulty is to identify the suitable data structures to manipulate the topological data of a given diagram, equivalently, the specific planar embedding of the corresponding graph.
|
2310.07645 | Solutions of the Schrödinger equation with Quarkonium potential to
predict the mass-spectra of the heavy mesons via series expansion method | In this study, a quarkonium potential is adopted as the quark-antiquark
interaction potential for predicting the mass spectra of heavy mesons. We
solved the radial Schr\"odinger equation analytically using the series
expansion method and obtained the energy eigenvalues. The present results are
applied for predicting the mass spectra of heavy mesons such as charmonium and
bottomonium. The present potential provides satisfying results in comparison
with experimental data and the work of other researchers with a maximum error
of 0.058 GeV | E. P. Inyang, E. P. Inyang, I. O. Akpan, E. S. William | 2023-10-11T16:44:01Z | http://arxiv.org/abs/2310.07645v1 | ###### Abstract
In this study, a quarkonium potential is adopted as the quark-antiquark interaction potential for predicting the mass spectra of heavy mesons. We solved the radial Schrodinger equation analytically using the series expansion method and obtained the energy eigenvalues. The present results are applied for predicting the mass spectra of heavy mesons such as charmonium and bottomonium. The present potential provides satisfying results in comparison with experimental data and the work of other researchers with a maximum error of \(0.058\,GeV\).
## I Introduction
The study of heavy quarkonium systems such as charmonium and bottomonium plays an important role in understanding the quantitative tests of quantum chromodynamics (QCD) and the standard model (Mutuk, 2018). These systems can be studied within the Schrodinger equation (SE) (Kumar and Chand, 2014). The solution of the SE with a spherically symmetric potential is one of the important problems in physics and chemistry, because it plays an important role in understanding the properties of constituent particles and the dynamics of their interactions (Abu-Shady _et al._, 2019). The fundamental potential used in studying quarkonium systems is the Cornell potential, also known as the Killingbeck potential, which captures two important features of the strong interaction, namely asymptotic freedom and quark confinement (Rai and Rataud, 2015; Bettini, 2018). The SE has been solved using various analytical methods such as the asymptotic iteration method (Kumar and Chand, 2014; Ikot _et al._, 2020), the Laplace transformation method (Abu-Shady _et al._, 2018), the supersymmetric quantum mechanics (SUSYQM) method (Abu-Shady _et al._, 2021), the Nikiforov-Uvarov (NU) method (Inyang _et al._, 2021; Edet _et al._, 2020; Inyang _et al._, 2021; Inyang _et al._, 2021; Akpan _et al._, 2021; Edet _et al._, 2019; Inyang _et al._, 2021; William _et al._, 2020; Inyang _et al._, 2021; William _et al._, 2020; Inyang _et al._, 2020; Inyang _et al._, 2021; Edet _et al._, 2020; Omugbe _et al._, 2021), the series expansion method (Inyang _et al._, 2021), and so on.
Most researchers have studied the mass spectra of heavy mesons with the Cornell potential (Rani et al., 2018; Ciftci and Kisoglu, 2018; Al-Jamel, 2019; Mansour and Gamal, 2018; Al-Oun _et al._, 2015; Omugbe _et al._, 2020). For instance, Ali _et al._ (2015) studied the energy spectra of mesons and hadronic interactions using Numerov's method. Their solutions were used to describe the phenomenological interactions between charm-anticharm quarks via the model. The model accurately predicts the mass spectra of charmed quarkonium as an example of mesonic systems. Also, Inyang _et al._ (2021) obtained solutions of the Klein-Gordon equation for the Yukawa potential using the NU method. The energy eigenvalues were obtained in both the relativistic and non-relativistic regimes. They applied the results to calculate the heavy-meson masses of charmonium and bottomonium.
The quarkonium or heavy-quark potential model takes the form (Abu-Shady and Ikot, 2019):
\[V(r,T)=\left(\frac{2a}{m_{\mathrm{D}}^{2}(T)}-a\right)\frac{e^{-m_{\mathrm{D}}(T)\,r}}{r}-\frac{2b}{m_{\mathrm{D}}^{2}(T)\,r}+\frac{2b}{m_{\mathrm{D}}(T)}-am_{\mathrm{D}}(T) \tag{1}\]
where \(a\) and \(b\) are potential strength parameters, and \(m_{\mathrm{D}}(T)\) is the temperature-dependent Debye mass, which vanishes at \(T=0\).
The aim of this work is to investigate the SE with the quarkonium potential in the framework of the series expansion method to predict the mass spectra of heavy quark-antiquark systems. To the best of our knowledge, this is the first time the quarkonium potential is studied with the aim of determining the mass spectra of heavy mesons.
The paper is organized as follows: In section 2, the bound state energy eigenvalues are calculated via the series expansion method. In section 3, the results are discussed. In section 4, the conclusion is presented.
## Methods
### Bound state solutions of the Schrodinger equation with quarkonium potential
We consider the radial SE of the form (Inyang _et al._, 2021).
\[\frac{d^{2}R(r)}{dr^{2}}+\frac{2}{r}\frac{dR(r)}{dr}+\left[\frac{2\mu}{\hbar^{2}}\left(E_{nl}-V(r)\right)-\frac{l(l+1)}{r^{2}}\right]R(r)=0 \tag{2}\]
where \(l\) is the angular quantum number taking the values \(0,1,2,3,\ldots\), \(\mu\) is the reduced mass of the quarkonium system, \(r\) is the inter-nuclear separation, and \(E_{nl}\) denotes the energy eigenvalues of the system.
Expanding the exponential term in Eq. (1) up to order three, in order to model the quark-antiquark interaction, yields
\[\frac{e^{-m_{\mathrm{D}}(T)\,r}}{r}=\frac{1}{r}-m_{\mathrm{D}}(T)+\frac{m_{\mathrm{D}}^{2}(T)\,r}{2}-\frac{m_{\mathrm{D}}^{3}(T)\,r^{2}}{6}+\cdots \tag{3}\]
Substituting Eq. (3) into Eq. (1) we have
\[V(r,T)=-\frac{\alpha_{0}}{r}+\alpha_{1}r+\alpha_{2}r^{2}+\alpha_{3} \tag{4}\]
where
\[-\alpha_{0}=\frac{2a}{m_{\mathrm{D}}^{2}(T)}-a-\frac{2b}{m_{\mathrm{D}}^{2}(T)},\quad\alpha_{1}=a-\frac{am_{\mathrm{D}}^{2}(T)}{2},\quad\alpha_{2}=\frac{am_{\mathrm{D}}^{3}(T)}{6}-\frac{am_{\mathrm{D}}(T)}{3},\quad\alpha_{3}=\frac{2b}{m_{\mathrm{D}}(T)}-\frac{2a}{m_{\mathrm{D}}(T)} \tag{5}\]
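The coefficients in Eq. (5) can be double-checked symbolically; the following sympy sketch (ours) expands Eq. (1) around \(r=0\) and reads off the Laurent coefficients:

```python
import sympy as sp

r, a, b, mD = sp.symbols('r a b m_D', positive=True)
V = (2*a/mD**2 - a)*sp.exp(-mD*r)/r - 2*b/(mD**2*r) + 2*b/mD - a*mD
poly = sp.expand(sp.series(V, r, 0, 3).removeO())

print(sp.simplify(-poly.coeff(r, -1)))  # alpha_0 = a - 2a/m_D**2 + 2b/m_D**2
print(sp.simplify(poly.coeff(r, 1)))    # alpha_1 = a - a m_D**2 / 2
print(sp.simplify(poly.coeff(r, 2)))    # alpha_2 = a m_D**3 / 6 - a m_D / 3
print(sp.simplify(poly.coeff(r, 0)))    # alpha_3 = 2b/m_D - 2a/m_D
```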
We substitute Eq.(4) into Eq.(2) and obtain
\[\frac{d^{2}R(r)}{dr^{2}}+\frac{2}{r}\frac{dR(r)}{dr}+\left[\varepsilon+\frac{G}{r}-Hr-Jr^{2}-\frac{L(L+1)}{r^{2}}\right]R(r)=0 \tag{6}\]
where
\[\varepsilon=\frac{2\mu}{\hbar^{2}}\left(E_{nl}-\alpha_{3}\right),\quad G=\frac{2\mu\alpha_{0}}{\hbar^{2}},\quad H=\frac{2\mu\alpha_{1}}{\hbar^{2}},\quad J=\frac{2\mu\alpha_{2}}{\hbar^{2}} \tag{7}\]

\[L(L+1)=l(l+1) \tag{8}\]
From Eq. (8),
\[L=-\frac{1}{2}+\frac{1}{2}\sqrt{\left(2l+1\right)^{2}} \tag{9}\]
Now we make an ansatz for the wave function

\[R(r)=e^{-\alpha r^{2}-\beta r}F(r) \tag{10}\]
where \(\alpha\) and \(\beta\) are positive constants whose values are to be determined in terms of the potential parameters.
Differentiating Eq. (10) twice gives

\[R^{\prime}(r)=\left[F^{\prime}(r)+\left(-2\alpha r-\beta\right)F(r)\right]e^{-\alpha r^{2}-\beta r} \tag{11}\]

\[R^{\prime\prime}(r)=\left[F^{\prime\prime}(r)+2\left(-2\alpha r-\beta\right)F^{\prime}(r)+\left(-2\alpha+\left(-2\alpha r-\beta\right)^{2}\right)F(r)\right]e^{-\alpha r^{2}-\beta r} \tag{12}\]
Substituting Eqs.(10), (11) and (12) into Eq.(6) we have,
\[F^{\prime\prime}(r)+\left[-4\alpha r-2\beta+\frac{2}{r}\right]F^{\prime}(r)+\left[\left(4\alpha^{2}-J\right)r^{2}+\left(4\alpha\beta-H\right)r+\left(G-2\beta\right)\frac{1}{r}-\frac{L(L+1)}{r^{2}}+\left(\varepsilon+\beta^{2}-6\alpha\right)\right]F(r)=0 \tag{13}\]
The function \(F(r)\) is considered as a series of the form

\[F(r)=\sum_{n=0}^{\infty}c_{n}r^{2n+L} \tag{14}\]
Taking the first and second derivatives of Eq. (14) we obtain

\[F^{\prime}(r)=\sum_{n=0}^{\infty}(2n+L)c_{n}r^{2n+L-1} \tag{15}\]

\[F^{\prime\prime}(r)=\sum_{n=0}^{\infty}(2n+L)(2n+L-1)c_{n}r^{2n+L-2} \tag{16}\]
The substitution of Eqs. (14), (15) and (16) into Eq. (13) gives

\[\begin{array}{l}\sum\limits_{n=0}^{\infty}(2n+L)(2n+L-1)c_{n}r^{2n+L-2}+\left[-4\alpha r-2\beta+\frac{2}{r}\right]\sum\limits_{n=0}^{\infty}(2n+L)c_{n}r^{2n+L-1}\\ \quad+\left[\left(4\alpha^{2}-J\right)r^{2}+\left(4\alpha\beta-H\right)r+\left(G-2\beta\right)\frac{1}{r}-\frac{L(L+1)}{r^{2}}+\left(\varepsilon+\beta^{2}-6\alpha\right)\right]\sum\limits_{n=0}^{\infty}c_{n}r^{2n+L}=0\end{array} \tag{17}\]
By collecting powers of \(r\) in Eq. (17) we have

\[\begin{array}{l}\left[(2n+L)(2n+L-1)+2(2n+L)-L(L+1)\right]r^{2n+L-2}+\left[-2\beta(2n+L)+G-2\beta\right]r^{2n+L-1}\\ \quad+\left[-4\alpha(2n+L)+\varepsilon+\beta^{2}-6\alpha\right]r^{2n+L}+\left[4\alpha\beta-H\right]r^{2n+L+1}+\left[4\alpha^{2}-J\right]r^{2n+L+2}=0\end{array} \tag{18}\]
The powers of \(r\) in Eq. (18) are linearly independent, implying that each of the terms is separately equal to zero; noting that \(F\) is a non-zero function, it is the coefficients that must vanish. With this in mind, we obtain the relation for each of the terms.
\[(2n+L)(2n+L-1)+2(2n+L)-L(L+1)=0 \tag{19}\]
\[-2\beta(2n+L)+G-2\beta=0 \tag{20}\]
\[-4\alpha(2n+L)+\varepsilon+\beta^{2}-6\alpha=0 \tag{21}\]
\[4\alpha\beta-H=0 \tag{22}\]
\[4\alpha^{2}-J=0 \tag{23}\]
From Eq. (20)
\[\beta=\frac{G}{4n+2L+2} \tag{24}\]
From Eq. (23)
\[\alpha=\frac{\sqrt{J}}{2} \tag{25}\]
We proceed to obtain the energy eigenvalue equation using Eq. (21), which gives

\[\varepsilon=2\alpha(4n+2L+3)-\beta^{2} \tag{26}\]
Substituting Eqs. (7), (9), (24) and (25) into Eq. (26) and simplifying we obtain
\[E_{nl}=\sqrt{\frac{\hbar^{2}\alpha_{2}}{2\mu}}\left(4n+2+\sqrt{\left(2l+1\right)^{2}}\right)-\frac{2\mu\alpha_{0}^{2}}{\hbar^{2}}\left(4n+1+\sqrt{\left(2l+1\right)^{2}}\right)^{-2}+\alpha_{3} \tag{27}\]
Substituting Eq. (5) into Eq. (27) we obtain the energy eigenvalue equation for the quarkonium potential
\[\begin{array}{l}E_{nl}=\sqrt{\frac{\hbar^{2}}{2\mu}\left(\frac{am_{\mathrm{D}}^{3}(T)}{6}-\frac{am_{\mathrm{D}}(T)}{3}\right)}\left(4n+2+\sqrt{\left(2l+1\right)^{2}}\right)\\ \quad-\frac{2\mu}{\hbar^{2}}\left(a-\frac{2a}{m_{\mathrm{D}}^{2}(T)}+\frac{2b}{m_{\mathrm{D}}^{2}(T)}\right)^{2}\left(4n+1+\sqrt{\left(2l+1\right)^{2}}\right)^{-2}+\frac{2b}{m_{\mathrm{D}}(T)}-\frac{2a}{m_{\mathrm{D}}(T)}\end{array} \tag{28}\]
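The algebra behind Eqs. (27) and (28) can also be verified symbolically; the sympy sketch below (ours) substitutes Eqs. (24) and (25) into Eq. (26) and solves the definition of \(\varepsilon\) in Eq. (7) for \(E_{nl}\):

```python
import sympy as sp

n, L, mu, hbar, a0, a2, a3, E = sp.symbols(
    'n L mu hbar alpha_0 alpha_2 alpha_3 E', positive=True)

J = 2*mu*a2/hbar**2
G = 2*mu*a0/hbar**2
alpha = sp.sqrt(J)/2                       # Eq. (25)
beta = G/(4*n + 2*L + 2)                   # Eq. (24)
eps = 2*alpha*(4*n + 2*L + 3) - beta**2    # Eq. (26)

E_nl = sp.solve(sp.Eq(eps, 2*mu*(E - a3)/hbar**2), E)[0]
print(sp.simplify(E_nl))
# alpha_3 + sqrt(hbar**2*alpha_2/(2*mu))*(4*n + 2*L + 3)
#         - 2*mu*alpha_0**2/(hbar**2*(4*n + 2*L + 2)**2)
```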
## 3 Results and discussion
The mass spectra of heavy mesons such as charmonium and bottomonium are calculated using the following relation (Abu-Shady, 2016; Inyang _et al._, 2021)

\[M=2m+E_{nl} \tag{29}\]

where \(m\) is the bare quark mass and \(E_{nl}\) is the energy eigenvalue. By substituting Eq. (28) into Eq. (29) we obtain the mass spectra for the quarkonium potential as
\[\begin{array}{l}M=2m+\sqrt{\frac{\hbar^{2}}{2\mu}\left(\frac{am_{\mathrm{D}}^{3}(T)}{6}-\frac{am_{\mathrm{D}}(T)}{3}\right)}\left(4n+2+\sqrt{\left(2l+1\right)^{2}}\right)\\ \quad-\frac{2\mu}{\hbar^{2}}\left(a-\frac{2a}{m_{\mathrm{D}}^{2}(T)}+\frac{2b}{m_{\mathrm{D}}^{2}(T)}\right)^{2}\left(4n+1+\sqrt{\left(2l+1\right)^{2}}\right)^{-2}+\frac{2b}{m_{\mathrm{D}}(T)}-\frac{2a}{m_{\mathrm{D}}(T)}\end{array} \tag{30}\]
We calculate the mass spectra of charmonium and bottomonium for quantum states from 1S to 1F using Eq. (30). The free parameters of Eq. (30) were then obtained by solving two algebraic equations.
The experimental data were taken from (Tanabashi _et al._, 2018). For the bottomonium and charmonium systems we adopt the numerical values of the quark masses as \(m_{b}=4.823\,GeV\) and \(m_{c}=1.209\,GeV\), respectively (Barnett _et al._, 2012). The corresponding reduced masses are then \(\mu_{b}=2.4115\,GeV\) and \(\mu_{c}=0.6045\,GeV\). The Debye mass \(m_{D}(T)\) is taken as \(1.52\,GeV\), obtained by fitting to the experimental data. We note that the calculated mass spectra of charmonium and bottomonium are in good agreement with the experimental data, as shown in Tables 1 and 2. The values obtained also agree with the work of other researchers such as (Abu-Shady, 2016), as shown in Tables 1 and 2, in which the author investigated the N-radial SE analytically, with the Cornell potential extended to finite temperature. To test the accuracy of the predicted results, we used a chi-square function to determine the error between the experimental data and the theoretically predicted values. The maximum error in comparison with the experimental data is found to be \(0.058\,GeV\).
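For readers who wish to reproduce this kind of computation, the sketch below evaluates Eqs. (28)-(30) numerically in natural units (\(\hbar=1\)); the potential strengths `a` and `b` used here are placeholder values of ours, not the fitted parameters behind Tables 1 and 2:

```python
import numpy as np

def meson_mass(n, l, m_q, mu, a, b, mD, hbar=1.0):
    """Heavy-meson mass M = 2m + E_nl from Eq. (30), natural units."""
    root = np.sqrt((2*l + 1)**2)
    alpha2 = a*mD**3/6 - a*mD/3
    alpha0 = a - 2*a/mD**2 + 2*b/mD**2
    E = (np.sqrt(hbar**2*alpha2/(2*mu)) * (4*n + 2 + root)
         - (2*mu*alpha0**2/hbar**2) / (4*n + 1 + root)**2
         + 2*b/mD - 2*a/mD)
    return 2*m_q + E

# charmonium 1S with illustrative (not fitted) couplings a and b
print(meson_mass(n=0, l=0, m_q=1.209, mu=0.6045, a=1.0, b=0.5, mD=1.52))
```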
## Conclusion
In this study, we adopted a quarkonium potential for the quark-antiquark interaction. We obtained approximate solutions of the Schrodinger equation for the energy eigenvalues using the series expansion method. The results were applied to compute the heavy-meson masses of charmonium and bottomonium for different quantum states. The results agree with experimental data and with the work of other researchers, with a maximum error of \(0.058\,GeV\).
|
2305.03695 | Vera: A General-Purpose Plausibility Estimation Model for Commonsense
Statements | Despite the much discussed capabilities of today's language models, they are
still prone to silly and unexpected commonsense failures. We consider a
retrospective verification approach that reflects on the correctness of LM
outputs, and introduce Vera, a general-purpose model that estimates the
plausibility of declarative statements based on commonsense knowledge. Trained
on ~7M commonsense statements created from 19 QA datasets and two large-scale
knowledge bases, and with a combination of three training objectives, Vera is a
versatile model that effectively separates correct from incorrect statements
across diverse commonsense domains. When applied to solving commonsense
problems in the verification format, Vera substantially outperforms existing
models that can be repurposed for commonsense verification, and it further
exhibits generalization capabilities to unseen tasks and provides
well-calibrated outputs. We find that Vera excels at filtering LM-generated
commonsense knowledge and is useful in detecting erroneous commonsense
statements generated by models like ChatGPT in real-world settings. | Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi | 2023-05-05T17:15:32Z | http://arxiv.org/abs/2305.03695v3 | # Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements
###### Abstract
Despite the much discussed capabilities of today's language models, they are still prone to silly and unexpected commonsense failures. We consider a retrospective verification approach that reflects on the correctness of LM outputs, and introduce Vera, a general-purpose model that estimates the plausibility of declarative statements based on commonsense knowledge. Trained on \(\sim\)7M commonsense statements created from 19 QA datasets and two large-scale knowledge bases, and with a combination of three training objectives, Vera is a versatile model that effectively separates correct from incorrect statements across diverse commonsense domains. When applied to solving commonsense problems in the verification format, Vera substantially outperforms existing models that can be repurposed for commonsense verification, and it further exhibits generalization capabilities to unseen tasks and provides well-calibrated outputs. We find that Vera excels at filtering LM-generated commonsense knowledge and is useful in detecting erroneous commonsense statements generated by models like ChatGPT in real-world settings.
Figure 1: Vera estimates the correctness of declarative statements. Example adapted from a contribution made by Henry Minsky to Marcus and Davis (2023) on February 23, 2023.
## 1 Introduction
We introduce Vera, a general-purpose commonsense statement verification model. This model is designed to estimate the plausibility of declarative, natural language statements based on commonsense knowledge.
We build Vera in response to the absence of good detectors of commonsense errors in text generated by language models (LMs). LMs have been advancing rapidly and have demonstrated remarkable success in various tasks, including question answering, natural language inference, sequence classification, and text generation. Yet these models still make simple commonsense mistakes. As shown in Figure 1, as of February 23, 2023, ChatGPT [OpenAI, 2022a] reportedly output the text _"since the density of a marble is much less than the density of mercury, the marble would sink to the bottom of the bowl if placed in it"_, which is obviously flawed. This kind of failure raises concerns about the reliability and trustworthiness of these models [Lin et al., 2022].
Vera estimates a plausibility score for a commonsense statement based on its commonsense knowledge about the world. It contrasts with _fact_ verification methods [Thorne et al., 2018, Wadden et al., 2020], which verify the correctness of claims based on evidence from a text corpus. Vera enables plausibility estimation where direct evidence is often not retrievable from some corpus, and usually some implicit, fuzzy reasoning is needed. It operates solely with the commonsense knowledge stored in its model parameters, and does not have a retrieval component.
Vera is built on top of T5 [Raffel et al., 2020], a generic pretrained LM, by finetuning on a vast collection of correct and incorrect commonsense statements sourced from knowledge bases (KBs) and question answering (QA) datasets. The 21 data sources (Table 2) amount to \(\sim\)7M statements encompassing a wide spectrum of domains, including general, scientific, physical, and social commonsense, as well as quantitative (reasoning about numbers) and qualitative (reasoning about qualitative relationships such as _smaller_) commonsense. We propose a novel two-stage model training process that takes into account the scale and quality of data from different sources. In addition to the standard multiple-choice binary classification objectives, we adopt a supervised contrastive loss [Khosla et al., 2020] to magnify the distinction between similar statements with different correctness labels. Furthermore, we propose an automatic way of augmenting the training data by eliciting LMs to generate incorrect answers to commonsense questions and empirically find that it helps generalization.
We evaluate Vera in the following applications:
* **Solving commonsense problems (SS5.1).** Vera can be applied to solve multiple-choice and boolean commonsense problems when expressed in the verification format, by scoring and ranking candidate hypotheses. It substantially outperforms existing models repurposed for commonsense verification (including GPT-3.5 and ChatGPT), improving upon the best existing baseline, Flan-T5, with absolute improvement of 6% on seen benchmarks and 4% on unseen ones.
* **Filtering LM-generated commonsense knowledge (SS5.2).** Vera can filter noisy commonsense knowledge statements generated by other LMs, improving the effectiveness of LM-generated knowledge in downstream knowledge-augmented inferences. Vera is well-calibrated, enabling filtering at customized thresholds.
* **Detecting commonsense errors in ChatGPT outputs (SS5.3).** Through a preliminary analysis, we find that Vera can identify commonsense errors made by ChatGPT in-the-wild, with a precision of 91% and a recall of 74%. An example of Vera in action is shown in Figure 1.
We hope that Vera can be a useful tool for improving the commonsense correctness of existing generative LM output and inspire more effort toward general-purpose and robust verification methods. We release the model2 and a demo,3 and will later release the code and data.4
Footnote 2: [https://huggingface.co/liujch1998/vera](https://huggingface.co/liujch1998/vera)
## 2 Problem Definition and Scope
Our goal is to build a model that can estimate the plausibility of any given _commonsense statement_. The model takes as input a statement that
1. is expressed in **natural language** (e.g., _Bicycles are used for transportation_.), as opposed to structured triples (e.g., _(bicycle, UsedFor, transportation_));
2. is **declarative** (e.g., _An average dog can follow an instruction manual_.), as opposed to interrogative questions (e.g., _Can an average dog follow an instruction manual?_); it may contain multiple sentences, in which case we aim to predict the overall plausibility of the statement;
3. is **self-contained**, not requiring additional context to comprehend; most hypotheses from NLI tasks (e.g., _The man is sleeping_.) are out of scope;
4. has an objective, binary **correctness label** - similar to a logical proposition. While in practice the correctness of commonsense statements may be defeasible by additional context (Reiter, 1987; Rudinger et al., 2020), when training and evaluating our model, we make the assumption that each statement has an unambiguous label, which is consistent with most commonsense QA datasets and benchmarks;
5. in principle can be labeled using widely-held **commonsense knowledge** about the world; encyclopedic knowledge (e.g., _Ljubljana is the capital of Slovenia_.) is out of scope.
Moving forward, unless explicitly noted, we use _commonsense statement_ to refer to statements within the above scope. Though somewhat strict, this scope covers a broad range of potential applications.
For an input commonsense statement \(x\), the model should output a real-valued score \(s\in[0,1]\) that represents its estimated plausibility of \(x\). While the gold correctness label is binary, we let the model output a score to reflect its confidence. A score of 1.0 means that it is completely confident that \(x\) is correct, and a score of 0.0 means it is completely confident that \(x\) is incorrect. When predicting correctness label from the score, we use 0.5 as the threshold.
## 3 Method
In this section, we describe the whole pipeline to build Vera. We start from curating large-scale training data including both correct and incorrect statements from diverse commonsense tasks (SS3.1). Next, we learn a scoring model that takes a statement and returns a continuous score by finetuning a LM via a combination of 3 training objectives (SS3.2). An additional post hoc calibration strategy is applied to make the output scores well-calibrated (SS3.3).
### Data Construction
Labeled commonsense statements rarely appear in text in the wild, but commonsense question answering (QA) datasets and commonsense knowledge bases (KBs) are good sources for this kind of statement. We collect correct and incorrect commonsense statements from these two types of data source. Table 1 shows some examples of how these statements can be converted from QA problems and KB entries. In total, we obtain \(\sim\)7M statements (for training) from 19 QA datasets (SS3.1.1) and two KBs (SS3.1.2) that encompass a wide spectrum of commonsense domains. Table 2 lists these datasets and their statistics. All datasets we use are publicly available.
#### 3.1.1 From Commonsense QA Datasets
Numerous commonsense reasoning datasets have been published in recent years (Davis, 2023), and many of them are in the format of multiple-choice QA (selecting the correct answer out of a set of choices) or boolean (yes/no) QA. These can be easily converted to correct and incorrect commonsense statements. From multiple-choice QA problems, we combine the question and each answer choice to form declarative statements, which are correct when using the correct answer, and incorrect otherwise. From boolean QA problems, we convert the question into a declarative statement, and keep the original label as the correctness label. Concrete examples can be found in Table 1.
Statement groups.We refer to statements originating from the same problem as a _statement group_. Note that statement groups originating from multiple-choice problems contain at least two statements, of which one and only one is correct; statement groups originating from boolean problems contain only one statement, and it can be either correct or incorrect.
We do conversion to declarative statements automatically; the detailed process can be found in Appendix SSA.2. In total, 19 commonsense QA datasets contribute \(\sim\)200k statement groups and \(\sim\)400k statements to the training set of Vera.
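For illustration, a minimal Python sketch of this conversion is given below; the naive template and function are ours, and the paper's actual rule-based conversion (Appendix A.2) handles question rephrasing more carefully.

```python
# Hypothetical sketch of converting a multiple-choice QA problem into a
# statement group (cf. Table 1). The naive "question stem + choice" template
# is an illustration only; Appendix A.2 describes the real conversion rules.
def to_statement_group(question, choices, answer_idx):
    stem = question.rstrip("?").strip()
    return [
        {"statement": f"{stem} {choice}.", "label": int(i == answer_idx)}
        for i, choice in enumerate(choices)
    ]

# One correct and three incorrect statements from a single problem:
group = to_statement_group(
    "The process by which plants make food is called",
    ["photosynthesis", "respiration", "digestion", "fermentation"],
    answer_idx=0,
)
```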
LM-augmented falsehoods.Existing commonsense QA datasets are mostly manually constructed or assembled from standard school exams. A model trained on these datasets might overfit specific annotation patterns from humans, which may limit generalization. Therefore, we augment QA problems with LM-generated answers and construct additional incorrect statements. Specifically, for a multiple-choice question, we use a small LM to sample 50 possible answers to the question, and select the 3 least probable answers with generation probability less than 0.15 (making these unlikely to be correct answers). This threshold is chosen based on manual inspection of a small portion of examples. We observe that generated answers with probability larger than 0.15 are more likely to be plausible. We create LM-augmented falsehoods for the training set of 9 commonsense QA datasets, as noted in Table 2.
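The sampling-and-thresholding step could be sketched as follows; `t5-small` stands in for the unspecified "small LM", and everything except the 50-sample / 3-answer / 0.15-threshold numbers is our own scaffolding.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch of LM-augmented falsehood generation; t5-small is a stand-in model.
tok = AutoTokenizer.from_pretrained("t5-small")
lm = AutoModelForSeq2SeqLM.from_pretrained("t5-small").eval()

@torch.no_grad()
def lm_falsehoods(question: str, n: int = 50, k: int = 3, p_max: float = 0.15):
    enc = tok(question, return_tensors="pt")
    out = lm.generate(**enc, do_sample=True, num_return_sequences=n,
                      max_new_tokens=16, return_dict_in_generate=True,
                      output_scores=True)
    # per-token log-probabilities of the sampled answers
    logps = lm.compute_transition_scores(out.sequences, out.scores,
                                         normalize_logits=True)
    gen = out.sequences[:, 1:]                       # drop decoder start token
    logps = logps.masked_fill(gen == tok.pad_token_id, 0.0)
    probs = logps.sum(-1).exp()                      # generation probability
    answers = tok.batch_decode(out.sequences, skip_special_tokens=True)
    ranked = sorted(zip(answers, probs.tolist()), key=lambda ap: ap[1])
    return [a for a, p in ranked if p < p_max][:k]   # k least probable answers
```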
#### 3.1.2 From Commonsense KBs
Commonsense KBs (e.g., Atomic2020 [14] and GenericsKB [11]) contain a large number of correct commonsense statements. To create incorrect statements, we automatically perturb KB entries by replacing the subject with three random subjects that appear in the KB. Table 1 shows how to convert an entry in GenericsKB to a statement group containing four statements, three of which are augmented via perturbations. The perturbed statements are relatively easy to identify and may contain false negatives. As noted in SS3.2.4, we use these KB-constructed statements in a separate training stage that precedes training with QA-constructed statements. In total, two commonsense KBs contribute \(\sim\)1.6M statement groups and \(\sim\)6M statements to the training set of Vera.
### Model Training
#### 3.2.1 Model Architecture
Given a statement \(x\), Vera outputs a real-valued score \(s\in[0,1]\). As we will use a transformer-based LM as the backbone of Vera, we first extract the input representation \(\mathbf{h}\) by selecting the last hidden state corresponding to the EOS input token. We choose EOS because it is capable of encoding the entire input in both bidirectional encoder models (e.g., T5's encoder) and left-to-right decoder models (e.g., LLaMA). Then a linear layer projects \(\mathbf{h}\) to a scalar logit \(z\), followed by a sigmoid function \(\sigma(\cdot)\) that transforms the logit into a score \(s\). Formally,
\[\mathbf{h}=f_{\text{LM}}(x),\qquad z=f_{\text{linear}}(\mathbf{h}),\qquad s= \sigma(z).\]
For brevity, we use \(\mathbf{h}(x)\), \(z(x)\) and \(s(x)\) to refer to the representation, logit and score of an arbitrary input \(x\).
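A minimal PyTorch sketch of this scoring head is shown below; a small T5 checkpoint stands in for the 5B T5-v1.1-XXL encoder, and this is an illustration rather than the released implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5EncoderModel

# Sketch of the architecture in Section 3.2.1: h at EOS -> linear -> sigmoid.
class Verifier(nn.Module):
    def __init__(self, name="google/t5-v1_1-small"):
        super().__init__()
        self.lm = T5EncoderModel.from_pretrained(name)
        self.linear = nn.Linear(self.lm.config.d_model, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.lm(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        # h: hidden state at the EOS token (last non-pad position;
        # assumes right padding, which is the T5 tokenizer default)
        eos_pos = attention_mask.sum(dim=1) - 1
        h = hidden[torch.arange(hidden.size(0)), eos_pos]
        z = self.linear(h).squeeze(-1)   # logit z
        return torch.sigmoid(z)          # score s in [0, 1]

tok = AutoTokenizer.from_pretrained("google/t5-v1_1-small")
model = Verifier()
batch = tok(["Bicycles are used for transportation."],
            return_tensors="pt", padding=True)
score = model(**batch)
```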
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline
**Abbr.** & **Name** & **Domain** & **Format** & **\# Train Ex.** & **Aug** & **\# Dev Ex.** & **\# Statements** & **\# True** & **\# False** \\ \hline \hline \multicolumn{10}{c}{Stage A training} \\ Atomic2020 & Atomic2020 & multiple-choice (4) & 803541 & 70731 & 282924 & 70731 & 212193 \\ GenericsKB & GenericsKB & multiple-choice (4) & 775820 & 96977 & 387908 & 96977 & 290931 \\
**Total** & & **1579361** & **167708** & **670832** & **167708** & **503124** \\ \hline \hline \multicolumn{10}{c}{Stage B training (seen)} \\ \hline OBQA & OpenBookQA & scientific & multiple-choice (4) & 4957 & ✓ & 500 & 2000 & 500 & 1500 \\ ARC\_e & ARC (easy) & scientific & multiple-choice (4) & 2251 & ✓ & 570 & 2281 & 570 & 1711 \\ ARC\_h & ARC (hard) & scientific & multiple-choice (4) & 1119 & ✓ & 299 & 1194 & 299 & 895 \\ AI2Sci\_e & AI2 Science (elem) & scientific & multiple-choice (4) & 623 & ✓ & 123 & 489 & 123 & 366 \\ AI2Sci\_m & AI2 Science (middle) & scientific & multiple-choice (4) & 605 & ✓ & 125 & 502 & 125 & 377 \\ CSQA & CommonsenseQA & general & multiple-choice (5) & 9741 & ✓ & 1221 & 6099 & 1221 & 4878 \\ QASC & QASC & scientific & multiple-choice (8) & 8134 & ✓ & 926 & 7408 & 926 & 6482 \\ PIQA & Physical IQA & physical & multiple-choice (2) & 16113 & 1838 & 3676 & 1838 & 1838 \\ SIQA & Social IQA & social & multiple-choice (3) & 33410 & ✓ & 1954 & 5861 & 1954 & 3907 \\ WG & Winogrande & general & multiple-choice (2) & 40398 & 1267 & 2534 & 1267 & 1267 \\ C2S & Com2Sense (paired) & general & multiple-choice (2) & 804 & 391 & 782 & 391 & 391 \\ SciQ & SciQ & scientific & multiple-choice (4) & 11679 & ✓ & 1000 & 4000 & 1000 & 3000 \\ QuaRel & QuaRel & qualitative & multiple-choice (2) & 1941 & 278 & 556 & 278 & 278 \\ QuaRTz & QuaRTz & qualitative & multiple-choice (2) & 2696 & 384 & 768 & 384 & 384 \\ CycIC & CycIC (mc) & general & multiple-choice (5) & 6521 & 907 & 4535 & 907 & 3628 \\ ComVE & ComVE (task A) & general & multiple-choice (2) & 10000 & 997 & 1994 & 997 & 997 \\ CSQA2 & CommonsenseQA 2.0 & general & boolean & 9264 & 2541 & 2541 & 1225 & 1316 \\ SKD\_anno & SKD (annotated) & & boolean & 7980 & 1015 & 1015 & 803 & 212 \\ I2D2\_anno & I2D2 (annotated) & & boolean & 26206 & 13094 & 13094 & 6158 & 6936 \\
**Total** & & & **194442** & **29430** & **61329** & **20966** & **40363** \\ \hline \hline \multicolumn{10}{c}{Evaluation (unseen type 1)} \\ \hline WSC & WSC & general & multiple-choice (2) & 0 & 273 & 546 & 273 & 273 \\ COPA & COPA & general & multiple-choice (2) & 0 & 500 & 1000 & 500 & 500 \\ NumerSense & NumerSense & quantitative & multiple-choice (11) & 0 & 200 & 2200 & 200 & 2000 \\ PROST & PROST & physical & multiple-choice (4) & 0 & 18736 & 74944 & 18736 & 56208 \\ SpatialCS & Spatial Commonsense & physical & boolean & 0 & 1448 & 1448 & 724 & 724 \\ Rainier\_anno & Rainier (annotated) & & boolean & 0 & 591 & 591 & 424 & 167 \\
**Total** & & & **0** & **21748** & **80729** & **20857** & **59872** \\ \hline \hline \multicolumn{10}{c}{Evaluation (unseen type 2)} \\ \hline SWAG & SWAG & multiple-choice (4) & 0 & 20006 & 80024 & 20006 & 60018 \\ HellaSwag & HellaSwag & multiple-choice (4) & 0 & 10042 & 40168 & 10042 & 30126 \\ CODAH & CODAH & multiple-choice (4) & 0 & 2776 & 11104 & 2776 & 8328 \\ SCT & Story Cloze Test & multiple-choice (2) & 0 & 1871 & 3742 & 1871 & 1871 \\ \(\alpha\)NL1 & \(\alpha\)NLI & multiple-choice (2) & 0 & 1532 & 3064 & 1532 & 1532 \\ StrategyQA & StrategyQA & boolean & 0 & 229 & 107 & 122 \\ CREAK & CREAK & boolean & 0 & 1371 & 1371 & 691 & 680 \\
**Total** & & & **0** & **37827** & **139702** & **37025** & **102677** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Datasets and statistics. Data sourced from commonsense KBs are listed under Stage A training, and data sourced from commonsense QA datasets are listed under Stage B training. The number in parentheses under the **Format** column represents the number of choices per question. The **Aug** column indicates whether LM-augmented falsehoods are generated for each dataset. The last three columns are the number of total, correct and incorrect statements in the development set. See Table 6 for more dataset statistics, and Table 7 for full citations and sources for these datasets.
#### 3.2.2 Batching
The data we construct consists of statements belonging to different statement groups. For reasons we will describe in SS3.2.3, we put all statements belonging to the same statement group into the same batch. Each batch may contain multiple complete statement groups. We denote by \(B_{G}\) the number of statement groups and \(B_{S}\) the number of statements in total within a single batch. We denote the statement groups as \(\{X_{j}\}_{j=1}^{B_{G}}\), and the statements as \(\{x_{i}\}_{i=1}^{B_{S}}\). \(\{X_{j}\}_{j=1}^{B_{G}}\) is a partition of \(\{x_{i}\}_{i=1}^{B_{S}}\). \(y_{i}\in\{0,1\}\) is the correctness label of \(x_{i}\).
#### 3.2.3 Training Objectives
We use a training loss that is a linear combination of three losses: a binary classification loss, a multi-class loss, and a supervised contrastive loss, which we describe below. Formally,
\[\mathcal{L}=\alpha\mathcal{L}_{\text{bin}}+\beta\mathcal{L}_{\text{mc}}+ \gamma\mathcal{L}_{\text{ctr}}.\]
Binary classification loss.Naively, commonsense statement verification can be viewed as a binary classification task. Under this setting, the loss is
\[\mathcal{L}_{\text{bin}}(x_{i},y_{i})=-y_{i}\log s(x_{i})-(1-y_{i})\log(1-s(x_ {i})).\]
To account for the fact that there are usually more incorrect statements than correct ones in the data produced from multiple-choice datasets, we divide this loss by the number of statements with the same correctness label in the same statement group. Therefore, the binary classification loss for the whole batch is
\[\mathcal{L}_{\text{bin}}=\frac{1}{B_{G}}\sum_{j=1}^{B_{G}}\sum_{y\in\{0,1\}} \Bigg{[}\frac{1}{\sum_{c=1}^{C_{j}}\mathbb{I}[y_{jc}=y]}\sum_{c=1}^{C_{j}} \mathbb{I}[y_{jc}=y]\mathcal{L}_{\text{bin}}(x_{jc},y_{jc})\Bigg{]},\]
where \(C_{j}\) is the number of statements in statement group \(X_{j}\), \(x_{jc}\) is the \(c\)th statement in \(X_{j}\), and \(\mathbb{I}\) is the indicator function.
Multi-class loss.We expect the model to be robust against nuances in commonsense statements. Ideally, the model should be able to recognize opposite correctness labels for a group of seemingly similar statements in surface forms, such as statements created from different choices of the same question, or perturbed from the same piece of knowledge in a KB. To achieve this goal, we treat each statement group as a multi-class classification problem, maximizing the log-likelihood of the single correct statement in the statement group after passing the logits through a softmax. Formally,
\[\mathcal{L}_{\text{mc}}(X_{j})=-\log\frac{\exp z(x_{j*})}{\sum_{c=1}^{C_{j}} \exp z(x_{jc})},\]
where \(x_{j*}\) is the correct statement in \(X_{j}\). The multi-class loss for the whole batch is
\[\mathcal{L}_{\text{mc}}=\frac{1}{B_{G}}\sum_{j=1}^{B_{G}}\mathcal{L}_{\text{ mc}}(X_{j}).\]
Note that the multi-class loss is not applicable to statement groups with only one statement (i.e., statement groups from boolean QA datasets).
Supervised contrastive loss.It has been shown (Khosla et al., 2020) that supervised contrastive learning helps to improve model robustness and generalization against input variations. In light of this, we further adopt supervised contrastive learning on top of the input representations \(\mathbf{h}\). We will show in Figure 4 that the contrastive loss indeed improve generalization to unseen datasets. For each anchor statement \(x_{i}\) in a batch, the contrastive loss aims to maximize the similarity between \(x_{i}\) and each other statement \(x_{p}\) that has the same correctness label as \(x_{i}\) (i.e., positive example). At the same time, we push apart \(x_{i}\) and other statements \(x_{n}\) that has opposite correctness label as \(x_{i}\) (i.e., negative example). The supervised contrastive loss is
\[\mathcal{L}_{\text{ctr}}(x_{i},y_{i})=-\log\frac{\sum_{k\in\mathcal{P}(i)}\exp [\text{cos}(\mathbf{h}(x_{i}),\mathbf{h}(x_{k}))/\tau]}{\sum_{k\in\mathcal{P}( i)\cup\mathcal{N}(i)}\exp[\text{cos}(\mathbf{h}(x_{i}),\mathbf{h}(x_{k}))/ \tau]},\]
where \(\tau\) is a temperature hyperparameter, \(\text{cos}(\cdot,\cdot)\) refers to cosine similarity, \(\mathcal{P}(i)\subseteq[B_{S}]\) is the index set of statements that are positive examples for \(x_{i}\), and \(\mathcal{N}(i)\subseteq[B_{S}]\) is the index set of statements that are negative examples for \(x_{i}\). Formally,
\[\mathcal{P}(i) =\big{\{}k\mid 1\leq k\leq B_{S},y_{k}=y_{i},k\neq i\big{\}},\] \[\mathcal{N}(i) =\big{\{}k\mid 1\leq k\leq B_{S},y_{k}\neq y_{i}\big{\}}.\]
The supervised contrastive loss for the whole batch is
\[\mathcal{L}_{\text{ctr}}=\frac{1}{B_{S}}\sum_{i=1}^{B_{S}}\mathcal{L}_{\text{ ctr}}(x_{i},y_{i}).\]
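For concreteness, a PyTorch sketch of the combined objective is given below. It follows the equations above under simplifying assumptions: explicit loops over groups for readability, contiguous group ids, and illustrative default hyperparameter values (these are not the values used to train Vera).

```python
import torch
import torch.nn.functional as F

# Sketch of L = alpha*L_bin + beta*L_mc + gamma*L_ctr over one batch laid out
# as in Section 3.2.2: `logits` z, EOS representations `h`, labels in {0,1},
# and `group_ids` mapping each statement to its statement group.
def vera_loss(logits, h, labels, group_ids,
              alpha=1.0, beta=1.0, gamma=1.0, tau=0.05):  # illustrative values
    scores = torch.sigmoid(logits)

    # Binary classification loss, averaged per (group, label) as in L_bin.
    bce = F.binary_cross_entropy(scores, labels.float(), reduction="none")
    n_groups = group_ids.max().item() + 1
    l_bin = 0.0
    for j in range(n_groups):
        for y in (0, 1):
            mask = (group_ids == j) & (labels == y)
            if mask.any():
                l_bin = l_bin + bce[mask].mean()
    l_bin = l_bin / n_groups

    # Multi-class loss: softmax over each multiple-choice group, maximize the
    # log-likelihood of its single correct statement.
    l_mc, n_mc = 0.0, 0
    for j in range(n_groups):
        mask = group_ids == j
        if mask.sum() > 1 and labels[mask].sum() == 1:
            target = labels[mask].float()
            l_mc = l_mc - (F.log_softmax(logits[mask], dim=0) * target).sum()
            n_mc += 1
    l_mc = l_mc / max(n_mc, 1)

    # Supervised contrastive loss on cosine similarities of representations h.
    sim = F.cosine_similarity(h.unsqueeze(1), h.unsqueeze(0), dim=-1) / tau
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos, valid = same & ~eye, ~eye
    num = (sim.exp() * pos).sum(1)
    den = (sim.exp() * valid).sum(1)
    # clamp guards anchors that happen to have no positive in the batch
    l_ctr = (-(num / den.clamp_min(1e-9)).clamp_min(1e-9).log()).mean()

    return alpha * l_bin + beta * l_mc + gamma * l_ctr
```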
#### 3.2.4 Two-stage training
Since data sourced from KBs are larger in scale but more noisy than data sourced from QA datasets, we take a two-stage training approach. In training _stage A_, we start from a pre-trained LM and train with data sourced from KBs. In training _stage B_, we start from the model obtained in stage A and train with data sourced from QA datasets. During experiments we found that this setting is better than single-stage training with either data source or a mixture of the two.
### Inference and Calibration
An ideal plausibility estimation model should be calibrated, that is, its confidence in its predictions should be approximately equal to the actual frequency of correctness. During early experiments, we found that Vera tends to be overconfident. Therefore, we apply a post hoc calibration on Vera's output. Following the temperature scaling method introduced in Guo et al. (2017), during inference we divide the model-predicted logit by a temperature \(T\) before computing the score, that is,
\[\mathbf{h}=f_{\text{LM}}(x),\qquad z=f_{\text{linear}}(\mathbf{h}),\qquad \tilde{z}=z/T,\qquad s=\sigma(\tilde{z}).\]
Note that no temperature scaling is applied during model training.
With predictions on a validation set \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{|\mathcal{D}|}\), we estimate the \(T\) that gives the minimal expected calibration error (ECE) (Naeini et al., 2015) on this validation set. ECE is computed as
\[ECE =\sum_{m=1}^{M}\frac{|B_{m}|}{|\mathcal{D}|}\cdot\Big{|}\text{ Acc}(B_{m})-\text{Score}(B_{m})\Big{|}\] \[=\sum_{m=1}^{M}\frac{|B_{m}|}{|\mathcal{D}|}\cdot\Big{|}\frac{1 }{|B_{m}|}\sum_{(x_{i},y_{i})\in B_{m}}\mathbb{I}[y_{i}=1]-\frac{1}{|B_{m}|} \sum_{(x_{i},y_{i})\in B_{m}}s(x_{i})\Big{|}, \tag{1}\]
where \(M\) is the number of bins which bucket data points with similar predictions, and \(B_{m}\subseteq\mathcal{D}\) is the subset of data points that fall into the \(m\)-th bin. We use \(M=10\) equal-sized bins when computing ECE. In practice, we use the combined development sets of the seen datasets (SS4.2) to estimate \(T\), and the optimal \(T\) becomes a parameter of Vera.
Note that temperature scaling does not change the relative ordering of prediction scores, and thus the other performance metrics (e.g., accuracy) are not affected by this post hoc calibration.
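A minimal sketch of this post hoc calibration is shown below; the grid search over \(T\) and the equal-count binning are our own simplifications of the procedure described above.

```python
import numpy as np

# ECE (Eq. 1) with M = 10 equal-sized (here: equal-count) bins.
def ece(scores, y, M=10):
    order = np.argsort(scores)
    bins = np.array_split(order, M)
    n = len(scores)
    return sum(len(b) / n * abs(y[b].mean() - scores[b].mean())
               for b in bins if len(b))

# Pick the temperature T minimizing ECE on validation logits z, labels y.
def fit_temperature(z, y, grid=np.linspace(0.5, 5.0, 46)):
    return min(grid, key=lambda T: ece(1.0 / (1.0 + np.exp(-z / T)), y))

# Inference then uses s = sigmoid(z / T); the ranking of scores is unchanged.
```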
## 4 Experimental Setup
In this section, we provide more details of model training, the evaluation protocol and metrics, and describe the baseline models we benchmark.
### Training Details
Datasets.For training stage A, we use the \(\sim\)1.6M statement groups (i.e., \(\sim\)6M statements) sourced from two commonsense KBs; for training stage B, we use the \(\sim\)200k statement groups (i.e., \(\sim\)400k statements) sourced from 19 commonsense QA datasets. For each training stage, we mix the training sets of all datasets together, without any re-weighting. For memory efficiency, during training, each statement is truncated to 128 tokens (which can accommodate more than 99% of the statements; see Table 6) and each statement group is capped to four statements.
Models.We use two types of pretrained LMs as the backbone of Vera: (1) the encoder of T5 [Raffel et al., 2020], which is a bidirectional encoder model; (2) LLaMA [Touvron et al., 2023], which is a left-to-right decoder model. The T5 tokenizer tokenizes input so that it ends with the EOS token </s> (token ID = 1). We manually configured the LLaMA tokenizer so that its output ends with the EOS token </s> (token ID = 2), and does not contain the BOS token <s> (token ID = 1). For the T5 encoder, we start from the pretrained T5-v1.1-XXL5 whose encoder has about 5B parameters, and refer to the resulting model as Vera-T5. (During experiments we found that starting from Flan-T5-XXL6 performs slightly worse than starting from T5-v1.1-XXL.) For LLaMA, we start from the pretrained LLaMA-7B and refer to the resulting model as Vera-LLaMA. As we will see, Vera-T5 has better performance than Vera-LLaMA, so unless explicitly specified, when we say Vera we mean Vera-T5. Models are trained for \(S=50k\) steps with \(B_{G}=64\) statement groups per batch, using the Adam optimizer [Kingma and Ba, 2014] with learning rate \(\eta=1\times 10^{-5}\) for T5 encoder and \(\eta=2\times 10^{-6}\) for LLaMA. We train models with the Huggingface Transformers and Accelerate libraries [Wolf et al., 2019, Gugger et al., 2022]. See Table 8 for the complete hyperparameter settings.
Footnote 5: [https://huggingface.co/google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl)
Footnote 6: [https://huggingface.co/google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl)
### Evaluation Protocol and Metrics
We divide our evaluation into two parts:
1. _Seen_ benchmarks, where the training set of the benchmark is used for model training.
2. _Unseen_ benchmarks, where the training set of the benchmark is not used for model training. We further divide the unseen benchmarks into _type 1_ and _type 2_: in type 1 benchmarks the task is similar to those in the seen benchmarks, while type 2 benchmarks are further away in terms of the nature of the task. Examples of type 2 unseen benchmarks include HellaSwag, which is contextualized with event descriptions, and CREAK, which involves reasoning among different entities.
Depending on the nature of the evaluation benchmark, we use different metrics to evaluate our model's performance. Unless explicitly said otherwise, we report performance on the development set of each benchmark, where the gold labels are generally available to enable easy comparison, and we do not use the development sets of unseen datasets for model selection. The overall metric reported over multiple benchmarks is the unweighted average of the metric over all these benchmarks, which accounts for the differently-sized evaluation sets.
We report accuracy for multiple-choice and balanced boolean benchmarks. For those and also for unbalanced boolean benchmarks (e.g., LM-generated knowledge filtering datasets), we report area under the ROC curve (AUROC) and average precision (AP). To measure how well the model-predicted scores reflect confidence, we measure the ECE [Naeini et al., 2015] on the boolean benchmarks, following Equation 1.
### Baseline Models
We compare Vera with the best publicly available models that can be repurposed for commonsense statement verification. These models are described below, roughly in increasing order of performance.
SKD Critic.West et al. [2021] trained a critic model that filters incorrect commonsense knowledge generated by their symbolic knowledge distillation (SKD) method. This critic model is based on RoBERTa-large [Liu et al., 2019] and is finetuned on \(8k\) GPT-3-generated commonsense knowledge sentences with human-annotated true/false labels. The model predicts a \([0,1]\) score \(s\) which we use as the final score, and we let the logit \(z=\sigma^{-1}(s)\).
I2D2 Critic.Bhagavatula et al. [2022] trained a critic model that filters incorrect commonsense knowledge generated by their I2D2 method. This critic model is based on RoBERTa-large [Liu et al., 2019] and is finetuned on \(12k\) I2D2-generated commonsense knowledge sentences with human-annotated true/false labels. Given an input statement, the model predicts two logits: \(t\) for the True
label and \(f\) for the False label. We let the logit \(z=t-f\) and the score \(s=\sigma(t-f)\). We use the critic model trained in the final iteration (i.e., "Iter 2" in I2D27).
Footnote 7: [https://gengen.apps.allenai.org/](https://gengen.apps.allenai.org/)
**UnifiedQA-v2.** UnifiedQA-v2 [Khashabi et al., 2022] is a general-purpose QA model trained on datasets with a variety of input formats, including boolean datasets. When the input is a declarative statement, the model is trained to output either "yes" or "no". We use this feature of the model and make it act as a commonsense statement verifier. For an input statement, we compute the logits received by "yes" and "no" in the decoder, denoted as \(t\) and \(f\), respectively. We let the logit \(z=t-f\) and the score \(s=\sigma(t-f)\). We use the largest version of this model, UnifiedQA-v2-11b.8
Footnote 8: [https://huggingface.co/allenai/unifiedqa-v2-t5-11b-1251000](https://huggingface.co/allenai/unifiedqa-v2-t5-11b-1251000)
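As an illustration of this repurposing recipe, a sketch using the checkpoint from footnote 8 might look as follows; scoring only the first decoder step is our simplification, and the 11B checkpoint is large (smaller UnifiedQA-v2 variants would follow the same pattern).

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Sketch of repurposing a yes/no QA model as a verifier via z = t - f.
name = "allenai/unifiedqa-v2-t5-11b-1251000"
tok = AutoTokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name).eval()

@torch.no_grad()
def verify(statement: str) -> float:
    enc = tok(statement, return_tensors="pt")
    start = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**enc, decoder_input_ids=start).logits[0, -1]
    t = logits[tok.encode("yes", add_special_tokens=False)[0]]  # logit of "yes"
    f = logits[tok.encode("no", add_special_tokens=False)[0]]   # logit of "no"
    return torch.sigmoid(t - f).item()                          # score s
```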
**Entailer.** Entailer [Tafjord et al., 2022] is a model trained to construct proof trees for scientific commonsense hypotheses. This multi-angle model can be used in three ways: (1) given a hypothesis, generate a set of premises that may entail it; (2) given a hypothesis, predict a score that reflects the model's belief in it; (3) given a hypothesis and set of premises, predict a score that reflects whether there is a valid entailment between them. We use (2) as a commonsense statement verifier. The model predicts a \([0,1]\) score \(s\) which we use as the final score, and we let the logit \(z=\sigma^{-1}(s)\). We use the largest version of this model, Entailer-11b.9
Footnote 9: [https://huggingface.co/allenai/entailer-11b](https://huggingface.co/allenai/entailer-11b)
**Gpt-3.5.** GPT-3.5 [OpenAI, 2022b] is a series of general-purpose autoregressive decoder-only LMs. To make it act as a commonsense verifier, we use the following input prompt:
Question: Based on commonsense knowledge, is the following statement correct? Please answer yes or no. Statement: {statement}
Answer:
We query the OpenAI Completions API10 with this prompt and compute the logits received by " Yes" and " No" in the next-token prediction, denoted as \(t\) and \(f\), respectively. We let the logit \(z=t-f\) and the score \(s=\sigma(t-f)\). We experimented with several prompt formats and found the one presented above to have the best performance, and in most cases, " Yes" and " No" together receive most of the probability mass during next-token prediction. We also experimented with several models in the GPT-3 [Brown et al., 2020] and GPT-3.5 series, and found GPT-3.5 (text-davinci-002) to work the best.
Footnote 10: [https://platform.openai.com/docs/api-reference/completions](https://platform.openai.com/docs/api-reference/completions)
**ChatGPT and GPT-4.** ChatGPT [OpenAI, 2022a] and GPT-4 [OpenAI, 2023] are optimized for chat. To make them act as a commonsense verifier, we use the same input prompt as for GPT-3.5, without the "Answer:" line. We query the OpenAI Chat API11 with this prompt in a user message, and obtain the first token of the assistant message in the response. Since the API does not provide token logits, we let the score \(s=1.0\) when this token is "Yes", and \(s=0.0\) when this token is "No". In the unlikely case that this token is neither, we let \(s=0.5\). We add a small random noise to the score. This is to arbitrate potentially multiple positive predictions within statement groups from multiple-choice QA problems, and to enable plotting the ROC and precision-recall curves. Note that this is not an ideal solution and may cause under-estimation of ChatGPT and GPT-4's performance.
Footnote 11: [https://platform.openai.com/docs/api-reference/chat](https://platform.openai.com/docs/api-reference/chat)
**Flan-T5.** Flan-T5 [Chung et al., 2022] is a series of sequence-to-sequence LMs instruction-finetuned on a massive number of tasks. To make it act as a commonsense verifier, we use the same input prompt as for GPT-3.5. We compute the logits received by "yes" and "no" in the first token prediction in the decoder, denoted as \(t\) and \(f\), respectively. We let the logit \(z=t-f\) and the score \(s=\sigma(t-f)\). We experimented with several prompt formats and found the one presented above to have the best performance, and in most cases, "yes" and "no" together receive most of the probability mass during the token prediction. We use the largest version of this model, Flan-T5-XXL.12 Note that some unseen benchmarks are in the training data of Flan-T5; see Table 7 for details on data contamination.
Footnote 12: [https://huggingface.co/google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl)
## 5 Evaluation Results
In this section, we evaluate the ability of Vera to estimate the plausibility of commonsense statements and compare it with the baseline models. We show the effectiveness of Vera in three scenarios: solving commonsense problems, filtering LM-generated commonsense knowledge, and detecting commonsense errors in ChatGPT outputs.
### Solving Multiple-Choice and Boolean Commonsense Problems
The output plausibility scores from Vera can be used for solving multiple-choice and boolean commonsense problems. We first convert the problems into the statement group format (SS3.1). For multiple-choice problems, we choose the statement with the highest score in the statement group. For boolean problems, we use \(s=0.5\) as the threshold to predict correctness labels of statements.
Figure 2: Results on problem-solving with Vera on seen and unseen benchmarks. Average result on the development sets is reported. Accuracy across different parts (seen, unseen (type 1), unseen (type 2)) are not directly comparable due to different underlying benchmarks. For calibration curves, curves with saturated colors are results after applying post hoc calibration (§3.3), while curves with faded colors are results from the raw logits. See Figure 6 and Table 9, 10, 11 for full results. \(\dagger\): The performance of ChatGPT and GPT-4 may be under-estimated because we don’t have access to the raw token logits. \(\ddagger\): Flan-T5 has been trained on some unseen benchmarks we use; see Table 7 for details on data contamination.
Figure 2 reports the results when Vera is applied to solve commonsense problems. On seen benchmarks (16 multiple-choice and one boolean), Vera outperforms the best baseline, Flan-T5, by 6% on (absolute) accuracy and 9% on AUROC. Vera beats Flan-T5 by 4% accuracy and 5% AUROC on type 1 unseen benchmarks (four multiple-choice and one boolean), and by 4% accuracy and 6% AUROC on type 2 unseen benchmarks (five multiple-choice and two boolean), demonstrating good generalization. Vera-T5 has better performance than Vera-LLaMA across the board, which may be due to its bidirectional connectivity. Aside from performance, Vera also has good calibration, with ECE no higher than 3% on seen and unseen benchmarks. The post hoc calibration method improves calibration across all three parts.
Typically we may need to choose a threshold for binary classification in boolean datasets. However, we notice that a zero logit (\(z=0\)) is generally close to the optimal decision threshold between correct and incorrect commonsense statements. Therefore we do not estimate a model-specific threshold, and simply use the default threshold: \(z=0\), or equivalently, \(s=0.5\).
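Concretely, both reductions are a few lines around any scoring function; in the sketch below, `score` is a placeholder for a statement-scoring call such as Vera's, not an actual API.

```python
# Sketch of reducing commonsense problems to verification (Section 5.1).
def solve_multiple_choice(statement_group, score):
    # pick the statement with the highest plausibility score
    return max(range(len(statement_group)),
               key=lambda i: score(statement_group[i]))

def solve_boolean(statement, score):
    # default decision threshold: z = 0, i.e. s = 0.5
    return score(statement) >= 0.5
```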
### Filtering LM-generated Commonsense Knowledge
Figure 3 reports the results when Vera is applied to filter LM-generated commonsense knowledge. On the two seen benchmarks, SKD_anno and I2D2_anno, Vera is a better knowledge filter than all baseline models, in terms of both AUROC and AP. In particular, on I2D2_anno it outperforms by 2% AUROC the I2D2 critic model, which is specifically trained on the I2D2_anno dataset and does not generalize well to other benchmarks. On the unseen benchmark, Rainier_anno, Vera is also comparable with the best baselines like Flan-T5 and GPT-3.5. As for calibration, the ECE is no higher than 8% on all three benchmarks.
We find that filtering commonsense knowledge using Vera can greatly improve the performance of knowledge-augmented reasoning methods. In the Generated Knowledge Prompting framework (Liu et al., 2021), when solving a commonsense QA problem, first a knowledge model generates several commonsense knowledge statements relevant to the question, and then a QA model makes predictions based on them. A big problem that hinders the effectiveness of this framework is that model-generated knowledge is not always factual, and incorrect knowledge statements can mislead the QA model. We introduce Vera to filter these statements before passing them to the QA model. In particular, we keep those statements that receive a score higher than 0.5 from Vera.
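The filtering step itself is a one-liner given the scores; again, `vera_score` below is a stand-in for a call to the verifier rather than a real API.

```python
# Sketch of the filtering step in Generated Knowledge Prompting: only knowledge
# statements scored above 0.5 are passed on to the QA model.
def filter_knowledge(knowledge_statements, vera_score, threshold=0.5):
    return [k for k in knowledge_statements if vera_score(k) > threshold]
```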
Figure 3: Results for filtering LM-generated commonsense knowledge with Vera. Results on the development sets are reported. See Figure 7 for full results.
Following Liu et al. (2022b), we use UnifiedQA-large as the QA model, and consider two knowledge models: few-shot GPT-3 (davinci) (Brown et al., 2020) and Rainier-large (Liu et al., 2022b). We follow the evaluation settings as in Liu et al. (2022b), and for few-shot GPT-3 (davinci), we use the same task-specific few-shot prompts and the same process to generate silver knowledge as in Liu et al. (2022b). Results are shown in Table 3. Applying knowledge filtering with Vera increases the effectiveness of GPT-3's knowledge by 46% and increases the effectiveness of Rainier's knowledge by 233%. Vera can effectively supervise and improve the quality of commonsense knowledge generated by a much larger model, GPT-3 (davinci). Detailed results (Table 12) show that there is increased effectiveness in every individual benchmark.
### Preliminary Study on Detecting Commonsense Errors in ChatGPT Outputs
Vera can be useful in detecting commonsense mistakes made by generative LMs in-the-wild. We collected 27 anecdotes from the Internet where people reported ChatGPT making commonsense errors, and manually rewrote them into their correct versions, obtaining 54 statements in total.
As reported in Table 4, when detecting incorrect commonsense statements in this dataset, Vera has a precision of 91% and a recall of 74%, amounting to an \(F_{1}\) score of 82%. Table 5 shows how Vera scores some of these erroneous commonsense statements and their manually corrected versions. In 7 out of the 9 cases, Vera assigns a low score to the original, incorrect statement, and a high score to the corrected statement. For example, _"since the density of a marble is much less than the density of mercury, the marble would sink to the bottom of the bowl if placed in it"_ receives a score of 0.04 and is identified as an incorrect statement, whereas _"since the density of a marble is much less than the density of mercury, the marble would float if placed in mercury"_ receives a score of 0.96 and is identified as a correct statement. Meanwhile, there are also some failure cases. Vera believes that _"it is possible for a solar eclipse to be followed by a lunar eclipse the next day"_, and fails to reject that _"it is possible to draw a diagonal line in a triangle"_.
## 6 Analysis
### Ablations
We conduct an ablation study by incrementally removing the following components from the training process: contrastive loss (SS3.2.3), training stage A (SS3.2.4), LM-augmented falsehoods (SS3.1), multi-class loss or binary loss (SS3.2.3). Since at least one of the multi-class loss and the binary loss is needed, we remove them separately and observe the effect of training with a single loss.
Results are shown in Figure 4. Overall, the ablated components have more impact on unseen benchmarks than seen ones. Removing the contrastive loss hurts performance mostly on unseen
\begin{table}
\begin{tabular}{l c} \hline \hline
**Metric** & **Value** \\ \hline Precision (incorrect statements) & 0.91 \\ Recall (incorrect statements) & 0.74 \\ \(F_{1}\) (incorrect statements) & 0.82 \\ Classification accuracy on incorrect statements & 0.74 \\ Classification accuracy on correct statements & 0.93 \\ Classification accuracy on paired statements & 0.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results on using Vera to detect commonsense mistakes made by ChatGPT.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Generator** & **Filter** & **Acc** & **Usefulness** & \(\Delta\) \\ \hline – & – & 60.45 & – & – \\ GPT-3 (davinci) & – & 67.44 & +6.99 & – \\ GPT-3 (davinci) & Vera & **70.67** & **+10.22** & **+46\%** \\ \hline \hline \end{tabular}
\begin{tabular}{l l c c c} \hline \hline
**Generator** & **Filter** & **Acc** & **Usefulness** & \(\Delta\) \\ \hline – & – & 60.45 & – & – \\ Rainier-large & – & 61.78 & +1.33 & – \\ Rainier-large & Vera & **64.88** & **+4.43** & **+233\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of introducing Vera into the Generated Knowledge Prompting pipeline (Liu et al., 2021). The QA model is UnifiedQA-large. Average accuracy on the development set is reported; see Table 12 for detailed results.
datasets, implying that the contrastive objective is beneficial for generalization. Removing training stage A hurts performance across the board, emphasizing the importance of training with large-scale commonsense knowledge. LM-augmented falsehoods are most helpful on unseen benchmarks, with a little sacrifice in the performance on seen benchmarks. The multi-class loss is most helpful on multiple-choice benchmarks, while removing the binary loss substantially hurts performance on boolean benchmarks.
### Scaling trends of Vera
We trained variants of Vera that are based on smaller versions of the T5 encoder, and show the results in Figure 5. Model performance increases steadily with size, and does not show evidence of saturation at 5B parameters, suggesting that better commonsense plausibility estimation models might be yielded from larger pretrained LMs.
Figure 4: Ablation results. Average accuracy on the development sets is reported. Components are incrementally removed from the training process, except for the multi-class loss and the binary loss; the hierarchy is indicated in the legend.
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Date** & **Original / Corrected** & **Score** & **Pred** \\ \hline
2023/01/05 & It is possible for a solar eclipse to be followed by a lunar eclipse the next day. & 0.86 & ✓ \\
& It is impossible for a solar eclipse to be followed by a lunar eclipse the next day. & 0.48 & ✗ \\ \hline
2023/01/06 & The time it takes for a given number of cars to travel a fixed distance is directly proportional to the number of cars. & 0.26 & ✗ \\ & The time it takes for a given number of cars to travel a fixed distance is invariant of the number of cars. & 0.52 & ✓ \\ \hline
2023/01/06 & If A sits next to B and B sits next to C, then A must sit next to C. & 0.20 & ✗ \\ & If A sits next to B and B sits next to C, then A may not sit next to C. & 0.60 & ✓ \\ \hline
2023/01/10 & If two cats can eat two cans of food in a minute, then it would take six cats to eat three cans of food in a minute. & 0.05 & ✗ \\ & If two cats can eat two cans of food in a minute, then it would take three cats to eat three cans of food in a minute. & 0.67 & ✓ \\ \hline
2023/01/11 & A three-dimensional cube has eight faces. & 0.46 & ✗ \\ & A three-dimensional cube has six faces. & 0.70 & ✓ \\ \hline
2023/01/30 & It is possible to draw a diagonal line in a triangle. & 0.80 & ✓ \\ & It is impossible to draw a diagonal line in a triangle. & 0.28 & ✗ \\ \hline
2023/02/21 & 70 is a smaller number than 58. & 0.14 & ✗ \\ & 70 is a larger number than 58. & 0.85 & ✓ \\ \hline
2023/02/23 & Since the density of a marble is much less than the density of mercury, the marble would sink to the bottom of the bowl if placed in it. & 0.04 & ✗ \\ & Since the density of a marble is much less than the density of mercury, the marble would float if placed in mercury. & 0.96 & ✓ \\ \hline
2023/02/25 & Both a house and a pound of feathers weigh the same, which is one pound. & 0.25 & ✗ \\ & A house weighs more than one pound, while a pound of feathers weighs one pound. & 0.87 & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 5: Examples of commonsense mistakes made by ChatGPT, and how Vera can detect them. In each section, the first line is the original, incorrect commonsense statement in ChatGPT’s output, and the second line is the authors’ manually corrected version of the statement. Each statement is followed by Vera’s score and predicted correctness label. Examples are adapted from Venuto (2023), Marcus and Davis (2023), Borji (2023).
## 7 Related Work
Commonsense verifiers.Prior work has explored the idea of verifying commonsense statements. Symbolic Knowledge Distillation [West et al., 2021] and I2D2 [Bhagavatula et al., 2022] train models to classify the acceptability of model-generated commonsense knowledge statements. The Entailer [Tafjord et al., 2022] model is partially trained to score the validity of a given hypothesis. These models are trained on relatively small-scale, domain-specific data and do not generalize well to broader commonsense domains. Some other work uses pretrained LMs with few-shot prompting to verify commonsense statements [Kadavath et al., 2022, Jung et al., 2022]. In this work, we develop a general-purpose commonsense statement verifier that works out-of-the-box in zero-shot setting.
Verification in other tasks.Beyond commonsense statements, the problem of verification has been extensively studied on various other NLP tasks. NLI [Liu et al., 2019, 2022a, Zhang et al., 2017] can be viewed as an _entailment verification_ task. Chen et al. [2021] presents a method for _QA verification_ by transforming the context passage and question-answer pair into a premise-hypothesis format as in NLI. Some work build models to perform _reasoning verification_ - classifying whether a premise supports or refutes a hypothesis [Bostrom et al., 2022, Sprague et al., 2022, Yang et al., 2022, Tafjord et al., 2022]. On the other hand, _fact verification_[Thorne et al., 2018, Wadden et al., 2020] requires judging the validity of claims against a corpus of evidence text (e.g., Wikipedia). These tasks feature either context-sensitive or knowledge-intensive hypotheses to verify and are typically complemented with additional context. In contrast, we focus on verifying standalone commonsense statements where no context is required or provided.
Generation vs. verification.With the rapid progress in generative language models, researchers have been largely building general-purpose problem-solving methods with a generative approach [Khashabi et al., 2020, 2022, Lourie et al., 2021, Tafjord and Clark, 2021, Wei et al., 2022]. However, current generative LMs are still prone to hallucination errors and lack an intrinsic mechanism to express confidence level on their outputs. Verification, on the other hand, shows promise to complement these shortcomings and has been adopted to improve the outcome of generation [Chen et al., 2021, Jiang et al., 2022]. In this work, we take a pure verification approach and build a general-purpose verifier for commonsense statements, which to our best knowledge is the first of its kind.
## 8 Conclusion and Future Work
We introduced Vera, a general-purpose verification model for commonsense statements and an early step toward mitigating commonsense errors in text generated by large language models. Vera achieves state-of-the-art performance when solving commonsense problems in the verification format, excels at filtering LM-generated commonsense knowledge statements, and is found useful in detecting erroneous commonsense statements from generative LMs. Furthermore, the scores produced by Vera are well-calibrated and could be used for plausibility estimation of declarative statements where needed. As Vera mainly targets single-sentence statements, future work may consider verification of multi-sentence or long-form statements.
Figure 5: Scaling trends of commonsense statement verifiers.
## Limitations and Ethics Statement
Vera is designed and trained to predict the plausibility of statements based on objective commonsense knowledge of our world. It is not intended to handle text outside the scope of commonsense statements (e.g., encyclopedic facts, reading comprehension with fictional worlds). It is not trained or evaluated on moral commonsense data, so its capability of making moral predictions is unknown. It gives a prediction even if the input falls outside its intended scope, which could be mitigated by an additional scope guard to determine its applicability. In addition, it is not trained to handle very long and compositional input. Although significantly outperforming existing systems, Vera is not perfect and may make incorrect predictions. It is not very robust under syntactic variations of the input, such as paraphrases and negations. As the training data may contain bias or toxicity, Vera may also make predictions that are perceived as ethically problematic. The output of Vera does not reflect the authors' view. Vera is a research prototype, and extra care should be taken when using it to make real-world decisions.
## Acknowledgements
We thank Sean Welleck, Peter West, Alisa Liu, Jaehun Jung, Chandra Bhagavatula, Ram Pasunuru, Asli Celikyilmaz, and members of the H2lab, Xlab and ARK lab for their discussion and constructive feedback. This work was funded in part by the DARPA MCS program through NIWC Pacific (N66001-19-2-4031), NSF IIS-2044660, and ONR N00014-18-1-2826. We thank OpenAI for offering access to their API.
|
2305.12940 | GEST: the Graph of Events in Space and Time as a Common Representation
between Vision and Language | One of the essential human skills is the ability to seamlessly build an inner
representation of the world. By exploiting this representation, humans are
capable of easily finding consensus between visual, auditory and linguistic
perspectives. In this work, we set out to understand and emulate this ability
through an explicit representation for both vision and language - Graphs of
Events in Space and Time (GEST). GEST allows us to measure the similarity
between texts and videos in a semantic and fully explainable way, through graph
matching. It also allows us to generate text and videos from a common
representation that provides well-understood content. In this work we show
that the graph matching similarity metrics based on GEST outperform classical
text generation metrics and can also boost the performance of state-of-the-art,
heavily trained metrics. | Mihai Masala, Nicolae Cudlenco, Traian Rebedea, Marius Leordeanu | 2023-05-22T11:38:27Z | http://arxiv.org/abs/2305.12940v1 | # GEST: the Graph of Events in Space and Time as a Common Representation between Vision and Language
###### Abstract
One of the essential human skills is the ability to seamlessly build an inner representation of the world. By exploiting this representation, humans are capable of easily finding consensus between visual, auditory and linguistic perspectives. In this work, we set out to understand and emulate this ability through an explicit representation for both vision and language - Graphs of Events in Space and Time (GEST). GEST allows us to measure the similarity between texts and videos in a semantic and fully explainable way, through graph matching. It also allows us to generate text and videos from a common representation that provides well-understood content. In this work we show that the graph matching similarity metrics based on GEST outperform classical text generation metrics and can also boost the performance of state-of-the-art, heavily trained metrics.
## 1 Introduction
Making connections between vision and language seems easy for humans, but extremely challenging for machines, despite a large body of research on image and video captioning You et al. (2016); Aneja et al. (2018); Anderson et al. (2018); Gao et al. (2017); Zhou et al. (2018); Wang et al. (2018), visual question answering Antol et al. (2015); Lu et al. (2016); Zhong et al. (2020), image synthesis Reed et al. (2016); Dong et al. (2017); Zhou et al. (2019) or video generation Li et al. (2018); Balaji et al. (2019); Wu et al. (2022); Singer et al. (2022); Villegas et al. (2022). While major improvements were made using Transformers Vaswani et al. (2017), there is still a long way to go. Also, these tasks were widely tackled independently of each other, with no significant push for a more unified approach.
For tasks involving vision or language, information is usually processed by an encoder (e.g. Transformers, CNNs or LSTMs) that builds a numerical representation. While this approach is ubiquitous across both vision and NLP, it is fundamentally limited by its implicit, mostly unexplainable, and highly volatile nature. We strongly believe that such a representation can be replaced (or augmented) by a better, explicit, and more robust one.
In this work we introduce the Graph of Events in Space and Time (GEST) for representing visual or textual stories, as groups of events related in space and time at any level of semantics. GEST provides a common and meaningful representation, through which we can compute similarities or differences between texts or videos, and we could also generate texts or videos in an explainable, analytical way.
## 2 Related Work
**Graphs that model text:** Graphs were traditionally used in natural language processing (NLP) in many forms: syntactic trees (e.g. dependency or constituency parsing trees) Lin (1998); Culotta and Sorensen (2004), semantic trees (in the form of Combinatory Categorial Grammar) Zettlemoyer and Collins (2012), Rhetorical Structure Theory (RST) Mann and Thompson (1988) trees, Discourse Graphs Christensen et al. (2013), knowledge graphs Hao et al. (2017); Bauer et al. (2018); Wang et al. (2019) and Abstract Meaning Representation (AMR) graphs Banarescu et al. (2013). Recently, Graph Neural Networks (GNNs) Zhou et al. (2020); Wu et al. (2020) were employed to parse and encode such structures. RST trees Mann and Thompson (1988) and Discourse Graphs Christensen et al. (2013) were developed as theories of text organization, using relations between claims as the central component. Knowledge graphs, in turn, encode true facts about the world, allowing for efficient interrogation by Question Answering systems. Conversely, AMR graphs are semantic and represent links between concepts from the natural text. Crucially, two syntactically different sentences can share the
same AMR graph if they are semantically similar.
**Graphs that model videos:** Graphs were also used as a way to model videos (Sridhar et al., 2010; Aoun et al., 2011; Singh and Mohan, 2017). While previous approaches (Brendel and Todorovic, 2011; Chen and Grauman, 2016; Yuan et al., 2017; Wang and Gupta, 2018; Cherian et al., 2022) consider the nodes in the graph as video regions, we opt for a fundamentally different approach, modeling events as graph nodes.
Aditya et al. (2018) define Scene Description Graphs (SDGs), a graph-based intermediate representation built specifically for describing images. SDGs are based on objects, actions, and semantic (based on KM-Ontology (Clark et al., 2004)), ontological and spatial relations. With GEST we explicitly add the temporal aspect, as we are interested in representing videos instead of images. Furthermore, our formulation is uniform (everything is an event), leads to a more compact representation, allows for more complex (e.g. semantic, logical) relations between events, while also being capable of representing such events at different scales (see Figure 2).
**Text generation metrics:** Text generation metrics were studied in the field of NLP for comparing two or more texts or documents (Sai et al., 2022). Common metrics include BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004) and SPICE (Anderson et al., 2016). While BLEU and ROUGE compute the similarity as the exact n-gram overlap, METEOR uses a more relaxed matching criterion, based on stemmed matching, followed by synonymy and paraphrase matching. SPICE builds a semantic scene-graph that is used to extract information about objects, attributes and their relationships. More recently, BERT (Devlin et al., 2019) was integrated into text metrics. BERTScore (Zhang et al., 2019) uses a BERT backbone to obtain embeddings for each token in the hypothesis and reference, which are matched using a greedy approach. The state-of-the-art BLEURT (Sellam et al., 2020) is pre-trained on a large number of synthetic samples, then finetuned on WMT (Bojar et al., 2017) and WebNLG (Gardent et al., 2017). Synthetic data is generated by altering Wikipedia sentences via back-translation, random word dropping or mask-filling with BERT. Most pretraining signals are employed in the form of BLEU, ROUGE and BERTScore, back-translation likelihood or textual entailment.
All mentioned text generation metrics employ clear rules, but they lack explainability, due to the space in which computations are performed. The n-gram space of BLEU, METEOR or ROUGE is simple, but totally counter-intuitive for humans. In the case of BERTScore and BLEURT the projected space is even more blurry and void of any intuitive understanding. Instead of projecting texts into an n-gram or Transformer space, we propose a new representation space, namely the space of events in space-time. Comparing events and their relations expressed in two texts is much more natural. The fact that the GEST space is explicit and grounded in the real world is the very reason for which we obtain explainability and interpretability.
## 3 Graph of Events in Space and Time
Fundamentally, a GEST is a means of representing stories. We focus on modeling stories as they are the main way of expressing ideas, sentiments, facts, perceptions, real-world or fantasy happenings. Stories are an essential component in theater, in cinema in the form of storyboards, and are also an integral part in relating, communicating and teaching historical events. Stories are universal: a life is a story, a dream is a story, a single event is a story. Atomic events create intricate stories in the same way that small parts form an object in a picture, or how words form a sentence. Therefore, in modeling stories, we distinguish interactions in space and time as the central component. In general, changes in space and time lead to the notion of events and interactions. Similarly to how changes in an image (image gradients) might represent edges, space-time changes (at different levels of abstraction) represent events. Accordingly, events in space and time could be detectable, repeatable and discriminative. Interactions between events in space and time change the current state of the world, can trigger or cause other events and in turn cause other changes. Therefore, we use these events and their interactions in space and time as the fundamental component of GEST. Fundamentally, an edge connects two events in space and time. This connection can be, but is not limited to, temporal (e.g. after, meanwhile), logical (e.g. and, or) or spatial (e.g. on top of). Since a node in GEST can also represent physical objects (e.g., "The house exists for this period of time"), the graph connections can represent any potential relation between two objects or two events: the event "house" was involved in the event "holding a meeting at that house". Therefore, an edge can also represent an event by itself. For each event we encode mainly the type of action, the involved entities, the location and the timeframe in which an event takes place. Crucially, in GEST both explicit (e.g. actions) and implicit (e.g. existence) events are represented using the same rules. A GEST example can be found in Fig. 2, while more examples are in Appendix, Sec. A.

Figure 1: Functional overview of the proposed framework, centered around GEST. GEST represents the central component, allowing for seamless transitions between different forms. For example, the transition from text to video is done via steps A and C, while the transformation from video to text can be done via steps D and B. In this work we focus on modules A and B.
GEST can represent events at different scales, in greater detail by expanding an event node into another graph, or in lesser detail by abstracting a graph into a single event node. In Fig. 2 we exemplify the power of such an approach. On the left of Fig. 2 we show the GEST associated with the following story: "John says that Daniel bought a watch". In the right half we expand the event "Daniel bought a watch" into a more detailed story (GEST). All other event nodes can be expanded into their own GEST stories (e.g. the paying action can be further expanded by detailing the procedure: currency, amount, method and so on). In principle, any GEST could become an event in a higher-level GEST and, vice-versa, any event could be expanded into a more detailed GEST.
GEST represents concisely what happens in the real world. So, when vision and language represent the same world, they could also be represented by the same GEST. GEST is suitable for many tasks, including video-to-text or text-to-video generation. GEST is an alternative to the standard way of solving these tasks. Instead of generating natural language descriptions directly from an obfuscated and implicit representation given by a video encoder, GEST breaks video captioning into two problems: generate GEST from video, followed by generating text from GEST. Conversely, generating a video starting from a text prompt can be split into building GEST from text, followed by independently creating the video (Fig. 1). In this paper we demonstrate both directions and the advantages of the approach. We also argue that the main advantage of the highly explicit GEST representation is to give total knowledge and control over the content of the text or video. Additional details and a formal definition of GEST are given in Appendix, Sec. B.
Figure 2: GEST that illustrates the concept of multiple viewpoints and graph-node equivalence. Note that for brevity, we omit some details in the nodes (e.g. timeframe) and also add details to emphasize some points (e.g. the same entity edges).
**Building ground truth GEST from text:** Ground truth GEST from text is needed for training and evaluation. We note that building a GEST representation from text is not a trivial task, and we aim to automate this process. Nevertheless, to obtain correct GESTs from text, human intervention is still needed. From each sentence, we want to extract information such as the type of actions, the entities involved, locations and the times of actions, as well as their relations. All this is extracted by parsing the dependency tree (automatically extracted1) of each individual sentence using a set of handcrafted rules (followed, if needed, by human correction). Context (e.g. location inference) and event ordering are also injected into the graph to obtain the complete GEST of a story.
Footnote 1: [https://spacy.io/models/en#en_core_web_lg](https://spacy.io/models/en#en_core_web_lg) last accessed on 19th of January 2023
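As a rough illustration of this rule-based step, the sketch below pulls per-verb event tuples from a spaCy dependency parse. It is a minimal stand-in, assuming standard English dependency labels; the paper's actual rules, context injection, event ordering and human-correction loop are considerably richer, and the function name here is ours.

```python
import spacy

nlp = spacy.load("en_core_web_lg")  # same parser family referenced above

def extract_events(sentence: str):
    """Extract rough (action, subjects, objects, locations) event tuples.

    Simplified stand-in for the handcrafted rules used to build GEST:
    one event per verb, with arguments read off the dependency tree.
    """
    doc = nlp(sentence)
    events = []
    for token in doc:
        if token.pos_ != "VERB":
            continue
        subjects = [w.text for w in token.children
                    if w.dep_ in ("nsubj", "nsubjpass")]
        objects_ = [w.text for w in token.children
                    if w.dep_ in ("dobj", "obj", "attr")]
        # locations often surface as prepositional objects ("to the kitchen")
        locations = [w.text for p in token.children if p.dep_ == "prep"
                     for w in p.children if w.dep_ == "pobj"]
        events.append({"action": token.lemma_, "subjects": subjects,
                       "objects": objects_, "locations": locations})
    return events

print(extract_events("Daniel went to the kitchen and bought a watch."))
```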
### bAbI corpus
The bAbI corpus (Weston et al., 2015) introduces a set of 20 question answering tasks that are designed to act as proxy tasks for reading comprehension. As the grammar of bAbI is rather simple, we devised a set of handcrafted rules to automatically parse the dependency tree of each sentence in order to extract the relevant information. For bAbI, the text-to-graph automatic module works flawlessly, always detecting and extracting the correct information from each sentence. In this work we focus on bAbI tasks numbered 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14. We leave the other tasks for future work, as they are devised with other goals in mind (e.g. tasks numbered 16 and 18 are devised for basic induction or size reasoning). This leads to a total of 26,364 graphs, with 21,588 train, 2,388 validation and 2,388 test graphs.
### Videos-to-Paragraphs dataset
The Videos-to-Paragraphs dataset (Bogolin et al., 2020) contains videos with two stages of text representations. The 1st stage contains simple sentences that describe simple actions, while the 2nd stage contains semantically rich descriptions. This duality is especially suited for GEST, as the 1st stage is simple enough that we can immediately extract events as simple actions. The 2nd stage is semantically richer and more compact. This represents a crucial step-up from the bAbI corpus, where only the simpler linguistic stage is present. Following (Bogolin et al., 2020), we will refer to sentences from the 1st stage as SVOs (Subject, Verb, Object) and to the 2nd stage texts as stories. In Videos-to-Paragraphs we identify three types of temporal relations between events (SVOs): "next", "same time" and "meanwhile", using soft margins to extract them. Using both 1st and 2nd stage text annotations, we build (with minimal manual intervention) ground truth GEST representations for the entire dataset, a total of 1048 samples (with an 85-5-10 train, dev, test split) consisting of GESTs and the two stages of text descriptions.
## 4 GEST as a metric for comparing stories
We first want to study and evaluate the power of GEST to capture content from stories in natural language. Ideally, different texts that illustrate the same underlying story should have the same GEST. We evaluate this property by first defining a similarity metric between two GESTs and compare its performance (in separating texts that represent the same story vs. different stories) to other metrics from the literature that work directly on the original text in natural language.
### Graph matching similarity metric
Comparing two GEST representations, being graphs, is naturally suited for a graph matching formulation. We test two graph matching methods: a classical approach, Spectral Matching (SM) (Leordeanu and Hebert, 2005), and a modern deep learning based approach, Neural Graph Matching (NGM) (Wang et al., 2021).
| **Method** | **Corr** | **Acc** | **F** | **AUC** |
| --- | --- | --- | --- | --- |
| BLEU@4 | 24.45 | 75.52 | 0.2816 | 52.65 |
| METEOR | 58.48 | 84.23 | 1.1209 | 73.90 |
| ROUGE | 51.11 | 83.40 | 0.7164 | 68.92 |
| SPICE | 59.42 | 84.65 | 1.0374 | 74.43 |
| BS | 57.39 | 85.89 | 1.0660 | _77.93_ |
| G SM | _61.70_ | 84.65 | _1.2009_ | 75.47 |
| G NGM | 60.93 | _86.31_ | 0.9770 | 76.75 |
| BLEURT | **70.93** | **90.04** | **2.0280** | **88.02** |

Table 1: Results comparing GEST representation power with common text generation metrics applied on stories from the Videos-to-Paragraphs test set. Both text generation metrics and the graph similarity function are applied on the ground truth (stories and graphs). We show in **bold** the best value for each metric and in _italics_ the 2nd best. BS stands for BERTScore, G for GEST, Corr for correlation, Acc for Accuracy, F for Fisher score and AUC for the area under the precision-recall curve. For brevity, all values (except F) are scaled by 100.
SM is a fast, robust and accurate method that uses the principal eigenvector of an affinity matrix2, while NGM employs multiple neural networks that learn and transform the affinity matrix into the association graph, which is further embedded and used as input for a vertex classifier.
Footnote 2: more details on building the matrix in Appendix, Sec. D.2
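For concreteness, here is a minimal sketch of the SM step, assuming the affinity matrix over candidate node assignments has already been built as in Appendix, Sec. D.2; the power iteration plus greedy discretization below follow the standard SM recipe, and the helper name is ours.

```python
import numpy as np

def spectral_matching(M: np.ndarray, n1: int, n2: int, iters: int = 100):
    """Spectral Matching over an (n1*n2) x (n1*n2) affinity matrix M.

    Entry M[ia, jb] scores how compatible the candidate assignments
    (node i -> node a) and (node j -> node b) are.
    """
    # principal eigenvector via power iteration (M is nonnegative)
    x = np.ones(n1 * n2)
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    # greedy discretization under one-to-one matching constraints
    x = x.reshape(n1, n2).copy()
    matches, score = [], 0.0
    while x.max() > 0:
        i, a = np.unravel_index(np.argmax(x), x.shape)
        matches.append((i, a))
        score += float(x[i, a])
        x[i, :] = 0.0  # node i is now taken
        x[:, a] = 0.0  # node a is now taken
    return matches, score
```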
### Results and Discussion
Results in Tab. 1 attest to the power of our proposed representation: graph matching in the GEST space outperforms all classic text generation metrics (i.e. BLEU@4, METEOR and ROUGE) and even modern metrics based on pre-trained Transformers such as BERTScore. Nevertheless, the specifically and heavily trained BLEURT metric outperforms all considered metrics on this dataset. Note that the other metrics all lack access to the sheer amount of data that the BLEURT metric was trained on (around 1.8 million samples). We reckon that given such data, a trained GRAPH-BLEURT metric could outperform the original BLEURT.
The initial tests show the representational power of GEST, but they do not yet test the capability of this representation to be combined with a heavily trained one. That would be another, complementary way to prove the effectiveness of GEST. We test this capability by showing that GEST can boost a state-of-the-art, strongly trained metric, even when we combine the two in the simplest, linear way. Starting from the original text of the story, we learn to transform the story automatically into GEST, and then obtain a GEST similarity score between stories by comparing, using graph matching, the corresponding generated GESTs. A second, BLEURT score between the stories is obtained as before. We then learn, on the training set, how to linearly combine the two scores to best separate the texts of the same story vs. texts of different stories. We apply the same procedure to all classic metrics, in order to evaluate the benefit brought by GEST relative to other methods. We learn to transform a story in natural text into a graph by using a sequence-to-sequence framework, with the story as input and the serialized graph as output. For further details on the training process see Appendix, Sec. D.2.
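A minimal sketch of this learned combination, assuming per-pair BLEURT and GEST graph-matching scores plus binary same-story labels are available on the training set; the paper only specifies a linear combination, so the concrete regressor below is our choice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_linear_combiner(bleurt_scores, gest_scores, labels):
    """Fit a linear combination of the two similarity scores.

    labels: 1 if the two texts describe the same story/video, else 0.
    """
    X = np.column_stack([bleurt_scores, gest_scores])
    return LinearRegression().fit(X, labels)

def fused_score(model, bleurt_score, gest_score):
    # higher fused score -> more likely the two texts share a story
    return model.predict(np.array([[bleurt_score, gest_score]]))[0]
```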
In Tab. 2 we show the results of BLEURT (top), those of other metrics combined with BLEURT using the same linear regression approach (middle) and the results of GEST (bottom), using the two graph matching methods (SM and NGM). It is important to note that in combination with other metrics BLEURT does not always improve, but when combined with GEST it always improves, and by the largest margin. In the Appendix, Sec. G, we show cases where BLEURT fails to predict that two different textual descriptions stem from the same video. In the first case this is due to the different writing styles of the two annotators, while in the second case BLEURT assigns a high similarity score in spite of the fact that different actors perform somewhat similar actions. In both cases, the graph matching algorithm manages to correctly predict whether the two pairs depict the same video. These tests prove the power of GEST: its new space and associated graph matching metric can be effectively used, with minimal training cost, to boost the performance of the existing state-of-the-art.
## 5 GEST for text generation
GEST describes the world in terms of events and how they relate in space and time, and could provide a common ground between the real space and time and "what we say" about it in natural language. Atomic events in a linguistic story (e.g. SVOs) are also well-formed events in real space and time, thus they provide a direct link between both worlds. Relations between events then define the space-time structure at the semantic level, inevitably becoming a central component in natural language generation. In the following set of experiments we want to better understand and evaluate the importance of these relations, which are an essential component of GEST. We evaluate the importance of these connections between events by comparing language that is generated from events only (task S2T - SVOs-to-Text) to language that is generated from events and their relations, that is, full GESTs (G2T - GEST-to-Text) - in both cases using the same sequence-to-sequence network.

| **Method** | **Corr** | **Acc** | **F** | **AUC** |
| --- | --- | --- | --- | --- |
| BLEURT | 70.93 | 90.04 | 2.0280 | 88.02 |
| +BLEU@4 | 70.93 | 90.04 | 2.0274 | 88.04 |
| +METEOR | 71.20 | 89.63 | 2.0659 | 87.62 |
| +ROUGE | 70.76 | 90.04 | 1.9973 | 87.71 |
| +SPICE | 71.94 | 88.80 | 2.0808 | 87.71 |
| +BS | 71.11 | 89.63 | 2.0089 | 87.25 |
| +G SM | **72.89** | **90.87** | **2.2086** | **89.80** |
| +G NGM | 71.91 | 90.46 | 2.0537 | 88.58 |

Table 2: Results comparing the power of BLEURT coupled with common text generation metrics and GEST, applied on stories from the Videos-to-Paragraphs test set. Text generation metrics are computed on the ground truth stories, while the GEST similarity (G) with graph matching is computed on GEST learned from stories. Notations are the same as in Tab. 1.
We perform the tests on the Videos-to-Paragraphs dataset, where the relations between events are mainly temporal in nature. Thus, to better highlight the differences between the textual SVOs and GEST representations, we decide to break the implicit temporal relations given by SVO ordering by randomizing (with the same seed) both representations. In the case of SVOs, the order is randomized, while for the graphs the order of the edges in the representation is randomized (based on the SVO permutation). In this setup we can clearly evaluate the impact of the temporal information encoded in the graph structure.
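The shared-seed randomization amounts to applying one permutation to both representations; a minimal sketch, where `svos` and `graph_edges` are placeholder parallel lists, and the exact randomization used in the paper may differ in detail.

```python
import random

def shuffle_aligned(svos, graph_edges, seed: int = 42):
    """Apply the same permutation to the SVO list and the graph's edge list.

    Breaks the implicit temporal order of the SVO sequence while keeping
    both representations in correspondence; the explicit temporal relations
    stored inside the graph are untouched.
    """
    perm = list(range(len(svos)))
    random.Random(seed).shuffle(perm)
    return [svos[i] for i in perm], [graph_edges[i] for i in perm]
```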
### Results and Discussion
Results in Tab. 3 validate that GEST is suited for text generation and provides a better representation than plain textual descriptions of the atomic events. Conceptually, the graph representation should always be better, as it explicitly encodes complex temporal relations that are not present in SVOs. Nevertheless, this does not directly guarantee better off-the-shelf performance for text generation, as the available training data in our tests is very limited. Our tests show that this limitation is overcome by the power of the representation. In the first two rows of Tab. 3, both SVOs-to-text (S2T) and graph-to-text (G2T) models are trained starting from a general pre-trained encoder-decoder model with no previous knowledge of our proposed representation. Even in this very limited setup (under 900 training samples) the graph representation proves to be superior. Adding more pretraining data, using the bAbI corpus, only widens the performance gap between the two approaches (last section of Tab. 3). In the case of bAbI we only have access to a single textual representation for each graph, which is akin to the SVOs in the Videos-to-Paragraphs dataset. For this reason, the S2T task on bAbI can be simply solved by using the identity function, while the G2T task can be solved by describing each node. However, they provide valuable additional pretraining data, especially for G2T, as it helps the model to better understand and order events in time. The ability to understand and order events in time enables a better transition from simple sentences to longer, more complex natural language.
## 6 Conclusions
In this paper we introduce GEST, which could set the groundwork for a novel and universal representation of events and stories. We discuss and motivate its necessity and versatility, while also empirically validating its practical value in comparing and generating stories. Even with very limited data, our experiments show that GEST is more than fit for recreating the underlying story, within a space that allows for very reliable and human-correlated comparisons. This explicit and structured nature of the GEST space lends itself beautifully to various other uses (e.g. video generation).
GEST aims to bring together vision and language, as a central component in an explainable space. Such explicit models are largely missing in the literature, but as we believe that our work demonstrates, they could be useful to better understand language and also control its relation to the visual world.
| **Method** | **B@1\(\uparrow\)** | **B@2\(\uparrow\)** | **B@3\(\uparrow\)** | **B@4\(\uparrow\)** | **M\(\uparrow\)** | **R\(\uparrow\)** | **C\(\uparrow\)** | **BS\(\uparrow\)** | **BT\(\uparrow\)** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S2T | 43.81 | 30.95 | 22.87 | 16.90 | 20.15 | 38.87 | 78.60 | 38.87 | 58.28 |
| G2T | 46.73 | 32.90 | 24.23 | 18.15 | 20.88 | 39.57 | 87.29 | **41.24** | 58.73 |
| S2T\({}^{2}\) | 42.39 | 30.37 | 22.73 | 17.21 | 20.32 | 39.64 | **96.10** | 40.63 | 57.57 |
| G2T\({}^{2}\) | **52.34** | **36.92** | **27.11** | **19.91** | **23.18** | **41.49** | 94.59 | 40.42 | **59.42** |

Table 3: Results for the task of text generation on the test set of the Videos-to-Paragraphs dataset, presented using common text generation metrics: BLEU@N (B@N), METEOR (M), ROUGE (R), CIDEr (C), BERTScore (BS) and BLEURT (BT). In the S2T (SVOs to text) experiments we trained models that take as input the SVO sequence, while in the G2T (Graph to text) experiments we give the serialized graph as input. \({}^{2}\) marks experiments in which we use an additional training stage with data from the bAbI corpus. We highlight with **bold** the best value for each metric. For brevity, all values are scaled by a factor of 100.
## 7 Limitations
Maybe the most important limitation of our work is, for the moment, data availability. This lack of quality data affects both learning tasks (i.e. graph-to-text and text-to-graph), as access to more graph-text pairs would greatly improve performance. We found that this is especially relevant for the text-to-graph task, as we conjecture that it represents the harder task. Because we use models that are pre-trained with natural language as input and output, the graph representation has to be learned and understood by the encoder (for the graph-to-text task) and the decoder (for the text-to-graph task). Especially with a limited number of samples, we believe understanding the new graph representation is easier than generating it. Moreover, for the text-to-graph task we ask the decoder to generate a very structured output, defined by a precise grammar.
Our experiments highlighted the power of GEST when applied on real-world events with temporal relations between events. Crucially, this represents only a small subset of what GEST can model. Due to lack of data, we are unable in this work to show the full potential of GEST, namely to represent more abstract events. For example, a revolution is still an event, but at the same time a complex story comprised of multiple events.
|
2308.05218 | Conformer-based Target-Speaker Automatic Speech Recognition for
Single-Channel Audio | We propose CONF-TSASR, a non-autoregressive end-to-end time-frequency domain
architecture for single-channel target-speaker automatic speech recognition
(TS-ASR). The model consists of a TitaNet based speaker embedding module, a
Conformer based masking as well as ASR modules. These modules are jointly
optimized to transcribe a target-speaker, while ignoring speech from other
speakers. For training we use Connectionist Temporal Classification (CTC) loss
and introduce a scale-invariant spectrogram reconstruction loss to encourage
the model better separate the target-speaker's spectrogram from mixture. We
obtain state-of-the-art target-speaker word error rate (TS-WER) on
WSJ0-2mix-extr (4.2%). Further, we report for the first time TS-WER on
WSJ0-3mix-extr (12.4%), LibriSpeech2Mix (4.2%) and LibriSpeech3Mix (7.6%)
datasets, establishing new benchmarks for TS-ASR. The proposed model will be
open-sourced through NVIDIA NeMo toolkit. | Yang Zhang, Krishna C. Puvvada, Vitaly Lavrukhin, Boris Ginsburg | 2023-08-09T20:51:54Z | http://arxiv.org/abs/2308.05218v1 | # Conformer-based Target-Speaker Automatic Speech Recognition for Single-Channel Audio
###### Abstract
We propose CONF-TSASR, a non-autoregressive end-to-end time-frequency domain architecture for single-channel target-speaker automatic speech recognition (TS-ASR). The model consists of a TitaNet based speaker embedding module, a Conformer based masking as well as ASR modules. These modules are jointly optimized to transcribe a target-speaker, while ignoring speech from other speakers. For training we use Connectionist Temporal Classification (CTC) loss and introduce a scale-invariant spectrogram reconstruction loss to encourage the model better separate the target-speaker's spectrogram from mixture. We obtain state-of-the-art target-speaker word error rate (TS-WER) on WSJ0-2mix-extr (4.2%). Further, we report for the first time TS-WER on WSJ0-3mix-extr (12.4%), LibriSpeech2Mix (4.2%) and LibriSpeech3Mix (7.6%) datasets, establishing new benchmarks for TS-ASR. The proposed model will be open-sourced through NVIDIA NeMo toolkit.
Yang Zhang\({}^{*}\), Krishna C. Puvvada\({}^{*}\), Vitaly Lavrukhin, Boris Ginsburg

NVIDIA, USA

Index terms: Target-speaker ASR, Conformer, multi-speaker ASR, source separation
## 1 Introduction
Target-speaker automatic speech recognition (TS-ASR) is the task to transcribe a specific speaker's speech in an overlapping multi-speaker environment given the speaker's profile - an auxiliary utterance (Fig. 1). Along with blind source separation (BSS) and multi-speaker ASR, TS-ASR constitutes a class of approaches for overlapped speech recognition.
BSS methods separate individual components from a speech mixture in time-domain [1, 2] which are passed on to a single-speaker ASR model for transcription as a second step. As the separation step of BSS is not optimized for ASR, this can be sub-optimal. Multi-speaker ASR approaches [3, 4, 5, 6] and their speaker-attributed variants (SA-ASR) [7, 8] generate transcripts as output and are optimized end-to-end for ASR. A characteristic of BSS models and their analogous multi-speaker ASR models is their multiple output branches, one per source. SA-ASR requires profiles of all speakers in a mixed utterance as auxiliary information.
These seemingly similar approaches for overlapping speech recognition come with their own set of pros and cons. While BSS and their analogous multi-speaker ASR approaches do not require any auxiliary information, their major limitations include a predefined number of output streams, permutation invariant training (PIT) [9] and speaker-tracing [10] for long audio inference. Further, having a different number of speakers during training and inference can greatly reduce their performance. In the case of multi-speaker ASR, serialized output training (SOT) can overcome some of these limitations, but leaves much to be desired in terms of performance [5]. In conjunction with SOT, SA-ASR uses speaker profile information to improve performance and not be limited by a fixed number of outputs. However, it assumes the availability of profiles for _all_ speakers in a mixed utterance. Nonetheless, it is well suited for transcribing meeting-like scenarios. TS-ASR [10, 11], on the other hand, requires only the one speaker profile of interest. It is apt for situations that require transcribing one target-speaker while ignoring interfering speakers. By design, TS-ASR does not suffer from permutation ambiguity or speaker-tracing. However, it requires one inference per speaker if used to transcribe multiple speakers.
In this paper, we propose Conformer-based TS-ASR model (CONF-TSASR) to address single-channel target-speaker ASR. Our approach adopts the SpeakerBeam [10] architecture and makes the following contributions:
* Improve SpeakerBeam using TitaNet [12] and Conformer [13] modules trained in an end-to-end fashion with CTC and novel spectrogram reconstruction loss.
* Achieve state-of-the-art target-speaker WER (TS-WER) on WSJ0-2mix-extr2 and report results on WSJ0-3mix-extr and LibriSpeechMix [14] for the first time.
* Study the effects of target-speaker SNR and length of auxiliary utterance on model performance.

Footnote 2: [https://github.com/NVIDIA/NeMo](https://github.com/NVIDIA/NeMo)

Figure 1: Target-speaker ASR transcribes a specific speaker’s part in a mixed utterance based on their clean speech sample (auxiliary utterance).
## 2 Conformer-based TS-ASR architecture
The proposed single-channel CONF-TSASR model consists of three modules - TitaNet, MaskNet and an ASR module (Fig. 2). It takes two inputs - a mixed utterance and a clean auxiliary utterance from the target-speaker - and transcribes only the target-speaker's speech from the mixed utterance. The auxiliary utterance is encoded into a 192-dim speaker embedding by TitaNet [12] - a speaker embedding extractor model based on ContextNet [15]. From the mixed utterance, 80-dim log-Mel features are extracted every 10 msec over a window of 25 msec. These are further perturbed with SpecAugment [16] and sub-sampled by 4x using two convolutional layers. MaskNet takes the sub-sampled features (\(S_{mix}\)) and the speaker embedding to produce a time-frequency mask which is multiplied with \(S_{mix}\) to estimate the target-speaker's time-frequency features (\(\hat{S_{t}}\)). Finally, a Conformer [13] ASR module is used to transcribe the target-speaker's speech using \(\hat{S_{t}}\). The entire model is optimized using CTC [17] loss and a spectrogram reconstruction loss. The latter computes scale-invariant SiSNR [18] between an upsampled \(\hat{S_{t}}\) - the estimated spectrogram - and the true spectrogram \(S_{t}\). The spectrogram reconstruction loss is reserved for training, where the individual sources are available.
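A sketch of the scale-invariant reconstruction term, following the SI-SNR definition of [18] applied to spectrograms; flattening each spectrogram into a single zero-mean vector, and assuming \(\hat{S_{t}}\) has already been upsampled to the resolution of \(S_{t}\), are our simplifications, not the paper's exact implementation.

```python
import torch

def si_snr_spectrogram_loss(s_hat: torch.Tensor, s: torch.Tensor,
                            eps: float = 1e-8) -> torch.Tensor:
    """Negative scale-invariant SNR between estimated and true spectrograms.

    s_hat, s: (batch, freq, time) spectrograms, with s_hat assumed to be
    upsampled to the resolution of s.
    """
    s_hat = s_hat.flatten(1)
    s = s.flatten(1)
    s_hat = s_hat - s_hat.mean(dim=1, keepdim=True)
    s = s - s.mean(dim=1, keepdim=True)
    # scale-invariant projection of the estimate onto the target
    s_target = (s_hat * s).sum(1, keepdim=True) * s \
        / (s.pow(2).sum(1, keepdim=True) + eps)
    e_noise = s_hat - s_target
    si_snr = 10.0 * torch.log10(
        s_target.pow(2).sum(1) / (e_noise.pow(2).sum(1) + eps) + eps)
    return -si_snr.mean()  # minimized jointly with the CTC loss
```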
In our experiments, both MaskNet and ASR module consist of 18 Conformer layers, each with a hidden dimension of 256 and feed-forward dimension of 1024. Multi-head attention consists of 4 heads and the kernel size of the convolutional module is 31. The speaker embedding is linearly projected to match MaskNet's hidden dimension of 256 and added to the input of every Conformer block. Both ASR module and TitaNet are initialized with pre-trained weights available in NVIDIA NeMo toolkit1. For TitaNet, we freeze its ContextNet encoder and only train the decoder. The CONF-TSASR model has 66.1M trainable parameters and 85.4M parameters in total including the frozen TitaNet encoder.
Footnote 1: [https://github.com/NVIDIA/NeMo](https://github.com/NVIDIA/NeMo)
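A sketch of how the speaker conditioning described above could look in code; `blocks` stands in for the 18 NeMo Conformer layers, and the sigmoid mask head is our assumption (the text only states that a time-frequency mask is produced and multiplied with \(S_{mix}\)).

```python
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    """Speaker-conditioned masking network (sketch).

    Each block is assumed to map (B, T, 256) -> (B, T, 256), as in the
    18-layer Conformer stack with hidden size 256 described above.
    """
    def __init__(self, blocks: nn.ModuleList, spk_dim: int = 192, hid: int = 256):
        super().__init__()
        self.blocks = blocks
        self.spk_proj = nn.Linear(spk_dim, hid)  # 192-d TitaNet embedding -> 256
        self.mask_head = nn.Linear(hid, hid)

    def forward(self, s_mix: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # s_mix: (B, T, 256) sub-sampled mixture features; spk_emb: (B, 192)
        cond = self.spk_proj(spk_emb).unsqueeze(1)  # broadcast over time
        h = s_mix
        for block in self.blocks:
            h = block(h + cond)   # embedding added to every block's input
        mask = torch.sigmoid(self.mask_head(h))     # assumed mask activation
        return mask * s_mix                         # estimated target features
```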
## 3 Experiments
### Datasets
We evaluated the proposed approach using two and three speaker mixtures created from WSJ0 [19] and LibriSpeech [20] datasets, following [21] and [14] respectively. To adapt these mixture datasets for TS-ASR, we augment them with a random auxiliary utterance from the target-speaker2. Briefly, a training example for a two-speaker mixture was created by first randomly selecting two speakers and choosing one as the target-speaker. Among all utterances by the target-speaker, two are chosen: one for creating the auxiliary utterance and one for the mixed utterance. To create the mixture we pick an utterance spoken by the other speaker. For WSJ0 mixtures, utterances were combined at an SNR uniformly chosen between 0 and 5 dB for each mixture [21]. For LibriSpeech mixtures, chosen utterances were combined without changing SNR, to be consistent with [14]. For WSJ0 mixtures, the shorter utterance is both prepended and appended with a random length of silence to match the length of the longest utterance in the mixture. In contrast, LibriSpeech mixtures are generated following [14], where the utterances were shifted by random delays before being added to the mixture. Delay values were chosen under the constraint that the start times of the utterances differed by 0.5 sec or longer. Further, we augment the training data using speed and volume perturbation. For speed perturbation [22], the speed of each individual utterance is modified with a probability of 0.3 from its original rate to 95%, 97.5%, 100%, 102.5% or 105% before mixing. Volume perturbation [14] involves scaling the final mixture's volume by a random factor sampled from \([0.125,2.0]\). In the following, we refer to the two and three speaker mixtures of WSJ0 as WSJ0-2mix-extr and WSJ0-3mix-extr respectively, whereas LibriSpeech2mix and LibriSpeech3mix denote the LibriSpeech mixtures.

Figure 2: Conformer-based CONF-TSASR model architecture. Feature extraction creates the log-Mel spectrogram \(S_{mix}\) of the mixed utterance. A speaker embedding is extracted from an auxiliary utterance using TitaNet. The masking network learns a time-frequency mask for the target-speaker. The model is trained using CTC-loss and a spectrogram reconstruction loss using the target-speaker’s individual spectrogram \(S_{t}\).
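The SNR mixing in the WSJ0 recipe reduces to scaling the interferer before adding it to the target; a minimal sketch, assuming both signals already have equal length after the silence padding described above.

```python
import numpy as np

def mix_at_snr(target: np.ndarray, interferer: np.ndarray, snr_db: float):
    """Scale the interferer so the target/interferer power ratio equals snr_db."""
    p_t = np.mean(target ** 2)
    p_i = np.mean(interferer ** 2) + 1e-12
    scale = np.sqrt(p_t / (p_i * 10.0 ** (snr_db / 10.0)))
    return target + scale * interferer

rng = np.random.default_rng(0)
snr = rng.uniform(0.0, 5.0)  # SNR drawn uniformly from [0, 5] dB per mixture
```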
### Training Setup
For training, we used 16 V100-32GB GPUs with a global batch size of 64. The model was trained for 60K and 480K updates for WSJ0-mix-extr and LibriSpeechMix respectively. We used AdamW with a peak learning rate of \(3\times 10^{-4}\) and 0.01 weight decay. We used 10K and 25K warmup steps respectively, with cosine annealing and a minimum learning rate of \(1\times 10^{-6}\). The relative weights of the losses, when more than one is used, are tuned based on validation TS-WER.
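The learning-rate schedule can be written as a simple function of the step index; the sketch below is a generic warmup-plus-cosine stand-in for the NeMo scheduler, with the WSJ0-mix-extr settings (10K warmup, 60K updates) as defaults.

```python
import math

def lr_at_step(step: int, peak: float = 3e-4, floor: float = 1e-6,
               warmup: int = 10_000, total: int = 60_000) -> float:
    """Linear warmup to `peak`, then cosine annealing down to `floor`."""
    if step < warmup:
        return peak * step / warmup
    progress = (step - warmup) / max(1, total - warmup)
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))
```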
### WSJ0-mix-extr results
Table 1 compares the performance of the proposed CONF-TSASR model with contemporary results on the WSJ0-mix-extr datasets. For baselines, we include a conventional ASR model (Conformer-CTC), SpeakerBeam [10], Exformer [23] and Conditional-Conformer-CTC [6]. The first model was trained on single-speaker data, the second and third were trained on two speakers, and the last model was trained on up to three speakers. SpeakerBeam can be regarded as a TS-ASR model which directly generates the transcription for the target-speaker. Exformer is a state-of-the-art target-speaker source separation model based on SepFormer [2] and thus requires an additional step to transcribe the model output. We updated the Exformer architecture with recent advancements to facilitate fairer comparisons. Namely, we replaced the original pre-trained embedding network in Exformer with the same pre-trained TitaNet [12] that was used in CONF-TSASR. We also matched the model size of the SepFormer with the masking network in CONF-TSASR. Note that the model used to transcribe the Exformer output and the "conventional ASR" baseline are the same. This, in turn, is the same as the one used for initializing the ASR block in CONF-TSASR, except the former is further fine-tuned on the training partition of the WSJ0 dataset. Conditional-Conformer-CTC is a multi-speaker ASR model that uses a conditional speaker chain to transcribe every speaker subsequently.
As expected, the conventional ASR model trained on single-speaker data performs poorly on WSJ0-mix (36.7% WER on WSJ0-2mix), whereas SpeakerBeam and Exformer+Conformer-CTC show 30.6% and 13.2% TS-WER respectively. Conditional-Conformer-CTC reaches a WER of 19.9% on WSJ0-2mix and 34.3% on WSJ0-3mix. In comparison, CONF-TSASR trained on up to two-speaker mixtures reaches 4.8% TS-WER, and 4.2% TS-WER with the additional spectrogram reconstruction loss. The Exformer results show that optimizing for SiSNR does not necessarily give the best result for transcription. Also, shifting away from the time-frequency domain to the time domain is not only unnecessary for target-speaker speech recognition but also decreases model efficiency due to more time steps. The CONF-TSASR model reaches a TS-WER of 4.8% on WSJ0-2mix-extr and 12.4% on WSJ0-3mix-extr when trained on up to three speakers, suggesting that the proposed model is able to transcribe the target-speaker from single-channel input in spite of two distracting speakers. To our knowledge, this is the best TS-WER reported on WSJ0-2mix-extr and the first study to report TS-WER on WSJ0-3mix-extr.
Fig. 4 shows the sensitivity of CONF-TSASR w.r.t. SNR between target and overlapping speakers. We observe that the performance is more sensitive to SNR when the mixture contains three overlapping speakers compared to just two speakers.
### LibriSpeechMix results
Tables 2 & 3 report TS-WER results for the first time on the LibriSpeechMix datasets. Due to the lack of previous TS-ASR results on LibriSpeechMix in the literature, we use SOT-Conformer-AED [7] and its streaming version t-SOT [8] as references.3 These are different, yet closely related, multi-speaker transcription models. They are based on a transformer encoder-decoder architecture and use SOT [5]. They differ from the proposed model in the following (non-exhaustive) ways: 1) They transcribe all speakers in a given utterance. 2) They have knowledge of speaker profiles for all possible speakers in a given utterance. 3) They use 10 auxiliary utterances during training and 2 during evaluation (each with avg. length of 7.5 sec) to calculate speaker profiles. 4) SOT-Conformer-AED reports speaker-attributed WER (SA-WER) [14] on LibriSpeech2Mix and LibriSpeech3Mix, while t-SOT reports permutation-invariant SA-WER [8] on LibriSpeech2Mix. In contrast, the proposed model 1) transcribes only the target-speaker, 2) is not aware of profiles for other speakers in the utterance, 3) uses only one auxiliary utterance during training and two during evaluation to calculate speaker profiles, and 4) reports WER only for the target speaker (TS-WER).

Figure 4: TS-WER on WSJ0-2mix-extr and WSJ0-3mix-extr under different SNRs between target-speaker and interfering speakers using CONF-TSASR.

Figure 3: A LibriSpeech2mix example. (top) Input mixture spectrogram. Target-speaker and overlapping speaker are shown in different colors. (bottom) Time-aligned non-blank emission probabilities for the target-speaker using the proposed model. (Best viewed in color.)
To make the TS-WER results on LibriSpeechMix comparable to SA-WER, we transcribe all speakers in a mixed utterance, with each speaker as target, one at a time, using the CONF-TSASR model. CONF-TSASR trained on up to three speakers achieves 5.4% TS-WER on LibriSpeech2Mix and 7.6% on LibriSpeech3Mix (Table 3). When trained on only up to two-speaker mixtures, the performance improves to 4.2% TS-WER on LibriSpeech2Mix (Table 2). Both two and three-speaker results show that adding the spectrogram loss (CTC+Spec) significantly outperforms using merely CTC loss. When evaluated using only one auxiliary utterance (7.5 sec) as speaker profile, the proposed model exhibits a slight performance deterioration (e.g. 7.6% vs. 9% in Table 3), highlighting the importance of robust speaker profiles. Training the model with CTC loss [17] provides the auxiliary benefit of obtaining time-aligned token output probabilities for the target-speaker (Fig. 3, bottom).
## 4 Conclusion
We present CONF-TSASR, an end-to-end state-of-the-art single-channel target-speaker speech recognition model. It consists of three modules. TitaNet extracts a speaker embedding from a target-speaker's auxiliary utterance. MaskNet generates a time-frequency mask for the target-speaker using Conformer. The ASR module transcribes the masked speech features using Conformer. The model is trained with CTC and spectrogram loss. We obtain state-of-the-art results on WSJ0-2mix-extr and establish new benchmarks on the WSJ0-3mix-extr and LibriSpeechMix datasets. The model can be used for both fully and partially overlapped speech, requires as little as one auxiliary utterance, and is non-autoregressive. The model will be open-sourced through the NVIDIA NeMo toolkit.
| **Model** | **Params, M** | **N** | **Loss** | **Learn Embedding** | **2-mix** | **3-mix** |
| --- | --- | --- | --- | --- | --- | --- |
| CONF-TSASR | 85 | 2 | CTC | no | 8.6 | - |
| CONF-TSASR | 85 | 2 | CTC | yes | 4.8 | - |
| CONF-TSASR | 85 | 2 | CTC+Spec | yes | **4.2** | - |
| CONF-TSASR | 85 | 3 | CTC | yes | 5.4 | 13.8 |
| CONF-TSASR | 85 | 3 | CTC+Spec | yes | **4.8** | **12.4** |
| SpeakerBeam [10] | n/a | 2 | Cross Entropy | yes | 30.6 | - |
| Exformer [23] + Conformer-CTC | 80 | 2 | SiSNR, CTC | no | 13.2 | - |
| Conditional-Conformer-CTC [6] | n/a | 3 | CTC | yes | 19.9\({}^{**}\) | 34.3\({}^{**}\) |
| Conformer-CTC | 29 | 1 | CTC | no | 36.7\({}^{*}\) | 54\({}^{*}\) |

Table 1: TS-WER of different models on WSJ0-2mix-extr and WSJ0-3mix-extr. Each model was trained on mixtures with at most N speakers. \({}^{*}\) denotes best WER on individual transcripts. \({}^{**}\) denotes WER for a multi-speaker ASR model.
| **Model** | **L (sec)** | **2-mix** | **3-mix** |
| --- | --- | --- | --- |
| CONF-TSASR CTC | 7.5 | 7 | 9.7 |
| CONF-TSASR CTC | 15 | 6 | 8.4 |
| CONF-TSASR CTC+Spec | 7.5 | 6.3 | 9 |
| CONF-TSASR CTC+Spec | 15 | **5.4** | **7.6** |
| SOT-Conformer-AED [7] | 15 | 6.8\({}^{\dagger}\) | 9.6\({}^{\dagger}\) |
| SOT-Conformer-AED [7] SD | 15 | **6.4**\({}^{\dagger}\) | **8.5**\({}^{\dagger}\) |

Table 3: TS-WER of CONF-TSASR, and permutation-invariant SA-WER\({}^{\ddagger}\) of related multi-speaker references on LibriSpeechMix. CONF-TSASR was trained for up to 3 speakers. Notation: L - length of auxiliary utterance, SD - speaker deduplication [7].

| **Model** | **Params, M** | **L (sec)** | **2-mix** |
| --- | --- | --- | --- |
| CONF-TSASR CTC | 85 | 7.5 | 5.1 |
| CONF-TSASR CTC | 85 | 15 | 4.6 |
| CONF-TSASR CTC+Spec | 85 | 7.5 | 4.5 |
| CONF-TSASR CTC+Spec | 85 | 15 | **4.2** |
| SOT-Conformer-AED [7] | - | 15 | 6.8\({}^{\dagger}\) |
| SOT-Conformer-AED [7] SD | - | 15 | 6.4\({}^{\dagger}\) |

Table 2: TS-WER of CONF-TSASR, SA-WER\({}^{\dagger}\), and permutation-invariant SA-WER\({}^{\ddagger}\) of related multi-speaker references on LibriSpeechMix, trained for up to 2 speakers. Notation: L - length of auxiliary utterance.
2303.14067 | SEAL: Semantic Frame Execution And Localization for Perceiving Afforded
Robot Actions | Recent advances in robotic mobile manipulation have spurred the expansion of
the operating environment for robots from constrained workspaces to
large-scale, human environments. In order to effectively complete tasks in
these spaces, robots must be able to perceive, reason, and execute over a
diversity of affordances, well beyond simple pick-and-place. We posit the
notion of semantic frames provides a compelling representation for robot
actions that is amenable to action-focused perception, task-level reasoning,
action-level execution, and integration with language. Semantic frames, a
product of the linguistics community, define the necessary elements, pre- and
post- conditions, and a set of sequential robot actions necessary to
successfully execute an action evoked by a verb phrase. In this work, we extend
the semantic frame representation for robot manipulation actions and introduce
the problem of Semantic Frame Execution And Localization for Perceiving
Afforded Robot Actions (SEAL) as a graphical model. For the SEAL problem, we
describe our nonparametric Semantic Frame Mapping (SeFM) algorithm for
maintaining belief over a finite set of semantic frames as the locations of
actions afforded to the robot. We show that language models such as GPT-3 are
insufficient to address generalized task execution covered by the SEAL
formulation and SeFM provides robots with efficient search strategies and long
term memory needed when operating in building-scale environments. | Cameron Kisailus, Daksh Narang, Matthew Shannon, Odest Chadwicke Jenkins | 2023-03-24T15:25:41Z | http://arxiv.org/abs/2303.14067v1 | # SEAL: Semantic Frame Execution And Localization
###### Abstract
Recent advances in robotic mobile manipulation have spurred the expansion of the operating environment for robots from constrained workspaces to large-scale, human environments. In order to effectively complete tasks in these spaces, robots must be able to perceive, reason, and execute over a diversity of affordances, well beyond simple pick-and-place. We posit the notion of semantic frames provides a compelling representation for robot actions that is amenable to action-focused perception, task-level reasoning, action-level execution, and integration with language. Semantic frames, a product of the linguistics community, define the necessary elements, pre- and post- conditions, and a set of sequential robot actions necessary to successfully execute an action evoked by a verb phrase. In this work, we extend the semantic frame representation for robot manipulation actions and introduce the problem of Semantic Frame Execution And Localization for Perceiving Afforded Robot Actions (SEAL) as a graphical model. For the SEAL problem, we describe our nonparametric Semantic Frame Mapping (SeFM) algorithm for maintaining belief over a finite set of semantic frames as the locations of actions afforded to the robot. We show that language models such as GPT-3 are insufficient to address generalized task execution covered by the SEAL formulation and SeFM provides robots with efficient search strategies and long term memory needed when operating in building-scale environments.
## I Introduction
We envision autonomous systems that can perceive and perform tasks across large, building-scale spaces [1, 2, 3] to serve needs across society, such as care-taking tasks in assisted living facilities and supply chain tasks in warehouses. In order to be effective, such systems must infer the objects present in the environment as well as predict the outcomes of actions afforded [4] by these objects. In essence, robots need to perceive actions that are currently afforded by the environment, and not just the objects to be acted upon. For example, observing a cup should inform the system of an optimal location to achieve the action "Grasp Cup". In small enough workspaces, a robot can simply look for the objects required for the task, but as the environment grows this becomes infeasible.
We are inspired by the idea that despite the aforementioned challenges, there is structure to human environments. Buildings, in most cases, are designed for efficient task completion by humans, as objects and actions of similar types are usually in the vicinity of each other. For example, brooms, mops, and vacuums are likely to be in the closet whereas spoons, cups, and plates are likely to be in the kitchen. Moreover, we acknowledge the inherent structure of task execution due to the sequentiality of multi-step actions. While some affordances are inherent in certain object classes (i.e. _Grasp_ a cup, _Open_ a door, etc.), others have structured criteria, or preconditions, which must be met before being executed. For example, a cup must be full and near a container in order to _Pour_ the contents of the cup. Semantic frames, as elaborated in further sections, explicitly describe these relations and have been used in previous works to ground natural language
commands in robot actions [5]. Recently, the community has explored using Transformer-based models to ground natural language commands [6][7][8]. While these models have shown impressive high-level reasoning capabilities, they often lack the physical intuition necessary to ground their output in feasible robot actions.
Three core characteristics of semantic frames [5][9][10] motivate our exploration of their use as a representation to bring together language, action, and perception. First, they are evoked by a verb phrase such that we can directly parse natural language commands into semantic frames. Second, they explicitly define the objects necessary for execution. Last, they define the preconditions necessary before execution can begin and postconditions of the state after execution.
In order to efficiently execute semantic frames, we require a model which can localize the frames location conditioned on observations of the environment. Semantic perception of individual objects in large environments has been explored previously in the context of object search and generalized notions of object permanence [11]. We are now able to extend these ideas to consider perception of afforded actions through inference over semantic frame representations.
In this paper, we introduce **S**emantic Frame **E**xecution **A**nd **L**ocalization for **P**erceiving **A**fforded Robot **A**ctions (SEAL), which casts the affordance execution problem into a graphical model that accounts for object-affordance, state-affordance, and affordance-affordance relations. Additionally, we propose the Semantic Frame Mapping (SeFM) algorithm for perception of afforded actions in the context of task-level reasoning for a mobile manipulation robot. We consider SeFM as one possible algorithm for the broader SEAL problem. SeFM is a nonparametric particle-based inference method for maintaining belief over a finite set of semantic frames which represent the locations of actions afforded to the robot. We introduce and validate the SEAL model in a simulated apartment using ROS Gazebo. Next, we compare SeFM to a Transformer-based model on a multitude of household tasks using a simulated mobile manipulator and find that using SeFM leads to a higher success rate. Finally, we empower a real Fetch robot to execute tasks using SeFM.
## II Related Work
### _Generalizable Task Execution_
Generalized task execution has garnered much attention in the community as robotic perception and manipulation capabilities have improved. Recent works have demonstrated the ability to learn task-specific manipulation policies from RGB-D observations of the workspace [12][13], though in those works the environment is assumed to be fully observable. Some attention has been given to operating in partially observable domains, but with limited success due to challenges in perceiving the necessary objects for a task [14]. Other works have attempted to overcome the challenges imposed by human environments by utilizing a hybrid planning framework in which an online probabilistic semantic representation of the environment is passed to an offline task planner [15], or by restricting the action space to unstructured (i.e. atomic) actions that have a uniform likelihood throughout the environment [16]. Recent methods have explored the use of Large Language Models (LLMs) as planners [6, 8]. While LLMs do show promising results in reasoning over high-level goals, they struggle to ground their output in robot actions, even with appropriate prompting. Additionally, LLMs cannot inherently estimate whether an action is afforded in the current scene, so in [6] the authors train value functions offline mapping RGB images of the state to executability, and in [8] the output is restricted to API function names which are always actionable. Our proposed method, SEAL, addresses these challenges by formulating affordance execution as a search problem. Estimating executability is no longer necessary, as we can simply cross-reference the defined preconditions with the known state of the world. Partial observability is overcome through actively searching the environment for necessary frame elements.
### _Semantic Frames_
Semantic frames [5, 10] describe affordances, complete with actors, objects, preconditions, and results. A semantic frame is said to be evoked by a particular verb clause making them good representations of actions due to their implicit ability to generalize task description across variations in environment, object instances, and even request phrases. "Get Roger a coffee" and "Bring Roger a coffee" evoke the same semantic frame: "Bring {_object_} to {_recipient_}". Formally, a semantic frame, \(f\), is defined as \(f=(O,P,A)\), where \(O\), \(P\), and \(A\) are the sets of frame elements (objects), preconditions, and robot actions encoded in the frame, respectively. Semantic frames also have a notion of postconditions -- how the state transitions given a successful frame execution -- which can be logically sequenced to generate task plans for high level goals. Previous work [5][10] has shown the ability to ground natural language commands into robot actions by parsing commands into semantic frames. In [5], \(A\) consisted only of locomotion actions. In this work, we expand the set of possible actions to include manipulation and move toward performing complex tasks across large building-wide spaces. Because it is now necessary to interact with objects, a new problem of semantic frame localization is introduced.
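The tuple \(f=(O,P,A)\), together with postconditions, maps naturally onto a small data structure; the sketch below is illustrative, and the field names and the example frame are ours rather than the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticFrame:
    """f = (O, P, A): frame elements, preconditions, robot actions."""
    name: str
    objects: list          # O: frame elements, e.g. ["cup", "container"]
    preconditions: list    # P: conditions/frames that must hold or succeed first
    actions: list          # A: sequential robot actions to execute the frame
    postconditions: list = field(default_factory=list)  # resulting state changes

# illustrative instance, following the "Pour" example above
pour_cup = SemanticFrame(
    name="Pour cup",
    objects=["cup", "container"],
    preconditions=["Grasp cup", "cup is full", "cup near container"],
    actions=["approach container", "tilt cup", "return cup upright"],
)
```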
### _Conditional Random Fields_
Conditional Random Fields (CRF) [17][18] are a class of statistical modeling methods introduced in machine learning for sequence labelling problems. In essence, CRFs are an extension of Hidden Markov Models [19] with the ability to incorporate complex, higher-order dependencies among the input features. CRFs are particularly useful when the outputs are correlated and depend not only on the current input but also on the context of neighboring inputs. A factor graph is a probabilistic graphical model where the nodes represent variables and edges represent the conditional dependencies between variables. In the case of CRFs, the variables are the inputs and outputs, and the edges are the dependencies between neighboring input features and outputs. Prior work [11] has shown that casting the object search problem into a
CRF factor graph can lead to performance gains in partially observable, real-world environments without the need for strong assumptions about static landmarks as in [20][21][22].
## III Method
### _Problem Formulation_
Let \(F=\{f^{i}\,|\,i=1,\ldots,N\}\) be the set of semantic frames we wish to localize. Let \(O=\{o^{i}\,|\,i=1,\ldots,M\}\) be the collective set of all objects defined in the frame elements of all \(f\in F\). Given observations \(z_{0:T}\) and robot-state information \(x_{0:T}\), we wish to maintain the belief over frame locations \(P(F_{0:T}|x_{0:T},z_{0:T},O_{0:T})\). The resulting distribution will inform the robot where affordances are in the scene and, thus, where the robot can take actions and effect change in the environment. We assume a room-level annotated metric map is given. SEAL describes the inference problem of localizing instances of semantic frames within the given map.
### _Seal_
SEAL formalizes the frame location estimation problem via a Conditional Random Field (CRF) extending the work of Lorbach [23] and Zeng [11]. The CRF model includes object-affordance, affordance-affordance, and state-affordance relations as shown in Figure 1. The full posterior probability of frame locations is
\[p(F_{0:T}|x_{0:T},z_{0:T},O_{0:T})=\\ \frac{1}{Z}\prod_{t=0}^{T}\prod_{i=1}^{N}\phi_{p}(f_{t}^{i},f_{t- 1}^{i})\prod_{i,k}\phi_{m,\mathcal{B}(R_{ik}|x_{t})}(f_{t}^{i},o_{t}^{k})\\ \prod_{i,j}\phi_{c,\mathcal{B}(R_{ij}|x_{t})}(f_{t}^{i},f_{t}^{j}) \tag{1}\]
where Z is a normalization constant, \(\phi_{p}\) is the prediction potential, \(\phi_{m}\) is the measurement potential, and \(\phi_{c}\) is the context potential. We assume the robot remains localized in a metric map of the environment. Robot-state information \(x_{t}\) informs the model about concepts regarding the robot itself: pose of the robot, whether an object is in its gripper, etc. Observations \(z_{t}\) are RGB-D images of the environment taken by the robot as it navigates. Both \(x_{t}\) and \(z_{t}\) are known variables.
The _prediction potential_, \(\phi_{p}(f_{t}^{i},f_{t-1}^{i})\), models the temporal permanence of a frame. The value of this potential is category-dependent. Some frames, like "_Go_ to couch", remain static over time. Others, like "_Grasp_ cup", can move over time due to various external factors which we model as:
\[\phi_{p}(f_{t}^{i},f_{t-1}^{i})\sim\mathcal{N}(f_{t-1}^{i},\Sigma^{i})\]
The _measurement potential_, \(\phi_{m,\mathcal{B}(R_{ik}|x_{t})}(f_{t}^{i},o_{t}^{k})\), models object-frame relations parameterized by the belief -- \(\mathcal{B}(R_{ik}|x_{t})\) -- over a set of defined frame-object relations \(R\) for frame \(f^{i}\) and object \(o^{k}\) conditioned on robot-state information \(x_{t}\). We assume that frames are tightly coupled to the objects which afford them (i.e. "_Grasp_ Spoon" is spatially close to a spoon). By parameterizing the belief over all frame elements, we are able to model the effect of state transitions on frame locations. For example, "_Stir_ Mug" requires a spoon and the semantic frame explicitly encodes this with the precondition "_Grasp_ spoon". Therefore, if the robot does not yet have a spoon in its gripper, "_Stir_ Mug" should be localized close to a spoon; conversely, "_Stir_ Mug" should be close to a mug if the robot is already holding a spoon. Additionally, because we model the relation over all frame elements, "_Stir_ Mug" is more likely to be in a room that has both spoons and mugs rather than a room that only has one or the other. Concretely,
\[\phi_{m,\mathcal{B}(R_{ik}|x_{t})}=\sum_{r}\sum_{k=0}^{M}\mathcal{B}(R_{ik}=r |x_{t})\phi_{m,r}(f^{i},o^{k},R_{ik}=r)\]
where \(r\) can be one of the following relation types {_Core_, _Other_, _Disjoint_} and \(\mathcal{B}(R_{ik}=r|x_{t})\) is the belief that \(r\) is relevant between a frame (\(f^{i}\)) and object (\(o^{k}\)) given the current state. For \(r\in\{\)_Core_, _Other_\(\}\), the measurement potential corresponds to a Gaussian distribution:
\[\phi_{m}(f_{t}^{i},o_{t}^{k})\sim\mathcal{N}(o_{t}^{k},\Sigma)\]
where \(\Sigma\) is always constant across frame-object pairs. A _Core_ object is the next object the robot would need to interact with to proceed with frame execution. _Other_ objects, in contrast, will eventually be necessary or are completely optional. For \(r=\)_Disjoint_ the measurement potential is 0 since the object is not involved in frame execution. In this work, object-frame relations are explicit in the semantic frame definition.
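As a concrete reading of this mixture, the Python sketch below evaluates the belief-weighted measurement potential for a single frame-object pair. The function name, the dictionary representation of \(\mathcal{B}(R_{ik}=r|x_{t})\), and the use of 2D map positions are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Relation types named above; B(R_ik = r | x_t) supplies the mixture weights.
RELATIONS = ("core", "other", "disjoint")

def measurement_potential(frame_pos, object_pos, belief, cov):
    """Belief-weighted mixture phi_m for a single frame-object pair.

    frame_pos, object_pos : 2D map positions of the frame particle and object
    belief : dict mapping each relation type r to B(R_ik = r | x_t)
    cov    : the shared covariance Sigma (constant across frame-object pairs)
    """
    phi = 0.0
    for r in RELATIONS:
        if r == "disjoint":
            continue  # the potential is 0 when the object plays no role
        # Core and Other relations both score proximity via N(o_t^k, Sigma).
        phi += belief[r] * multivariate_normal.pdf(frame_pos, mean=object_pos, cov=cov)
    return phi

# Example: a frame particle 0.5 m from a spoon the robot still needs.
print(measurement_potential(
    np.array([1.0, 2.0]), np.array([1.5, 2.0]),
    {"core": 0.7, "other": 0.2, "disjoint": 0.1}, cov=np.eye(2)))
```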
The _context potential_, \(\phi_{c,\mathcal{B}(R_{ik})}(f_{t}^{i},f_{t}^{j},x_{t})\), models inter-frame relations. In this work we only model the relation between a frame and its preconditions. Since the preconditions define a sequential list of actions to be completed, we can
Fig. 1: SEAL Model: Known: \(x_{t}\) robot-state information, \(z_{t}\) sensor observations; Unknown: \(O_{t}:\{o^{i}:i\in 1:M\}\) object locations, \(F_{t}:\{f^{i}:i\in 1:N\}\) frame locations. Prediction potential shown in red, measurement potential shown in yellow, context potential shown in blue.
use the state information to inform our model about which preconditions have already been met and which precondition must be met next. From there, the context potential follows a similar model to the measurement potential
\[\phi_{c,\mathcal{B}(R_{ij}|x_{t})}=\sum_{r}\sum_{j=0}^{M}\mathcal{B}(R_{ij}=r|x_ {t})\phi_{c,r}(f^{i},f^{j},R_{ij}=r)\]
where \(r\) can be one of two relation types {_Precondition_, _Disjoint_}. For \(r=\)_Precondition_ the context potential corresponds to a Gaussian distribution:
\[\phi_{c}(f^{i}_{t},f^{j}_{t})\sim\mathcal{N}(f^{j}_{t},\Sigma)\]
where \(f^{j}\) is the precondition and \(\Sigma\) is always constant. For \(r=\)_Disjoint_ the context potential is 0. Again, these relations are explicit in the semantic frame definition.
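Putting the three potentials together, the unnormalized score of Eq. (1) for a single time step could be evaluated as sketched below. This is illustrative Python only; the callable-based interface and the folding of the relation beliefs into \(\phi_{m}\) and \(\phi_{c}\) are our assumptions:

```python
import numpy as np

def log_step_score(frames, prev_frames, objects, phi_p, phi_m, phi_c, eps=1e-12):
    """Unnormalized log-score of one time step of Eq. (1).

    frames, prev_frames : lists of frame locations f_t^i and f_{t-1}^i
    objects             : list of object locations o_t^k
    phi_p, phi_m, phi_c : callables for the prediction, measurement, and
                          context potentials; the relation beliefs B(R|x_t)
                          are assumed to be folded into phi_m and phi_c
    """
    score = 0.0
    for i, f in enumerate(frames):
        score += np.log(phi_p(f, prev_frames[i]) + eps)           # temporal term
        score += sum(np.log(phi_m(f, o) + eps) for o in objects)  # object terms
        score += sum(np.log(phi_c(f, g) + eps)                    # inter-frame terms
                     for j, g in enumerate(frames) if j != i)
    return score
```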
### _Semantic Frame Mapping_
We implement a particle-based inference method, dubbed Semantic Frame Mapping, for maintaining belief over semantic frame locations, as shown in Algorithm 1. Object locations are inferred using the method defined in [11] and we refer the reader to that paper for full implementation details. In our experiments, we use 200 particles for each object and semantic frame with heuristics for particle reinvigoration.
```
Require: Observation \(z_{t}\), Robot-State Vector \(x_{t}\), Object location particle set \(O_{t}\), particle set for each frame: \(f^{i}_{t-1}=\{(f^{i(k)}_{t-1},\alpha^{i(k)}_{t-1})\,|\,k=1,...,P\},\ i\in 1:N\)
Resample \(P\) particles with probability proportional to \(\alpha^{i(k)}_{t-1}\)
for \(i=1,...,N\) do
  for \(k=1,...,P\) do
    \(f^{i(k)}_{t}\sim\phi_{p}(f^{i}_{t},f^{i(k)}_{t-1})\)
    \(\alpha^{i(k)}_{t}\propto\prod_{j\in\Gamma(i)}\phi_{m}(f^{i(k)}_{t},o^{j}_{t})\prod_{l\in\Gamma(i)}\phi_{c}(f^{i(k)}_{t},f^{l}_{t})\)
    where \(\phi_{m}=\sum_{r}\sum_{s=0}^{P}\mathcal{B}(r|x_{t})\alpha^{j(s)}_{t}\phi_{m,r}(f^{i}_{t},o^{j}_{t},r)\)
    and \(\phi_{c}=\sum_{r}\sum_{s=0}^{P}\mathcal{B}(r|x_{t})\alpha^{l(s)}_{t}\phi_{c,r}(f^{i}_{t},f^{l}_{t},r)\)
  end for
end for
```
**Algorithm 1** Inference of Semantic Frame Locations in SeFM
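A minimal Python sketch of the per-frame update in Algorithm 1 is given below. The two-dimensional state, the seeded random generator, and the callable potentials are illustrative assumptions; in particular, we abstract away the object-particle bookkeeping of [11]:

```python
import numpy as np

rng = np.random.default_rng(0)

def sefm_update(particles, weights, predict_cov, phi_m, phi_c):
    """One SeFM belief update for a single semantic frame (cf. Algorithm 1).

    particles   : (P, 2) array of frame-location hypotheses f^{i(k)}_{t-1}
    weights     : (P,) importance weights alpha^{i(k)}_{t-1}
    predict_cov : covariance Sigma^i of the prediction potential
    phi_m, phi_c : callables scoring a propagated particle against the
                   object particle sets and related frames, respectively
    """
    P = len(particles)
    # Resample P particles with probability proportional to the old weights.
    idx = rng.choice(P, size=P, p=weights / weights.sum())
    particles = particles[idx]
    # Propagate through the prediction potential: f_t ~ N(f_{t-1}, Sigma^i).
    particles = particles + rng.multivariate_normal(np.zeros(2), predict_cov, size=P)
    # Re-weight by the product of measurement and context potentials.
    new_w = np.array([phi_m(f) * phi_c(f) for f in particles])
    new_w = np.maximum(new_w, 1e-300)  # guard against all-zero weights
    new_w /= new_w.sum()
    return particles, new_w
```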
## IV Results
### _Inference_
We begin by studying the effectiveness of SEAL to model the location of affordances in a simulated apartment environment using ROS Gazebo. Figure 2 shows our experimental setup. The shaded rectangles represent room-level priors explicitly given to the agent a priori. The agent does not know the ground truth _Spoon_ and _Cup_ locations a priori, but can observe them while exploring the environment. Particles are uniformly initialized throughout the map for each frame and object class. We choose to maintain belief over the semantic frames "_Grasp_ Spoon", "_Grasp_ Cup" and "_Stir_ Cup", where "_Stir_ Cup" refers to the action of grasping a spoon then using that spoon to stir the contents of a cup.
To explore SEAL's effectiveness under partial observability, we keep the agent at a fixed position where none of the objects are observable. Figure 2 A-B show the resulting particle distributions after 20 belief update iterations. In Figure 2A, the agent is initialized with empty grippers whereas in 2B the agent is initialized already grasping a spoon. The effect of this change is reflected in the final distribution of "_Stir_ Cup". Without a spoon (2A), we maintain belief near likely locations of a spoon. Furthermore, since we sum over all frame elements, we maintain relatively higher density in the region where both a spoon and cup are likely to be. When the agent has a spoon (2B), the density of "_Stir_ Cup" shifts to locations where cups are likely to be, since the measurement potential between spoon and "_Stir_ Cup" is now 0. This result affirms that SEAL can accurately condition semantic frame locations \(f_{t}\) on the robot-state information vector \(x_{t}\) through the context potential \(\phi_{c,\mathcal{B}(R_{ij})}(f^{i}_{t},f^{j}_{t},x_{t})\).
To incorporate observations from the environment, we have the agent follow a predefined trajectory through the environment. We assume the agent has a 5m observable range. Figures 2 C-D display the particle distributions after
Fig. 2: SeFM Inference in Gazebo Apartment Setting. Object’s room-level priors (Red=Cup, Blue=Spoon) and ground truth locations (Diamond=Spoon, Star=Cup) shown in top right. Resulting particle distributions shown in A,B,C,D. A&B validate context potential as Stir Cup density (Green) shifts from close to spoon (A) to cup (B) depending on whether robot is currently grasping a spoon. C&D validate measurement potential as distributions converge when objects are observed.
the agent has finished navigation. Here we follow the same initialization routine as mentioned above (i.e. C is initialized without a spoon and D is initialized with one). By incorporating observations, the final distributions converge to observed objects and show the same sensitivity to initial conditions as in 2A-B. This suggests that the measurement potential \(\phi_{m,\mathcal{B}(R_{ik}|x_{t})}\) is effective in the convergence of frame locations when frame elements are observed. We later incorporate this into an active search algorithm in simulated and real robots.
### _Task Execution_
Now we explore the utility of these distributions when a robot is tasked with executing a semantic frame. Experiments are conducted using a mobile manipulator in the iTHOR simulation environment [24] using tasks from the ALFRED benchmark [25]. ALFRED is a public benchmark used to evaluate the ability to ground natural language commands for everyday household tasks. Tasks in ALFRED are commanded using a natural language sentence; domains are kitchens, bathrooms, and living rooms; and tasks contain actions with irreversible state changes. To apply SEAL to this, we slightly alter our inference method to now reason over robot poses that allow for interaction with an affordance rather than the affordance location itself. Additionally, in this case, we no longer use room-level priors for objects since the operating domain is single-room. When given a task, we first parse the task into a set of semantic frames. Next, the agent uses our particle-based inference method over SEAL to infer the poses at which a semantic frame can be executed, and, finally, navigates to and executes the action primitive defined in the semantic frame. Figure 3 shows the progression of distributions for the semantic frame "Put Vase in Safe".
We choose a subset of 50 trials from each of the following experiment groups in ALFRED: Look at, Pick-Place, Pick-Stack-Place, and Pick-Heat-Place, and refer the reader to the original work for task descriptions. We define 6 semantic frames (Pick, Place, Slice, Open, Close and Heat) grounded in singular action events that can be called in the iTHOR simulator. We compare SeFM to a method similar to SayCan [6] in which a Large Language Model is queried to provide the next robot action conditioned on current state and commanded goal. In this experiment, we query GPT-3 using OpenAI's API client. Further, we use the same affordance scoring algorithm available in the public SayCan implementation, based on object detection rather than a learned value function.
Figure 4 shows the success rate of each algorithm across the 4 aforementioned task groups. Success rate here is defined as the percentage of trials which completed all the required actions in the correct order; partial completion of a task counts as a failure. We note that SeFM does significantly better than SayCan, especially in multi-step tasks.
Fig. 3: SeFM implemented in iTHOR simulation tasked with “Put Vase in Safe”. Top row shows topdown view of environment with Robot circled in blue, Vase location marked with teal diamond, and Safe marked with green cross. Bottom row shows the distribution at various timesteps throughout the episode. Beginning with initial, uniform distribution, particle weights are updated according to Algorithm 1.
Empirically, we found that GPT-3 will often propose actions which are not yet afforded to the robot. For example, when heating an object GPT-3 will suggest "Turn on Microwave" prior to suggesting "Close Microwave", leading to failure. Because semantic frames explicitly encode these preconditions, SeFM does not struggle with this. Further, we found that SayCan does poorly when a required object is not immediately observable by the robot. This could be alleviated by improving the affordance scoring function, but that requires additional training data. As shown in Section IV-A, SeFM maintains an informed belief of affordance locations even without observing the necessary objects. This ability allows the robot to search its environment efficiently for necessary objects, boosting performance here.
### _Real Robot_
Finally, we implement SeFM on a Fetch mobile manipulator. We start by creating a 2D occupancy map of the operating environment and annotate said map with known regions (i.e. "Lab", "Hallway", "Kitchenette", etc.). Robot-state information, \(x_{t}\), maintains knowledge of the pose of the robot in the map frame, a history of semantic frames the robot has previously executed, and the name of the object currently in the gripper. For our observations, we use a pretrained YOLOv7 [26] network finetuned on real world images of objects from the YCB object dataset [27]. To determine navigation goals, we use a method similar to [11] which fits a Bayesian Gaussian Mixture Model to the distribution and chooses a pose which allows the robot to observe the mean of the resulting Gaussian. Action policies (Pick, Place, etc.) are manually engineered using MoveIt!. We give the robot tasks similar to those described in Section IV-B, excluding Pick-Heat-Place due to Fetch's inability to operate a microwave. 10 trials are conducted for each experiment group. We achieve success rates of 80%, 60% and 20% for Look at, Pick-Place, and Pick-Stack-Place, respectively. We note that a majority of failures came from errors during manipulation, not inference or navigation.
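The goal-selection step described above might look like the following sketch using scikit-learn's Bayesian Gaussian mixture. The component cap, random seed, and the choice of simply returning the dominant component mean (rather than solving for a pose that observes it) are our simplifying assumptions:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def navigation_goal(particles, max_components=5):
    """Fit a Bayesian GMM to frame-location particles and return a target point.

    particles : (P, 2) array of x/y particle positions in the map frame.
    Returns the mean of the dominant mixture component; a full system would
    then choose a robot pose from which this point is observable.
    """
    bgmm = BayesianGaussianMixture(n_components=max_components, random_state=0)
    bgmm.fit(particles)
    dominant = np.argmax(bgmm.weights_)
    return bgmm.means_[dominant]

# Example: a bimodal particle cloud with a dominant mode near the origin.
pts = np.vstack([np.random.randn(100, 2), np.random.randn(40, 2) + 8.0])
print(navigation_goal(pts))
```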
## V Conclusion
We show that Semantic Frame Mapping can incorporate the structure of human environments as factors in a factor graph to efficiently and accurately localize semantic frames even with weak priors and under partial observability. We also show that by using particle-based inference methods we are able to encode multi-modal belief distributions efficiently and without mode collapse. We explored the advantages of semantic frame decomposition of tasks compared to an LLM and showed that the LLM fails to grasp the subtleties of complex actions even when prompted with an example. This shortcoming led to a 0% success rate in tasks that SeFM was able to complete over 50% of the time. When extending SeFM to the real world, we noticed that a majority of failures come from the encoded manipulation policies failing. With more refinement in this domain, we expect Semantic Frame Mapping to be a viable bridge from high-level commands to robot actions. Further work remains to remove the necessity to explicitly define each component of a semantic frame; for instance, could we learn the frame elements, pre- and post-conditions from demonstrations?
|
2302.03632 | A construction of minimal coherent filling pairs | Let $S_g$ denote the genus $g$ closed orientable surface. A \emph{coherent
filling pair} of simple closed curves, $(\alpha,\beta)$ in $S_g$, is a filling
pair that has its geometric intersection number equal to the absolute value of
its algebraic intersection number. A \emph{minimally intersecting} filling
pair, $(\alpha,\beta)$ in $S_g$, is one whose intersection number is the
minimal among all filling pairs of $S_g$. In this paper, we give a simple
geometric procedure for constructing minimal intersecting coherent filling
pairs on $S_g, \ g \geq 3,$ from the starting point of a coherent filling pair
of curves on a torus. Coherent filling pairs have a natural correspondence to
square-tiled surfaces, or {\em origamis}, and we discuss the origami obtained
from the construction. | Hong Chang, William W. Menasco | 2023-02-07T17:37:40Z | http://arxiv.org/abs/2302.03632v1 | # A construction of minimal coherent filling pairs
###### Abstract.
Let \(S_{g}\) denote the genus \(g\) closed orientable surface. A _coherent filling pair_ of simple closed curves, \((\alpha,\beta)\) in \(S_{g}\), is a filling pair that has its geometric intersection number equal to the absolute value of its algebraic intersection number. A _minimally intersecting_ filling pair, \((\alpha,\beta)\) in \(S_{g}\), is one whose intersection number is the minimal among all filling pairs of \(S_{g}\). In this paper, we give a simple geometric procedure for constructing minimal intersecting coherent filling pairs on \(S_{g},\ g\geq 3\), from the starting point of a coherent filling pair of curves on a torus. Coherent filling pairs have a natural correspondence to square-tiled surfaces, or _origamis_, and we discuss the origami obtained from the construction.
## 1. Introduction
A simple closed curve on a compact closed surface, \(S_{g}\), of genus \(g\geq 2\) is called _essential_ if it does not bound a disc. As such, going forward a "curve in \(S_{g}\)" will mean an essential simple closed curve in \(S_{g}\). Two curves in \(S_{g}\) intersect coherently if, once the two curves are oriented, all the intersection points have the same orientation; note that this does not depend on the choice of the orientations of the curves. Two curves are in _minimal position_ if the number of intersections of these curves is minimal within the curves' isotopy classes. It is a simple observation that a coherently intersecting pair is already intersecting minimally within their isotopy classes. Thus, for coherently intersecting curves this convenience allows us to drop the distinction between working with a curve pair and their isotopy classes. A pair of curves in \(S_{g}\) is _filling_ if over all representatives from their isotopy classes their complement in \(S_{g}\) is a collection of discs.
Let \(\alpha,\beta\subset S_{g}\) be a filling pair. We call \((\alpha,\beta)\) a minimally intersecting filling pair if the intersection number, \(i(\alpha,\beta)\), is minimal among all filling pairs on \(S_{g}\). For Euler characteristic reasons, \(i(\alpha,\beta)\geq 2g-1\). Additionally, for Euler characteristic reasons a minimally intersecting filling pair would have the property that \(S_{g}\setminus(\alpha\cup\beta)\) is a single open disc. For \(g=1\) this lower bound is geometrically realizable with a meridian-longitude pair. For \(g=2\), an exhaustive search of the finite possibilities for a filling pair intersecting \(3\) times establishes that none exist and that at least \(4\) intersections are needed. However, Aougab and Huang showed in [1] that for all \(g\geq 3\) there exist filling pairs of curves whose intersection achieves the \(2g-1\) minimum. (More recently, also see [7, 8].) Moreover, the minimum can be obtained with \(\alpha\) and \(\beta\) intersecting coherently, as shown in [2], so "minimally intersecting coherent filling pair" is not an empty set for \(g\geq 3\).
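To make the Euler characteristic count explicit (a routine verification of the two claims above): if \((\alpha,\beta)\) fills \(S_{g}\) with \(k=i(\alpha,\beta)\) intersections, then \(\alpha\cup\beta\) gives \(S_{g}\) a CW structure with \(k\) four-valent vertices, \(2k\) edges, and \(F\geq 1\) complementary faces. Hence

\[2-2g=\chi(S_{g})=k-2k+F=F-k,\qquad\text{so}\qquad i(\alpha,\beta)=k=F+2g-2\geq 2g-1,\]

with equality exactly when \(F=1\), i.e. when \(S_{g}\setminus(\alpha\cup\beta)\) is a single open disc.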
The construction of minimally intersecting filling coherent pairs in [2] largely utilizes the algebraic techniques coming from the symmetric groups. In this paper we give an alternate geometric construction coming from simple cut-and-paste techniques. Our construction allows one to rapidly construct by hand such filling pairs for any genus.
### _Coherent filling pairs and origamis_

Coherent filling pairs have a natural correspondence to square-tiled surfaces, or _origamis_. When \(\mathcal{F}\) is orientable, the origami supports the structure of a translation surface, and the associated quadratic differential will be the square of an abelian differential.
### Outline
In § 2, we introduce the cut-and-paste surgery operations that we will be utilizing. These surgery operations have the property that, starting with a coherent filling pair of curves on a torus, we will trade an increase in the genus of the surface for a reduction in the number of discs in the complement of the filling pair. A minimal intersecting filling pair will be realized when the number of disc components is reduced to one.
Previous work on constructing and understanding minimal intersecting filling pairs has focused on the growth of the number of nonequivalent pairs [1, 2, 3, 4]. As such, understanding how one might create nonequivalent minimal coherent filling pairs via our surgeries is of interest. In § 3, we will use a "tree-graph" analysis to understand how our surgery construction can yield nonequivalent filling pairs.
Finally, in § 4 we generalize the construction to \(S_{g,p}\), oriented surfaces of genus \(g\) with \(p\) punctures. In the setting of punctured surfaces the complement of a filling pair of curves is a collection of discs and once punctured discs.
### Notation
Throughout this note we will use \(S_{g}\) to denote a closed orientable surface of genus, \(g\). \(S_{g,p}\) will denote a closed orientable surface of genus \(g\) with \(p\) marked or puncture points. \(\partial Y\) denotes the boundary of a compact surface, \(Y\). \(|X|\) denotes the cardinality of the set \(X\).
### Acknowledgements
We thank the authors of [2] for the use of Fig. 1. This work has its genesis in the second author's collaboration with Tarik Aougab and Mark Nieland and he thanks them for numerous discussions on the topic of this note.
## 2. Surgeries on coherent filling pairs on \(S_{1}\)
Our strategy for constructing a minimal coherent intersecting filling pair for a genus \(g\geq 3\) oriented closed surface is to start with a coherent intersecting filling pair of curves, \((\alpha,\beta)\), on the torus, \(S_{1}\), that intersect \(g\)-times. (Using the ordered 2-tuple, \(\langle m,l\rangle(=\langle\operatorname{meridian},\operatorname{longitude}\rangle) \in\mathbb{Z}\times\mathbb{Z}\), that specifies curve isotopy classes on \(S_{1}\), the reader might think of \(\alpha\subset S_{1}\) as being the \(\langle 0,1\rangle\) curve and \(\beta\subset S_{1}\) being the \(\langle g,1\rangle\) curve.) Our construction requires that we consider two cases, when \(g\) is odd and when \(g\) is even. For the odd case, through a simple surgery operation on the graph, \(\alpha\cup\beta\subset S_{1}\), we will add in \(g-1\) new vertices. Initially, \(S_{1}\setminus(\alpha\cup\beta)\) has \(g\) disc components. Our simple surgery on \(\beta\) will trade disc components for genus--each surgery decreases the number of disc components by one while increasing the genus of the resulting surface by one. The genus of the final surface will be \(g\) and there will be exactly one disc component in the complement of the resulting filling pair, \((\alpha,\beta^{\prime})\), for \(S_{g}\); the pair will intersect \(2g-1\) times. For the case when \(g\) is even, we will need one additional simple surgery to trade disc-for-genus all the way up to genus \(g\).
### Two simple surgeries on filling pairs
Let \((\alpha,\beta)\) be a filling pair for \(S_{g\geq 1}\) and consider a closed regular neighborhood, \(\mathbf{N}\), of the graph, \(\alpha\cup\beta\subset S_{g}\). We assume that \(\alpha\) and \(\beta\) are positioned so as to intersect minimally within the isotopy class of, say, \(\beta\). As such, each boundary component of \(\partial\mathbf{N}\) bounds a disc in \(S_{g}\). In particular, we focus on any small neighborhood, \(\nu(p)\subset\mathbf{N}\), around \(p\in\alpha\cap\beta\), one of the 4-valent intersection points--the first illustration in the sequence of Fig. 2 depicts \(\nu(p)\). \(\nu(p)\)
will have segment portions of four boundary components, \(\partial_{1}\), \(\partial_{2}\), \(\partial_{3}\) and \(\partial_{4}\), as shown in the first illustration in the sequence of Fig. 2. We remark that some of the \(\partial_{i}\)'s may be the same component of \(\partial\mathbf{N}\). Taking \(\alpha\) near \(p\) as a west/east axis and \(\beta\) as a north/south axis, the four boundary segments are positioned so that \(\partial_{1}\) is Southwest (SW), \(\partial_{2}\) is NW, \(\partial_{3}\) is NE, and \(\partial_{4}\) is SE.
**The single \(1\)-handle surgery**--We now glue to the sub-surface, \(\mathbf{N}\), a \(1\)-handle, \(B(\cong[0,1]\times[0,1])\), that is attached to \(\partial_{1}\) (SW) and \(\partial_{3}\) (NE).
Referring to the second illustration in the sequence in Fig. 2, we take an arc, \(\gamma\), to be the extended core of the attached \(B\). The salient feature is that \(\gamma\) is attached to the south side (north side) of the west portion (east portion) of \(\alpha\cap\nu\). Then \(\alpha\cup\beta\cup\gamma\) will be a graph in \(\mathbf{N}\cup B\) that has some number of \(4\)-valent vertices--same number as \(|\alpha\cap\beta|\)--and two \(3\)-valent vertices--the two endpoints of \(\gamma\).
The third illustration in the sequence in Fig. 2 shows a _shearing_ of \(\beta\) at the point \(p\), creating two new \(3\)-valent vertices. The reader should observe that we now have four \(3\)-valent vertices in succession on \(\alpha\). The fourth illustration shows how these four \(3\)-valent vertices are realigned and _spliced_ to create two new \(4\)-valent vertices and a new \(\beta^{\prime}\). The key feature of the final fourth illustration is that the orientation at the two intersections of \(\alpha\cap\beta^{\prime}\) created by this splice is consistent with the orientation of the original intersection point, \(\nu\cap(\alpha\cap\beta)\)--crossing \(\alpha\) south to north.
Figure 2. The first illustration of the sequence shows the extended core of the band-to-be-added. Its endpoints are on \(\alpha\). The second illustration shows the added band. The third and fourth illustration in the sequence show how the “shear” the intersection point in \(\alpha\cap\beta\) and adjoin, or “splice”, to the endpoints of the extended core of the band. The salient feature is the \(\partial_{1}\) and \(\partial_{3}\) are band connected.
We observe that if \(\partial_{1}\neq\partial_{3}\) then \(\partial(\mathbf{N}\cup B)\) has one less boundary component and the genus of \(\mathbf{N}\cup B\) is increased by one. Moreover, the curve pair, \((\alpha,\beta^{\prime})\), will be a filling pair in the surface obtained by capping off each component of \(\partial(\mathbf{N}\cup B)\) with a disc, i.e. \(S_{g+1}\). Additionally, \(|\alpha\cap\beta|+1=|\alpha\cap\beta^{\prime}|\).
The surgery sequence obviously is generalized by rotation and reflection.
We will refer back to this _shear and splice_ construction numerous times in this note.
**The two \(1\)-handle surgery**--For this surgery we refer the reader to Fig. 3 on how we will alter the initial \(\nu(p)\) neighborhood. Specifically, we glue in two \(1\)-handles: a \(1\)-handle, \(B_{NW/SW}\), that is attached to \(\partial_{2}\) (NW) and \(\partial_{1}\) (SW); and, a \(1\)-handle, \(B_{SW/NE}\), attached to, again, \(\partial_{1}\) (SW) and \(\partial_{3}\) (NE). Next, we take a core arc of each \(1\)-handle and extend them into \(\nu(p)\) so as to create a single arc, \(\gamma\), that is attached to \(\alpha\) on the north (south) side of the west (east) portion in \(\nu(p)\). The blue arc in the left illustration of Fig. 3 corresponds to \(\gamma\). Note that at this stage \(\alpha\cup\beta\cup\gamma\) is a graph in \(\mathbf{N}\cup B_{NW/SW}\cup B_{SW/NE}\) having \(|\alpha\cap\beta|+1\) 4-valent vertices and two \(3\)-valent vertices.
Finally, we shear \(\beta\) at the point \(p\in\alpha\cap\beta\) to create two \(3\)-valent vertices. As with our first surgery, we will then have four \(3\)-valent vertices in succession on \(\alpha\). The right illustration of Fig. 3 shows the realignment of these four vertices creating two new \(4\)-valent vertices and a new \(\beta^{\prime}\) curve by splicing into \(\beta\) the extended core arc. As with our first surgery, the two new vertices of \(\beta^{\prime}\) are intersections with \(\alpha\) that are consistent with the manner of intersection of our original point \(p\)--crossing \(\alpha\) south to north. Thus, again we have a shear and splice construction, going from \(\beta\) to \(\beta^{\prime}\).
If we assume that \(\partial_{1},\partial_{2},\partial_{3}\) are all distinct boundary curves of \(\mathbf{N}\) then \(|\partial(\mathbf{N}\cup B_{NW/SW}\cup B_{SW/NE})|=|\partial\mathbf{N}|-2\). Thus, the curve pair, \((\alpha,\beta^{\prime})\), will be a filling pair in the surface obtained by capping off each component of \(\partial(\mathbf{N}\cup B_{NW/SW}\cup B_{SW/NE})\) with a disc, i.e. \(S_{g+2}\). Additionally, \(|\alpha\cap\beta|+2=|\alpha\cap\beta^{\prime}|\).
Finally, both the surgery sequences are generalized by rotation and reflection.
### Constructing minimal coherent filling pairs
With our two surgeries in hand we are now in a position to construct minimal coherent intersecting filling pairs for genus, \(g\geq 3\).
As stated at the beginning of § 2, we start with a filling pair on \(S_{1}\) whose curves intersect \(g\) times. Again, \(\alpha\) is a \(\langle 0,1\rangle\) curve and \(\beta\) is a \(\langle g,1\rangle\) curve. We give an orientation to \(\alpha\) and label the \(g\) intersection points, \(\{p_{1},p_{2},\cdots,p_{g}\}=\alpha\cap\beta\), such that the cyclic order of the points on \(\alpha\) corresponds to the cyclic order given by the indices of the \(\nu(p_{i})\)-labels. Next we orient \(\beta\) similarly--traversing \(\beta\), the \(i^{\text{th}}\) intersection point, \(mod(g)\), is \(p_{i}\).
Figure 3. Banding three boundary components with one arc
Our construction requires that we consider the cases when \(g\) is odd and even separately. The top illustration in Fig. 4 has \(g=3\) and is representative of the cases having \(g\) odd. The bottom illustration of Fig. 4 has \(g=4\) and is representative of the cases having \(g\) even.
Referring to Fig. 4, it is convenient to represent the \(\alpha\) curve by a horizontal line segment which has its left and right endpoints identified. Then we can represent the \(\beta\) curve by \(g\) vertical line segments, each one of which intersects our \(\alpha\) representation once at its midpoint. Assigning labels--1 through \(g\), left to right--to the top endpoints of our \(g\) vertical segments and labels, \(g\) then 1 through \(g-1\), to the bottom ends of the vertical segments, we realize \(\beta\) by a gluing that matches the top endpoint labels with the bottom endpoint labels.
It is also helpful to assign labels, \(p_{1}\) through \(p_{g}\), to the points of intersection of the horizontal \(\alpha\) segment with the vertical segments--\(p_{i}\) will be in the vertical segment having \(i\) as a top endpoint label. Next, when we consider a regular neighborhood, \(\mathbf{N}\), of \(\alpha\cup\beta\subset S_{1}\), near \(p_{i}\) we have the four "compass" boundary curves, \(NE_{i},NW_{i},SW_{i},SE_{i}\), where, due to the indexing scheme for connecting the labels of the vertical segments, \(NE_{i}=NW_{i+1},SE_{i}=SW_{i+1},NW_{i}=SE_{i}\). To help the reader with this identification, in Fig. 4 we have distinguished the components of \(\partial\mathbf{N}\) by a color assignment. The reader should observe that \(|\partial\mathbf{N}|=g\).
The key issue is when a "1-handle attaching scheme" for \(\partial\mathbf{N}\) results in a surface with one boundary. (In § 5 we will expand and make precise our notion of "scheme".) To that end we define an _attaching graph or A-graph_, \(G\). The vertices of \(G\) are the components of \(\partial\mathbf{N}\). And, two vertices share an edge if they share the attaching ends
Figure 4. The top illustration has \(g=3\) and is representative of the odd case. The bottom illustration has \(g=4\) and is representative of the even case. The horizontal segments in each has the right/left endpoints identified and corresponds to the \(\alpha\) curve. The labels on the endpoints of the vertical segments correspond to the identification of their endpoints so as to form the \(\beta\) curve.
of a specified \(1\)-handle. Then \(G\) will have \(g\) vertices and \(g-1\) edges. We then have the following lemma.
**Lemma 2**.: _Given an attaching scheme of \((g-1)\)\(1\)-handles to \(\partial\mathbf{N}\), the resulting surface will have exactly one boundary component if and only if the \(A\)-graph, \(G\), is a connected tree._
The proof of this lemma will be delayed until § 3.
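For readers who wish to experiment, Lemma 2 lends itself to a mechanical check. In the hypothetical Python sketch below (our own illustration, not part of the paper's argument), boundary components of \(\mathbf{N}\) are numbered \(0,\dots,g-1\) and each \(1\)-handle is a pair of such numbers; a graph with \(g\) vertices and \(g-1\) edges is a tree exactly when no edge closes a cycle, which union-find detects:

```python
def a_graph_is_tree(num_boundaries, handles):
    """Check Lemma 2's criterion: with num_boundaries - 1 handles, the
    resulting surface has one boundary iff the A-graph is a connected tree.

    num_boundaries : number of components of the boundary of N (the vertices)
    handles : list of (i, j) pairs; each 1-handle joins boundary i to boundary j
    """
    if len(handles) != num_boundaries - 1:
        return False
    parent = list(range(num_boundaries))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i, j in handles:
        ri, rj = find(i), find(j)
        if ri == rj:  # this edge closes a cycle, so G is not a tree
            return False
        parent[ri] = rj
    return True

print(a_graph_is_tree(4, [(0, 1), (1, 2), (2, 3)]))  # True: a path is a tree
print(a_graph_is_tree(4, [(0, 1), (1, 2), (0, 2)]))  # False: contains a cycle
```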
We now give two schemes--one for \(g\) odd and one for \(g\) even--for attaching \(1\)-handles to \(\mathbf{N}\); both utilize the \(1\)-handle surgeries of § 2.1.
**Case where \(g\) is odd.** In a neighborhood of each intersection point, \(p_{i},\ 2\leq i\leq g\), we perform a single \(1\)-handle surgery attaching \(SW_{i}\) to \(NE_{i}\). It is readily observed that the graph, \(G\), is a linear tree. (The reader may wish to consult the top of Fig. 4.) Thus, by Lemma 2 the resulting surface has one boundary component and is of genus \(g\).
As previously observed, the resulting filling pair will still have coherent intersection. \(\Box\)
**Case where \(g\) is even.** In a neighborhood of \(p_{1}\) we perform the two \(1\)-handle surgery: attaching a \(1\)-handle between \(NW_{1}\) and \(SW_{1}\); and, between \(NW_{1}\) and \(SE_{1}\). Then, in a neighborhood of each intersection point, \(p_{i},\ 4\leq i\leq g\), we perform a single \(1\)-handle surgery attaching \(SW_{i}\) to \(NE_{i}\). Again, it is readily observed that the associated graph, \(G\), is a linear tree. (The reader may wish to consult the bottom of Fig. 4.) And, Lemma 2 again gives us that the resulting surface has one boundary component and is of genus \(g\).
And again, the resulting filling pair will still have coherent intersection. \(\Box\)
Based upon the above two surgery schemes we can state the following result.
**Theorem 3**.: _For genus \(g\geq 3\), we can create minimal coherent filling pairs utilizing the two \(1\)-handle surgeries of § 2.1._
By now the reader may have realized that there are other choices one may make for attaching \(1\)-handles, shearing vertices and splicing in the extended handle cores so as to obtain a single boundary curve and a new \(\beta^{\prime}\). In the next sections we investigate other such choices.
**Remark 4**.: For \(g=2\), if we attempt to attach a single \(1\)-handle to \(\partial\mathbf{N}\) using the single \(1\)-handle surgery we "run out of room". That is, the associated graph, \(G\), will not be a connected tree since both points, \(p_{1},p_{2}\subset\alpha\cup\beta\subset\mathbf{N}\), are adjacent to just the two boundary curves of \(\partial\mathbf{N}\). Thus, we cannot realize a filling pair such that \(|\alpha\cap\beta|=2\cdot g-1=2\cdot 2-1=3\). This "failure to construct" is consistent with the fact that for genus \(2\) we need \(|\alpha\cap\beta|=4\).
## 3. \(1\)-handle attaching schemes and filling pairs
We now supply the proof of our previously used lemma.
Proof of Lemma 2.: First, we will assume that the resulting surface has exactly one boundary component. We will argue that \(G\) must be a tree.
We observe that our definition of the graph \(G\) is really independent of the filling pair and only dependent on the surface type of \(\mathbf{N}\). That is, \(\mathbf{N}\) is homeomorphic to \(S_{1,0,g}\), a compact surface of genus one having no punctures (or marked points) and
\(g\) boundary components. By attaching a \(1\)-handle to any surface with boundary we either increase by one or decrease by one the number of boundary components of the resulting surface. To do the former (latter), both ends of the \(1\)-handle must be attached to the same (different) boundary component(s).
With the above in mind, we take \(\hat{g}\) to be the minimal value for which the lemma is not true. Then for \(\mathbf{N}\cong S_{1,0,\hat{g}}\), there is an attaching scheme of \((\hat{g}-1)\) handles on the \(\hat{g}\) boundary components that produces a single boundary curve, but the associated A-graph is not a tree.
We next take a maximal sub-collection of handles that results in an A-graph, \(G^{\prime}\), such that each component of \(G^{\prime}\) is a tree. (By assumption this sub-collection has fewer than \((\hat{g}-1)\) handles.) The cardinality of our sub-collection of handles is \((\hat{g}-|G^{\prime}|)\). And, there are \(|G^{\prime}|-1\) remaining handles to attach. Moreover, each component accounts for one boundary component of the resulting sub-surface, i.e. there are \(|G^{\prime}|\) boundary components.
If we now attach one of the handles not in our maximal sub-collection, it must result in a component of our A-graph being not a tree. This implies that both ends of this handle are attached to the same boundary component. But, this is not possible. We are in the situation where we have a surface with \(|G^{\prime}|(<\hat{g})\) boundary curves and \(|G^{\prime}|-1\) handles to be attached, resulting in a single boundary component. By the assumption that \(\hat{g}\) is minimal for realizing a counterexample, we have a contradiction.
For the other direction of our lemma we proceed inductively. If there are initially two components of \(\partial\mathbf{N}\), attaching a \(1\)-handle between them will produce a single boundary component and the associated \(G\) is a tree.
Now consider the associated graph, \(G\), coming from an attachment scheme of \((g-1)\) 1-handles to the \(g\) boundary components of \(\mathbf{N}\). And, assume that \(G\) is a connected tree. We use \(\mathbf{N}^{\prime}\) to denote the resulting surface and we observe that \(\mathbf{N}\) is naturally seen as a sub-surface in \(\mathbf{N}^{\prime}\). Moreover, we can obtain \(\mathbf{N}\) from \(\mathbf{N}^{\prime}\) by deleting the open sets in \(\mathbf{N}^{\prime}\) that correspond to the "interior" of the \(1\)-handles--homeomorphically equivalent to \((0,1)\times[0,1]\).
We then have a similar behavior to that described in the first half of our argument. By deleting the interior of a \(1\)-handle from any surface with boundary we either increase or decrease by one the number of boundary components of the resulting surface. To do the former (latter), both components of \(\{1\text{-handle}\}\cap\partial\mathbf{N}^{\prime}\) must be on the same (different) boundary component(s) of \(\mathbf{N}^{\prime}\). Since the deletion of \((g-1)\) interiors of \(1\)-handles in \(\mathbf{N}^{\prime}\) produces a surface with \(g\) boundary components, by \(G\) being connected--every boundary of \(\mathbf{N}\) has at least one \(1\)-handle attached--we conclude that \(|\partial\mathbf{N}^{\prime}|=1\).
As previously observed, the \(1\)-handle attaching scheme of Theorem 3 is not unique in that one can readily construct other \(1\)-handle attaching schemes whose associated graph \(G\) is a tree. In Fig. 5 we offer such an example.
_Example 5_ (An attaching scheme for \(S_{6}\).).: Initially, the \(\beta\) curve is a \(\langle 6,1\rangle\) curve on \(S_{1}\) with \(\alpha\) again being the \(\langle 0,1\rangle\) curve. As before, we will have \(\{p_{1},\cdots,p_{6}\}=\alpha\cap\beta\) which we indicate in the top illustration of Fig. 5. To reduce the clutter we do not depict the vertical arcs associated with \(\beta\). Finally, we double-label the boundary components of \(\partial\mathbf{N}\) with numeric-colored labels.
We depict an oriented blue arc, \(\gamma\), that has endpoints in \(\alpha\) to the left and right of the point, \(p_{2}(\subset\alpha\cap\beta)\). We will use \(\gamma\) for a scheme of attaching five \(1\)-handles to
the components of \(\partial\mathbf{N}\). Specifically, \(\gamma\) is the union of five extended \(1\)-handle cores. These five extended cores have their endpoints in \(\alpha\): one to the right of \(p_{1}\); one to the right of \(p_{3}\); one to the left of \(p_{6}\); and, the two endpoints that are to the left and right of \(p_{2}\).
The last two listed points, the endpoints of \(\gamma\), are of particular interest. They are positioned on \(\alpha\) so that we can splice \(\gamma\) into \(\beta\) by a shearing of \(\beta\) at \(p_{2}\) followed by a reconnecting of endpoints as previously depicted in Fig.'s 2 & 3. The resulting curve will be our new \(\beta^{\prime}\). Returning to the orientation assignment of \(\gamma\), it is consistent with the orientation of \(\beta\) that we have been assigning--edges of \(\beta\) are depicted as coming into \(\alpha\) from below and going out of \(\alpha\) from above. Thus, the new \(\beta^{\prime}\) will have coherent intersection with \(\alpha\).
The bottom illustration of Fig. 5 depicts the associated A-graph, \(G\). (We leave it to the reader to check the validity of \(G\).) Since \(G\) is a tree, by our Lemma 2 we conclude that this scheme for attaching \(1\)-handles yields a minimal coherent filling pair for \(S_{6}\). Moreover, the extended cores of the \(1\)-handles inherit an orientation from \(\gamma\) that results in giving each edge of \(G\) an orientation, e.g. the core of the \(1\)-handle goes from the 1/red boundary curve to the 6/black boundary curve giving us an edge in \(G\) going from the 1/red vertex to the 6/black vertex. \(\diamond\)
From this example we see that an attaching scheme for a collection of \(1\)-handles can be described by specifying a disjoint collection of oriented arcs, \(\{\gamma_{1},\cdots,\gamma_{n}\}\), that have their endpoints on our initial \(\alpha\) curve--a \(\langle 0,1\rangle\) curve on \(S_{1}\)--and intersect \(\alpha\) in a coherent manner that is consistent with that of \(\beta\)--a \(\langle g,1\rangle\) curve on \(S_{1}\). Then, \(1\)-handles are attached to \(\mathbf{N}\) so as to have their extended cores equal \(\cup_{1\leq i\leq n}\gamma_{i}\). Each \(\gamma_{i}\) satisfies the following conditions. (We continue our appeal to the setup: \(\alpha\) and \(\beta\) curves in \(S_{1}\) coherently intersect and \(\mathbf{N}\) is a regular neighborhood of \(\alpha\cup\beta\). We also
Figure 5. The arc, \(\gamma\), contains the extended core of five 1-handles. There is only one shear and splice which is at \(p_{2}\). It splices \(\gamma\) into \(\beta\) to produce \(\beta^{\prime}\). The lower illustration depicts the associated graph, \(G\), which is a tree.
visually depict the setup in the same manner: \(\alpha\) a single horizontal arc with left/right endpoints identified; and, \(\beta\) as \(g\) vertical segments oriented pointing bottom-to-top.)
1. For \(\partial\gamma_{i}\) there exists an intersection point \(p\in\alpha\cap\beta\) such that on \(\alpha\) these two endpoints are to the immediate left/right of \(p\).
2. \(\gamma_{i}\) intersects \(\alpha\) in a coherent manner.
3. \(\gamma_{i}\) is attached to \(\alpha\) and oriented such that a shear and splice operation at the point \(p\) (previous condition) yields a consistently oriented curve coherently intersecting \(\alpha\).
A collection of \(\gamma\) arcs satisfying the above three conditions is said to be a _\(1\)-handle attaching scheme_.
Performing the shear and splice operation for each \(\gamma\) arc of a \(1\)-handle attaching scheme will yield a curve pair, \((\alpha,\beta^{\prime})\), on some oriented closed surface. Additionally, a corresponding A-graph, \(G\), can be constructed.
We have the following theorem whose proof is now self-evident.
**Theorem 6**.: _Let \(\{\gamma_{1},\cdots,\gamma_{n}\}\) be a \(1\)-handle attaching scheme. Suppose_
\[|\cup_{1}^{n}\gamma_{i}\cap\alpha|-n=g-1.\]
_Then the resulting curve pair is a minimal coherent filling pair for \(S_{g}\) if and only if the A-graph \(G\) is a connected tree._
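Reusing `a_graph_is_tree` from the earlier sketch, the two conditions of Theorem 6 can be packaged into a single check; the flat-argument encoding of a scheme is our own illustrative choice (each arc with \(t\) touchings of \(\alpha\) carries \(t-1\) extended cores, so the total handle count is \(|\cup\gamma_{i}\cap\alpha|-n\)):

```python
def satisfies_theorem_6(alpha_touchings, num_arcs, g, handles):
    """Check the hypotheses of Theorem 6 for a candidate attaching scheme.

    alpha_touchings : |(union of gamma_i) intersect alpha|
    num_arcs        : n, the number of gamma arcs in the scheme
    g               : genus of the intended target surface S_g; the starting
                      pair on the torus then has |boundary of N| = g
    handles : list of (i, j) pairs recording which boundary components of N
              each 1-handle joins (the edges of the A-graph)
    """
    count_ok = (alpha_touchings - num_arcs == g - 1)  # the displayed condition
    return count_ok and a_graph_is_tree(g, handles)   # plus Lemma 2's criterion
```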
## 4. Cases with punctures
We now extend our minimal coherent filling pair construction to orientable finite type surfaces, \(S_{g,p}\), where the genus is \(g\geq 3\) and there are \(p(>0)\) punctures (or marked points). A pair of curves, \(\bar{\alpha},\bar{\beta}\subset S_{g,p}\), is _filling_ if, when \(\bar{\alpha}\) and \(\bar{\beta}\) are positioned to intersect minimally within their isotopy classes, \(S_{g,p}\setminus(\bar{\alpha}\cup\bar{\beta})\) is a collection of discs and once punctured discs. By an Euler characteristic argument, the minimal number of intersections needed for a pair of curves to fill is \(2g+p-2\)[7]. If \((\bar{\alpha},\bar{\beta})\) is a minimal filling pair then \(|S_{g,p}\setminus(\bar{\alpha}\cup\bar{\beta})|=p\). Alternatively, if we consider a regular neighborhood, \(\mathbf{N}(\bar{\alpha}\cup\bar{\beta})\subset S_{g,0,p}\), we would have \(|\partial\mathbf{N}|=p\).
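To spell out that Euler characteristic argument: for a minimal filling pair the complement consists of \(p\) faces (discs and once punctured discs, with each puncture lying in its own face), so the four-valent graph \(\bar{\alpha}\cup\bar{\beta}\) with \(k\) vertices and \(2k\) edges gives

\[2-2g=\chi(S_{g})=k-2k+p\quad\Longrightarrow\quad k=2g+p-2.\]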
We slightly modify our initial setup of \(\alpha,\beta\subset S_{1}\) by requiring \(|\alpha\cap\beta|=g+p-1\). Thus, \(\beta\) is now a \(\langle(g+p-1),1\rangle\) curve on \(S_{1}\) while \(\alpha\) is still a \(\langle 0,1\rangle\) curve. This implies that \(S_{1}\setminus(\alpha\cup\beta)\) has \((g+p-1)\) disc components. We again denote a regular neighborhood of \(\alpha\cup\beta\) by \(\mathbf{N}(\subset S_{1})\).
We define a collection of \(\gamma\) arcs, \(\{\gamma_{1},\cdots,\gamma_{n}\}\), as being a \(1\)-handle attaching scheme in exactly the same manner as that of § 3. As such we can consider the A-graph, \(G\), of an attaching scheme. If \(|G|=p\) with each connected sub-graph component being a tree where exactly one of its vertices corresponds to a \(\partial_{i}\) boundary component of \(\mathbf{N}\), then the resulting curve pair, \((\alpha,\beta^{\prime})\), will be filling. And additionally, if
\[|\cup_{1}^{n}\gamma_{i}\cap\alpha|-n=g-1,\]
the resulting pair, \((\alpha,\beta^{\prime})\), will be minimal, i.e. \((g+p-1)+(g-1)=2g+p-2\). Since our third condition for an attaching scheme requires that the shear and splice operation produce a \(\beta^{\prime}\) that coherently intersects \(\alpha\), we have our conditions for constructing minimal coherent filling pairs for \(S_{g,p},g\geq 3,p>0\).
**Theorem 7** (Also see [7], Theorem 1.3).: _Minimal coherent filling pairs for \(S_{g,p},g\geq 3,p>0\), exist for all such \(g\) and \(p\)._
Proof.: Utilizing only the two \(1\)-handle surgeries of § 2, we take any \(1\)-handle attaching scheme of \(\gamma\) arcs that yields a minimal coherent filling pair for \(S_{g+p-1}\). We throw away any \(p-1\) edges of the associated tree graph, \(G\), to produce a non-connected graph, \(G^{\prime}\), which has \(p\) sub-graphs. We restrict our choices of discarded edges to those that correspond to a single \(1\)-handle surgery and the \(B_{NW/SW}\) handle of the two \(1\)-handle surgery. The reader should observe that throwing away the \(B_{NW/SW}\) alters the two \(1\)-handle surgery to a single \(1\)-handle surgery.
Thus, by construction there is a sub-collection of our original collection of \(\gamma\) arcs that yields an attaching scheme producing a surface of genus, \(g\), with \(p\) boundary components. Moreover, \(G^{\prime}\) is the associated A-graph and this surface will be a regular neighborhood of the four-valent graph, \(\alpha\cup\beta^{\prime}\), of the resulting coherent filling pair. Now by capping off each of the \(p\) boundary components with a once punctured disc, we obtain \(S_{g,p}\). Necessarily, \(|\alpha\cap\beta^{\prime}|=2g+p-2\).
One last subtle observation. We can designate one arbitrary vertex from each component of \(G^{\prime}\) as corresponding to one of the \(p\) designated boundary components, \(\partial_{i}\). Placing a puncture in each associated disc of \(S_{1}\setminus(\alpha\cup\beta)\), we can then have this \(S_{1,p}\) with its coherent filling pair, \((\alpha,\beta)\), as the initial starting setup. The previous sub-collection of \(\gamma\) arcs will then be an attaching scheme that yields \((\alpha,\beta^{\prime})\) as a minimal coherent filling pair in a \(S_{g,p}\) surface.
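The edge-discarding step in this proof can also be checked mechanically: deleting any \(p-1\) edges from a tree leaves exactly \(p\) components, since a forest on \(v\) vertices with \(e\) edges has \(v-e\) components. A small illustrative Python sketch, again not part of the paper's argument:

```python
from itertools import combinations

def components(vertices, edges):
    """Count connected components by repeated merging (simple union-find)."""
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(v) for v in vertices})

# A path tree on 6 vertices; dropping any p-1 = 2 edges leaves p = 3 subtrees.
tree = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
p = 3
assert all(
    components(range(6), [e for e in tree if e not in set(drop)]) == p
    for drop in combinations(tree, p - 1)
)
```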
|
2305.06935 | \emph{Ab initio} calculations of structural stability, thermodynamic and
elastic properties of Ni, Pd, Rh, and Ir at high pressures | The paper presents results of a comprehensive study from first principles
into the properties of Ni, Pd, Rh, and Ir crystals under pressure. We
calculated elastic constants, phonon spectra, isotherms, Hugoniots, sound
velocities, relative structural stability, and phase diagrams. It is shown that
in nickel and palladium under high pressures ($>$0.14 TPa) and temperatures
($>$4 kK), the body-centered cubic structure is thermodynamically most stable
instead of the face-centered cubic one. Calculated results suggest that nickel
under Earth-core conditions ($P$$\sim$0.3 TPa, $T$$\sim$6 kK) have a bcc
structure. No structural changes were found to occur in Rh and Ir under
pressures to 1 TPa at least. The paper also provides estimations for the
pressure and temperature at which the metals of interest begin to melt under
shock compression. | N. A. Smirnov | 2023-05-11T16:13:24Z | http://arxiv.org/abs/2305.06935v1 | _Ab initio_ calculations of structural stability, thermodynamic and elastic properties of Ni, Pd, Rh, and Ir at high pressures
###### Abstract
The paper presents results of a comprehensive study from first principles into the properties of Ni, Pd, Rh, and Ir crystals under pressure. We calculated elastic constants, phonon spectra, isotherms, Hugoniots, sound velocities, relative structural stability, and phase diagrams. It is shown that in nickel and palladium under high pressures (\(>\)0.14 TPa) and temperatures (\(>\)4 kK), the body-centered cubic structure is thermodynamically most stable instead of the face-centered cubic one. Calculated results suggest that nickel under Earth-core conditions (\(P\)\(\sim\)0.3 TPa, \(T\)\(\sim\)6 kK) have a bcc structure. No structural changes were found to occur in Rh and Ir under pressures to 1 TPa at least. The paper also provides estimations for the pressure and temperature at which the metals of interest begin to melt under shock compression.
## I Introduction
Persistent interest in the study of material properties under high pressures and temperatures is driven, on the one hand, by progressive advances in the experimental methods used to study materials under these conditions and, on the other hand, by the discovery of interesting physical effects, for example, superconductivity at near-room temperatures. These studies also help judge the state of materials in planetary interiors and gain a better understanding of processes in matter in the interest of inertial confinement fusion. Worthy of note is also the fact that the gradual growth of pressure (up to 1 TPa) accessible in experiments on static compression in diamond anvil cells, both at room and at high temperatures [1; 2], raises the issue of an appropriate pressure standard for strongly compressed matter. Some noble metals that were earlier used as the pressure standard have recently been found to exhibit polymorphism [3; 4; 5; 6; 7; 8] under high pressures and temperatures, thus limiting their use for that purpose.
_Ab initio_ calculations into the behavior of Ni, Pd, Rh, and Ir under extreme conditions are relatively scarce. The most thoroughly studied is nickel because its behavior under high \(P\) and \(T\) is of interest in research on planetary interiors. Its structural stability under pressure and zero temperature was studied in theoretical works [9; 10; 11]. The authors of papers [9; 10] performed calculations to pressures about 0.3 TPa and did not discover any structural changes. Much higher pressures were reached in calculations [11], which showed that at \(P\) above 6.3 TPa and \(T=0\) K, face-centered cubic Ni must transform into a hexagonal close-packed structure. In turn, static experiments [12] did not reveal any structural transformations up to 0.37 TPa at room temperature.
Phonon spectra, thermodynamic and magnetic properties of nickel up to 0.1 TPa were calculated in paper [13]. The study showed that the magnetic fcc phase remained dynamically stable and its magnetic moment gradually reduced as compression grew. The evolution of the magnetic properties of Ni at higher pressures was investigated theoretically and experimentally in Refs. [11; 14; 15]. Experiments [14; 15] suggest that the metal remains ferromagnetic at least to \(P\)\(=\)0.26 TPa. Calculations [11] estimate the pressure at which Ni completely loses its magnetic properties to be about 1 TPa.
Special attention is given to the melting curve of nickel. Data from early static experiments in laser-heated diamond anvil cells [16; 17] (laser-speckle method for detection of melting) disagree with _ab initio_ calculations [18; 19] and shock data [20]. The temperatures of melting at \(P\) above 20 GPa reported in Refs. [18; 19; 20] are markedly higher than in experiments [16; 17]. But new results of static measurements [21; 22] taken with an experimental technique (X-ray diffraction, XAS) different from that used in Refs. [16; 17] show excellent agreement with data from [18; 19; 20]. That is, we can see that the optical method used in measurements [16; 17] does not give correct values for the melting temperature of nickel at high pressures.
The relative stability of Pd and Rh was studied from first principles in papers [23; 24]. These calculations show the fcc phase of Pd and Rh to be thermodynamically most stable at least to pressures about 1 TPa and \(T\)\(=\)0 K. Experimental studies of structural stability at room temperature below 80 GPa also show no phase transitions [25; 26; 27]. The melting curves of Pd and Rh were measured in papers [17; 28] under rather low pressures \(<\) 30 GPa.
Iridium at high pressures (up to 0.26 TPa) and room temperature was studied in static experiments [29; 30; 31]. In experiment [29], additional peaks were found in the diffraction pattern at relatively low \(P\), which pointed, in the authors' view, to a transition into a new hexagonal structure. But later experiments did not confirm that [30; 31]. It seems that no structural transitions occur in iridium at room temperature to pressures about 0.26 TPa. Polymorphism in Ir was also not seen in laser-heated diamond anvil cell experiments at \(T\)\(\leq\)3.1 kK and \(P\)\(<\)50 GPa [32]. Experimental data on the melting curve of Ir under pressure are very poor. The only estimation for its melting point at about 40 GPa is provided in experiments [32].
The relative stability of Ir phases was theoretically studied in papers [33; 34]. Calculations [33] did not predict any structural changes to occur under pressures below \(\sim\)0.1 TPa, but the authors of article [34] found a thermodynamic stability region for the random-stacking hexagonal close-packed structure (rhcp) at high pressures and above-room temperatures. It |
2308.09422 | Deeply-virtual and photoproduction of mesons at higher-order and
higher-twist | Both deeply-virtual and photoproduction of mesons offer promising access to
generalized parton distributions and complementary description of different
kinematical regions. The higher-order contributions offer stabilizing effect
with respect to the dependence on renormalization scales, while higher-twist
effects have been identified as especially important in the case of the
production of pseudo-scalar mesons. This was confirmed by recent evaluation of
the complete twist-3 contribution to $\pi$ and $\eta$/$\eta'$ photoproduction
and its confrontation with experimental data. | K. Passek-K. | 2023-08-18T09:41:53Z | http://arxiv.org/abs/2308.09422v1 | # Deeply-virtual and photoproduction of mesons at higher-order and higher-twist +
###### Abstract
Both deeply-virtual and photoproduction of mesons offer promising access to generalized parton distributions and complementary description of different kinematical regions. The higher-order contributions offer stabilizing effect with respect to the dependence on renormalization scales, while higher-twist effects have been identified as especially important in the case of the production of pseudo-scalar mesons. This was confirmed by recent evaluation of the complete twist-3 contribution to \(\pi\) and \(\eta/\eta^{\prime}\) photoproduction and its confrontation with experimental data.
## 1 Introduction
Historically, most of the information about high-energy nucleon structure came from deeply inelastic scattering (DIS). From DIS data one extracts the parton distribution functions (PDFs), which give the probabilities that a certain parton is found in a nucleon with a certain longitudinal momentum fraction of the nucleon momentum. Through PDFs the one-dimensional structure of the nucleon is thus revealed. The hard exclusive processes offer insight into the transverse distribution of partons, and the corresponding generalized parton distributions (GPDs) give access to the nucleon's 3D structure. GPDs are functions of three variables: \(x\), the parton's "average" longitudinal momentum fraction, \(\xi\), the longitudinal momentum transfer (skewness parameter), and \(t\), momentum transfer squared, while their evolution with energy is encapsulated in the dependence on the factorization scale. At leading twist-2 there are eight quark GPDs and eight gluon GPDs classified according to different quantum numbers (parity, chirality), as well as different GPDs for different quark flavours. To reveal their form is thus not an easy task, and information from several processes has to be combined.
For the description of the hard exclusive processes one employs the handbag mechanism in which only one quark from the incoming nucleon and one from the outgoing nucleon participate in the hard subprocess while all other partons are spectators. The simplest and well-investigated process to which this approach has been applied is Compton scattering \(\gamma^{(*)}N\to\gamma N\), while meson electroproduction \(\gamma^{(*)}N\to MN^{\prime}\) represents the natural extension and offers access to quark flavours. A prerequisite for the handbag mechanism is the presence of at least one large scale, which allows for the use of perturbative expansion in the strong coupling constant and the power, i.e., twist, expansion. Two kinematic regions have been extensively studied: the deeply virtual (DV) region, where the virtuality \(Q^{2}\) of the incoming photon is large, and the momentum transfer \((-t)\) from the incoming to the outgoing nucleon is small; and the wide-angle (WA) region, where \((-t)\), \((-u)\), and \(s\) are all large, while \(Q^{2}\) is smaller than \((-t)\) (\(Q^{2}=0\) in the case of photoproduction). Factorization proofs exist for all orders for DV Compton scattering (DVCS) [1] and DV meson production (DVMP) [2], with the process amplitudes factorizing into hard perturbatively calculable subprocess amplitudes and GPDs that encapsulate the soft hadron-parton transitions and the hadron structure. However, general factorization proofs are still lacking for WA processes, although it has been shown that factorization holds to next-to-leading order in the strong coupling for WA Compton scattering (WACS) [3, 4] and to leading order for WA meson production (WAMP) [5]. It is argued that in the symmetric frame where skewness is zero, the amplitudes can be represented as a product of subprocess amplitudes and form factors that represent \(1/x\) moments of GPDs at zero-skewness.
Both DVCS and WACS were widely investigated in the last decades and the handbag factorization achieved a good description of the experimental data. The leading twist-2 description of DV vector meson production only considers the contributions of longitudinally polarized photons, specifically \(\gamma_{L}^{*}N\to V_{L}N^{\prime}\). This description has been observed to be in relatively good agreement with the current experimental data (see [6, 7] and the references therein). However, there is still a lack of systematic separation between longitudinal and transverse experimental data. The contributions of transversely polarized photons \(\gamma_{T}^{(*)}N\to V_{L,T}N^{\prime}\) have also been investigated by including the twist-3 corrections to the meson state [8, 9]. On the other hand, the experimental data for DV pion production [10, 11, 12, 13] suggest the high significance of transversely polarized photons, which are not accounted for by the leading twist-2 \(\gamma_{L}^{*}N\to\pi N^{\prime}\) contributions. As in the vector meson case, a twist-3 calculation has been proposed, which incorporates twist-2 chiral-odd, i.e., transversity (parton helicity flip), GPDs and twist-3 pion corrections. The calculation including only the twist-3 2-body pion Fock component (Wandzura-Wilczek approximation) has already
achieved a successful agreement with the data [14]. Experimental data for WA pion production [15, 16, 17] also indicate that the twist-2 contributions [5] are not sufficient. But unlike in DVMP, the twist-3 contribution to pion photoproduction was found to vanish in the commonly used Wandzura-Wilczek approximation. In [19], both 2- and 3-body twist-3 Fock components of \(\pi^{0}\) were considered and successfully fitted to CLAS data [17]. This work was extended to photoproduction of \(\eta\) and \(\eta^{\prime}\) mesons [20] and WA electroproduction of \(\pi^{\pm},\pi^{0}\) [21]. The application of the latter analytical results for the subprocess amplitudes to the DVMP subprocess amplitudes is straightforward, and the phenomenological analysis is underway.
The DV and WA regions enable complementary access to GPDs at small and large (\(-t\)), respectively. A vast amount of experimental data needs to be confronted with reliable theoretical predictions, which should include higher-order perturbative predictions as well as higher-twist contributions. Here, we provide a brief overview of some recent developments.
## 2 Deeply-virtual meson production at twist-2 and NLO
The DVMP amplitude \(\gamma^{*}N\to MN^{\prime}\) can be expressed through so-called transition form factors
\[{}^{a}{\cal T}(\xi,t,Q^{2})=\int_{-1}^{1}\frac{{\rm d}x}{2\xi}\;\int_{0}^{1}{\rm d}u\;T^{a}(x,\xi,u,\mu)\;F^{a}(x,\xi,t,\mu)\;\phi(u,\mu) \tag{1}\]
with \(a\) denoting quark and gluon contributions, and \(u\) the longitudinal momentum fraction of the meson's parton. The factorization scale \(\mu\) separates the short-distance dynamics, represented by the subprocess amplitudes \(T^{a}\), from the long-distance dynamics represented by the hadron wave functions: the GPD \(F^{a}\) and the meson distribution amplitude (DA) \(\phi\).
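As a rough numerical illustration of the structure of Eq. (1), the sketch below evaluates the double convolution with the asymptotic DA \(\phi(u)=6u(1-u)\), an illustrative toy GPD shape, and a schematic LO pole kernel. All three toy inputs are our own placeholders for exposition and are not the models used in the analyses cited here.

```python
import numpy as np

def phi_asymptotic(u):
    # asymptotic meson distribution amplitude, phi(u) = 6 u (1 - u)
    return 6.0 * u * (1.0 - u)

def toy_gpd(x, xi, t):
    # illustrative placeholder shape, NOT a realistic GPD model
    return (1.0 - np.abs(x)) ** 3 * np.exp(2.0 * t)

def hard_kernel_lo(x, xi, u, eps=1e-3):
    # schematic LO kernel with the characteristic 1/(u (xi - x - i*eps)) pole
    return 1.0 / (u * (xi - x - 1j * eps))

def transition_ff(xi, t, nx=801, nu=201):
    # crude Riemann-sum evaluation of Eq. (1): x in [-1, 1], u in [0, 1]
    x = np.linspace(-1.0 + 1e-6, 1.0 - 1e-6, nx)
    u = np.linspace(1e-6, 1.0 - 1e-6, nu)
    X, U = np.meshgrid(x, u, indexing="ij")
    integrand = hard_kernel_lo(X, xi, U) * toy_gpd(X, xi, t) * phi_asymptotic(U) / (2.0 * xi)
    return integrand.sum() * (x[1] - x[0]) * (u[1] - u[0])

tff = transition_ff(xi=0.1, t=-0.2)
print(f"toy transition form factor at xi=0.1, t=-0.2: {tff:.4f}")  # complex: Re + Im parts
```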
Transition form factors \({}^{a}{\cal T}\) have a similar role in DVMP to Compton form factors in DVCS, but they additionally depend on the meson DA, i.e., the meson structure, making the analysis of the process both more challenging and potentially more rewarding. In contrast to DVCS, DVMP enables easy access to GPDs of different quark flavours, and offers the natural distinction of GPDs of different parity: at twist-2 chiral-even GPDs \(H^{q}\), \(E^{q}\) contribute to the production of longitudinally polarized vector mesons (\(V_{L}\)) and scalar (\(S\)) mesons, while \(\widetilde{H}^{q}\) and \(\widetilde{E}^{q}\) appear in the production of pseudoscalar (\(P\)) and axial-vector (\(A_{L}\)) mesons. Moreover, the contribution of gluon GPDs \(H^{g}\), \(E^{g}\) (\(\widetilde{H}^{g}\), \(\widetilde{E}^{g}\)) to the production of neutral \(V_{L}\) (\(A_{L}\)) mesons is more significant since, unlike in DVCS, they contribute already at the leading order. Therefore, their form is phenomenologically more accessible.
The twist-2 DVMP subprocess amplitudes \(\gamma_{L}^{*}q\to(q\bar{q})q\) and \(\gamma_{L}^{*}g\to(q\bar{q})g\)
are calculated perturbatively order by order in the strong coupling constant
\[T^{a}(x,\xi,u,\mu)=\frac{\alpha_{s}(\mu_{R})}{4\pi}T^{a(1)}(x,\xi,u)+\frac{\alpha_{s}^{2}(\mu_{R})}{(4\pi)^{2}}T^{a(2)}(x,\xi,u,\mu_{R},\mu)+\cdots \tag{2}\]
and they have been determined to next-to-leading order (NLO) for flavour non-singlet and singlet \(P\) and \(V_{L}\) mesons [22, 23, 24], as well as for the (crossed) production of \(S\) and \(A_{L}\) mesons [25, 24]. Predictions at finite order are inherently dependent on the renormalization scale \(\mu_{R}\) and scheme, introducing additional theoretical uncertainty. Therefore, the inclusion of higher-order corrections is crucial to reduce this dependence and stabilize predictions. Although meson DAs (\(\phi\)) and GPDs (\(F^{a}\)) are intrinsically nonperturbative quantities, their evolution can be calculated perturbatively. The complete closed form is known to NLO [26], and more recently, NNLO contributions to the evolution kernels have been obtained [27].
The evolution is simpler to implement in the conformal momentum representation. Conformal moments are analogous to Mellin moments in DIS
Figure 1: Relative NLO corrections to the imaginary part of the flavor singlet TFF (solid) broken down into the gluon (dashed), pure singlet quark (dash-dotted) and ‘non-singlet’ quark (dotted) contributions (Ref. [25]).
Figure 2: Skewness ratio for GPD \(H\) (preliminary, K. Kumerički, Transversity 2022).
and represent the moments with respect to the eigenfunctions of the leading order evolution kernels, i.e., with respect to Gegenbauer polynomials \(C_{n}^{3/2}\) and \(C_{n}^{5/2}\) for quarks and gluons, respectively. The convolution over \(x\) and \(u\) in transition form factors (1) is thus replaced by the summation over conformal moments, and consequently the series is summed using the Mellin-Barnes integral over complex conformal moment \(j\) [25]
\[{}^{a}{\cal T}(\xi,t,Q^{2})=\frac{1}{2i}\int_{c-i\infty}^{c+i\infty}\!{\rm d}j\left[i\pm\left\{{\tan\atop\cot}\right\}\left(\frac{\pi j}{2}\right)\right]\,\xi^{-j-1}\left[T_{jk}^{a}(\mu)\stackrel{k}{\otimes}\phi_{k}(\mu)\right]F_{j}^{a}(\xi,t,\mu)\,. \tag{3}\]
This approach has been developed and extensively applied to DVCS [28], and then extended to DVMP. Regardless of whether one considers Compton or transition form factors in momentum fraction (1) or conformal momentum space (3), complete deconvolution is impossible, and GPD access is only possible through different modelling approaches. Dedicated software is now available: PARTONS [29] and Gepard [30] for analysis in momentum fraction and conformal momentum space, respectively. The DV\(V_{L}\)P is included in the latter.
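As a small self-contained check of the conformal-moment language used above, the sketch below numerically projects a DA onto the quark eigenfunctions \(C_{n}^{3/2}\); for the asymptotic DA only the \(n=0\) moment survives. The projection and normalization integrals are evaluated by quadrature rather than quoted analytically.

```python
from scipy.integrate import quad
from scipy.special import eval_gegenbauer

def phi(u):
    # asymptotic DA; its only non-vanishing conformal moment should be n = 0
    return 6.0 * u * (1.0 - u)

def conformal_moment(n):
    # Project phi(u) = 6 u (1-u) * sum_n a_n C_n^{3/2}(2u-1) onto C_n^{3/2}.
    # The Gegenbauer orthogonality weight u(1-u) sits inside the expansion itself.
    proj, _ = quad(lambda u: phi(u) * eval_gegenbauer(n, 1.5, 2.0 * u - 1.0), 0.0, 1.0)
    norm, _ = quad(lambda u: 6.0 * u * (1.0 - u) * eval_gegenbauer(n, 1.5, 2.0 * u - 1.0) ** 2, 0.0, 1.0)
    return proj / norm

for n in range(4):
    print(n, round(conformal_moment(n), 6))  # expect 1.0 for n = 0, then zeros
```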
While there has been a lot of interest in the DVCS process, there are relatively few NLO phenomenological analyses of the DVMP process [22, 31, 25], despite the availability of experimental data. The complete set of \(x\) and \(j\) space analytical results for all meson channels can be found in [25, 24]. The numerical analysis performed there shows that NLO corrections are important and model-dependent (Fig. 1). The effects of LO GPD and DA evolution are significant, and for NLO calculations one should also include NLO evolution. Gluon corrections play a significant role in small-\(\xi\) production of vector mesons, and there may be a need for resummation of the large logarithmic \(\ln(1/\xi)\) terms observed in both the gluon evolution and the gluon coefficient function. Finally, the choice of meson distribution amplitude is found to have a significant impact on the results.
Since GPDs are process-independent quantities, the simultaneous description and global fits of GPDs to DIS, DVCS and DVMP data represent the next necessary step. Through these one hopes to gain additional information on the importance and stability of NLO predictions and the validity of different models. Using the conformal momentum representation [25], the first global fits to DIS, DVCS and DV\(V_{L}\)P small-\(x\) HERA collider data have been performed at LO [32] (\(\chi^{2}/n_{\rm d.o.f}\approx 2\)), and at NLO [33] (Bayesian analysis). The recent NLO analysis using corrected NLO analytical results [24] and the Gepard software shows promising agreement of theory and experiment (\(\chi^{2}/n_{\rm d.o.f}=254.3/231\)) and indicates that a global description of DVCS and DV\(V_{L}\)P is reachable at NLO (Fig. 2).
## 3 Pseudoscalar meson production at higher-twist
The twist-3 LO prediction for the electroproduction of the pseudoscalar meson \(P\), which includes 2- and 3-body meson Fock states, was first calculated for the WA region [21]. Here, we review the findings and their confrontation with experimental data for photoproduction (\(Q^{2}=0\)). The analytical expressions obtained for the subprocess amplitudes can also be applied for the DV\(P\)P analysis (\(t\to 0\)).
The helicity amplitudes for the \(\gamma^{(*)}N\to PN^{\prime}\) process in the WA region can be expressed in terms of the subprocess amplitudes \({\cal H}\) multiplied by the soft form factors \(R_{i}^{P}\) and \(S_{i}^{P}\), which represent \(1/x\)-moments of zero-skewness GPDs, \(\int_{0}^{1}\frac{dx}{x}\,F^{a}(x,t)\). The \(R\)-type form factors are related to the helicity non-flip GPDs \(H\), \(\widetilde{H}\) and \(E\). The \(S\)-type form factors are related to the helicity-flip or transversity GPDs \(H_{T}\), \(\bar{E}_{T}\) and \(\widetilde{H}_{T}\)1.
Footnote 1: The GPDs \(\bar{E}\) and \(\bar{E}_{T}\) and their associated form factors decouple at zero skewness.
The amplitudes \({\cal H}\) correspond to the subprocesses \(\gamma^{(*)}q\to Pq^{\prime}\) and they are calculated using handbag diagrams such as the ones depicted in Fig. 3. The meson \(P\) is replaced by an appropriate 2- or 3-body Fock state. The projector \(\pi\to q\bar{q}\) contributes to the subprocess amplitudes corresponding to the diagrams depicted in Fig. 3a, and its structure is given by
\[{\cal P}_{2}^{P}\sim f_{\pi}\left\{\gamma_{5}\not{p}\,\phi_{\pi}(u,\mu)+\mu_{\pi}(\mu)\gamma_{5}\Big[\,\phi_{\pi p}(u,\mu)-[\ldots]\phi^{\prime}_{\pi\sigma}(u,\mu)+[\ldots]\phi_{\pi\sigma}(u,\mu)\Big]\right\}. \tag{4}\]
The first term in (4) corresponds to the twist-2 part, while the twist-3 part is proportional to the chiral condensate \(\mu_{\pi}=m_{\pi}^{2}/(m_{u}+m_{d})\cong 2\) GeV (at the factorization scale \(\mu_{F}=2\) GeV). This parameter is large, and although the twist-3 cross section for pion electroproduction is suppressed by \(\mu_{\pi}^{2}/Q^{2}\) compared to the twist-2 cross section, for the range of \(Q^{2}\) accessible in current experiments the suppression factor is of order unity2. The 3-body \(\pi\to q\bar{q}g\) projector contributes to the amplitudes corresponding to Fig. 3b,
\[{\cal P}_{3}^{P}\sim f_{3\pi}(\mu)\gamma_{5}[\ldots]\,\phi_{3\pi}(u_{1},u_{2},u_{g},\mu)\,. \tag{5}\]
Figure 3: Generic diagrams for 2- and 3-body subprocess amplitudes.
The helicity non-flip amplitudes are generated by twist-2, while the helicity flip ones are of twist-3 origin.
In addition to twist-2 DA \(\phi_{\pi}\) there are two 2-body twist-3 DAs, \(\phi_{\pi p}\) and \(\phi_{\pi\sigma}\), and 3-body twist-3 DA \(\phi_{3\pi}\). Twist-3 DAs are connected by equations of motion (EOMs). By EOMs and DA symmetry properties, it is possible to express the twist-3 subprocess amplitudes in terms of only two twist-3 DAs, and combine 2- and 3-body contributions. Applying EOMs also results in an inhomogeneous linear first-order differential equation, which can be used to determine \(\phi_{\pi p}\) (and \(\phi_{\pi\sigma}\)) from a known 3-body DA \(\phi_{3\pi}\)[34]3.
Footnote 3: It is important to note that the same gauge must be used consistently for the constituent gluon in the \(q\bar{q}g\) projector and EOMs.
In meson electroproduction, both transverse and longitudinal photons contribute to twist-2 subprocess amplitudes. As expected, the longitudinal contribution vanishes in the photoproduction limit, while in the DVMP limit only longitudinal photons contribute. The general structure of the twist-3 contributions for both transverse and longitudinal photons reads
\[\begin{split}{\cal H}^{P,tw3}&={\cal H}^{P,tw3,q\bar{q}}+{\cal H}^{P,tw3,q\bar{q}g}\\ &=({\cal H}^{P,\phi_{\pi p}}+\underbrace{{\cal H}^{P,\phi_{\pi 2}^{EOM}})+({\cal H}^{P,q\bar{q}g,C_{F}}}_{{\cal H}^{P,\phi_{3\pi},C_{F}}}+{\cal H}^{P,q\bar{q}g,C_{G}})\\ &={\cal H}^{P,\phi_{\pi p}}+{\cal H}^{P,\phi_{3\pi},C_{F}}+{\cal H}^{P,\phi_{3\pi},C_{G}}\,,\end{split} \tag{6}\]
where \({\cal H}^{P,tw3,q\bar{q}}\) is the twist-3 2-body contribution proportional to the \(C_{F}\) color factor, and \({\cal H}^{P,tw3,q\bar{q}g}\) is the twist-3 3-body contribution with \(C_{F}\) and \(C_{G}\) proportional parts. The \(C_{G}\) part is gauge invariant, whereas for \(C_{F}\) contributions, only the sum of 2- and 3-body parts is gauge invariant with respect to the choice of photon or virtual gluon gauge. EOMs are used to obtain this sum, as well as the complete twist-3 contribution expressed through only two twist-3 DAs, \(\phi_{3\pi}\) and \(\phi_{\pi p}\). The twist-3 subprocess amplitude for longitudinal photons vanishes both for photoproduction and DVMP. One finds that for photoproduction \({\cal H}^{P,\phi_{\pi p}}=0\) [19]. For DVMP, \({\cal H}^{P,\phi_{\pi 2}^{EOM}}=0\), and while no end-point singularities are present for \(t\neq 0\), they must be considered in the limit \(t\to 0\) since \({\cal H}^{P,\phi_{\pi p}}\sim\int_{0}^{1}\frac{{\rm d}u}{u}\,\phi_{Pp}(u)\). In [14] the modified hard-scattering picture has been used to regularize the 2-body contributions. With the complete twist-3 contribution now available [21], the analysis in the modified and collinear pictures is underway.
In [19] the cross-section for \(\pi^{0}\) photoproduction has been fitted to the CLAS data [17]. The results are displayed in Fig. 4. The twist-2 prediction lies well below the data. However, by including the twist-3 contributions one obtains reasonable agreement with the experiment. Twist-3 is more important in
the backward hemisphere (\(\theta\) is the c.m.s. scattering angle). In [21], the analysis was extended to \(\pi^{+}\) and \(\pi^{-}\), using only a few available experimental data [15, 16]. In [20], \(\eta\) (preliminary GlueX data) and \(\eta^{\prime}\) photoproduction was studied. A similar behavior in photoproduction cross-sections was observed, except for \(\eta^{\prime}\), where the twist-2 contribution was significant, offering the possibility of determining the 2-gluon twist-2 DA.
For pion electroproduction there are four partial cross sections. In [21] the theoretical predictions were given and the importance of the measurement was stressed. Different combinations of form factors make it possible to extract transversity GPDs (\(F_{T}^{q}\)), which have a large \(-t\) behavior that is important for parton tomography.
In meson photoproduction, spin-dependent observables such as the correlations of the helicities of the photon and either the incoming or outgoing
Figure 4: The cross section for \(\pi^{0}\) photoproduction with twist-3 contributions. Solid (dotted) curve: full (twist-2) result. Dashed curve: full result with fixed renormalization and factorization scale. Data taken from CLAS [17] (open circles) and from SLAC [15] (\(s=10.3\) GeV\({}^{2}\)) (Ref. [21]).
Figure 5: Results for the helicity correlation parameters \(A_{LL}\) and \(K_{LL}\) for \(\pi^{+}\), \(\pi^{-}\) and \(\eta^{\prime}\) photoproduction (Refs. [21, 20]).
nucleon, i.e., \(A_{LL}\) and \(K_{LL}\), offer additional insight that is less sensitive to particular parameters. It can be shown that \(A_{LL}^{P,tw2}=K_{LL}^{P,tw2}\) and \(A_{LL}^{P,tw3}=-K_{LL}^{P,tw3}\), indicating that the measurement of \(A_{LL}\) and \(K_{LL}\) offers a characteristic signature for the dominance of twist-2 or twist-3, similar to the role that the comparison of \(\sigma_{T}\) and \(\sigma_{L}\) has in DVMP. From Fig. 5, it is clear that our numerical results suggest the dominance of twist-3 for large \(\theta\), while twist-2 increases in the forward direction.
## 4 Summary and outlook
Twist-2 NLO contributions to DVMP amplitudes are available and need to be compared with experimental data. The preliminary comparison of vector meson production to data seems satisfactory, but NLO corrections are significant, and the first DIS, DVCS, and DV\(V_{L}\)P fits have been performed. For pseudoscalar meson production, twist-3 contributions dominate, and a complete analysis of 2- and 3-body twist-3 contributions is ongoing. The available twist-2 NLO contributions should also be tested. It is important to note that the choice of meson distribution amplitude significantly affects the DVMP predictions. In WA photoproduction of \(\pi\) mesons, the twist-2 analysis falls short by an order of magnitude. The complete twist-3 contribution has been included, and it was found that the meson's twist-3 contributions dominate for \(\pi\)s and \(\eta\). Future experimental goals include the clear separation of longitudinally and transversely polarized photon contributions.
_Acknowledgements_ This publication is supported by the Croatian Science Foundation project IP-2019-04-9709, by the EU Horizon 2020 research and innovation programme, STRONG-2020 project, grant agreement No 824093.
|
2310.05052 | Accurate battery lifetime prediction across diverse aging conditions
with deep learning | Accurately predicting the lifetime of battery cells in early cycles holds
tremendous value for battery research and development as well as numerous
downstream applications. This task is rather challenging because diverse
conditions, such as electrode materials, operating conditions, and working
environments, collectively determine complex capacity-degradation behaviors.
However, current prediction methods are developed and validated under limited
aging conditions, resulting in questionable adaptability to varied aging
conditions and an inability to fully benefit from historical data collected
under different conditions. Here we introduce a universal deep learning
approach that is capable of accommodating various aging conditions and
facilitating effective learning under low-resource conditions by leveraging
data from rich conditions. Our key finding is that incorporating inter-cell
feature differences, rather than solely considering single-cell
characteristics, significantly increases the accuracy of battery lifetime
prediction and its cross-condition robustness. Accordingly, we develop a
holistic learning framework accommodating both single-cell and inter-cell
modeling. A comprehensive benchmark is built for evaluation, encompassing 401
battery cells utilizing 5 prevalent electrode materials across 168 cycling
conditions. We demonstrate remarkable capabilities in learning across diverse
aging conditions, exclusively achieving 10% prediction error using the first
100 cycles, and in facilitating low-resource learning, almost halving the error
of single-cell modeling in many cases. More broadly, by breaking the learning
boundaries among different aging conditions, our approach could significantly
accelerate the development and optimization of lithium-ion batteries. | Han Zhang, Yuqi Li, Shun Zheng, Ziheng Lu, Xiaofan Gui, Wei Xu, Jiang Bian | 2023-10-08T07:25:27Z | http://arxiv.org/abs/2310.05052v3 | # Accurate battery lifetime prediction across diverse aging conditions with deep learning
###### Abstract
Accurately predicting the lifetime of battery cells in early cycles holds tremendous value for battery research and development as well as numerous downstream applications [1, 2, 3, 4]. This task is rather challenging because diverse conditions, such as electrode materials, operating conditions, and working environments, collectively determine complex capacity-degradation behaviors. However, current prediction methods are developed and validated under limited aging conditions [1, 2, 5], resulting in questionable adaptability to varied aging conditions and an inability to fully benefit from historical data collected under different conditions. Here we introduce a universal deep learning approach that is capable of accommodating various aging conditions and facilitating effective learning under low-resource conditions by leveraging data from rich conditions. Our key finding is that incorporating inter-cell feature differences, rather than solely considering single-cell characteristics, significantly increases the accuracy of battery lifetime prediction and its cross-condition robustness. Accordingly, we develop a holistic learning framework accommodating both single-cell and inter-cell modeling. A comprehensive benchmark is built for evaluation, encompassing 401 battery cells utilizing 5 prevalent electrode materials across 168 cycling conditions. We demonstrate remarkable capabilities in learning across diverse aging conditions, exclusively achieving 10% prediction error using the first 100 cycles, and in facilitating low-resource learning, almost halving the error of single-cell modeling in many cases. More broadly, by breaking the learning boundaries among different aging conditions, our approach could significantly accelerate the development and optimization of lithium-ion batteries.
Owing to their high energy densities and low production costs, lithium-ion batteries have been widely adopted in modern industry, propelling the surge of renewable energy solutions and electric vehicles[6, 7, 8]. Nevertheless, the capacity of lithium-ion batteries inevitably fades with cyclic operations due to their intrinsic electrochemical mechanisms. Unexpected rapid degradation not only leads to poor user experiences, such as range anxiety for electric vehicles, but can also affect the operation of essential facilities, such as the stability of power grids. To proactively mitigate these side effects, accurately predicting battery lifetime in early cycles has been identified as a critical task[4, 9]. This task is rather challenging because numerous factors, including electrode materials, charging and discharging protocols, and working environments, collectively influence the complex battery aging process. Recent data-driven approaches that leverage machine learning have made remarkable progress in this direction [1, 2, 3, 5], identifying critical electrical features that highly correlate with cycle life.
However, existing methods for battery lifetime prediction have been developed and validated under limited aging conditions, such as testing only lithium-iron-phosphate (\(LiFePO_{4}\)) batteries and using single charging or discharging protocols [1, 2, 5]. Data characteristics under these restricted conditions affect feature extraction and model design, potentially limiting the success and generalization of their conclusions. It remains questionable whether these methods perform well under varied aging conditions. Moreover, focusing on limited aging conditions restricts the research development of leveraging historical data collected under different conditions. This limitation separates battery datasets emphasizing different aging factors as isolated islands, hindering the development of general modeling approaches.
Here we introduce a deep learning framework, BatLiNet, tailored to predict battery lifetime across a variety of aging conditions--including electrode materials, cycling protocols, and temperature fluctuations. At its core, the framework innovates with "inter-cell learning", which contrasts pairs of batteries to discern lifetime differences, a significant leap from traditional models that focus solely on individual "intracell" data. This dual approach not only captures individual degradation patterns but also contextualizes them within a broader, comparative aging landscape.
Our findings demonstrate that this inter-cell perspective crucially enhances the model's predictive precision and robustness, especially under conditions where data is sparse, such as with novel electrode materials. By integrating the comparative inter-cell strategy with the conventional intra-cell analysis into a singular, unified framework, we bridge the gap between isolated and relative aging scenarios.
The neural networks embedded within BatLiNet adeptly navigate the nonlinear intricacies inherent to battery aging,
ensuring the framework's adeptness in learning from limited resources. This adaptability, coupled with our method's attention to diverse aging conditions, positions BatLiNet as a comprehensive solution for accurate, resilient battery lifetime predictions, essential for advancing reliable energy storage systems.
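The following is a minimal sketch of the dual-branch idea described above: an intra-cell encoder over a target cell's features and an inter-cell encoder over target-minus-reference feature differences, co-trained through a shared prediction head. The encoder widths, depths, and the synthetic tensors are illustrative assumptions, not BatLiNet's published architecture.

```python
import torch
import torch.nn as nn

class DualBranchLifetimeModel(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
        self.intra_encoder = encoder()    # sees the target cell alone
        self.inter_encoder = encoder()    # sees target-minus-reference differences
        self.head = nn.Linear(hidden, 1)  # shared prediction layer

    def forward(self, target_feats, reference_feats, reference_life):
        life_intra = self.head(self.intra_encoder(target_feats))
        # the inter-cell branch predicts the lifetime *difference* to the reference
        delta = self.head(self.inter_encoder(target_feats - reference_feats))
        life_inter = reference_life + delta
        return life_intra, life_inter

model = DualBranchLifetimeModel(n_features=128)
x_t, x_r = torch.randn(8, 128), torch.randn(8, 128)   # synthetic target/reference features
ref_life = torch.randn(8, 1)                          # known lifetimes of reference cells
y = torch.randn(8, 1)                                 # dummy labels, for shape-checking only
pred_intra, pred_inter = model(x_t, x_r, ref_life)
loss = nn.functional.mse_loss(pred_intra, y) + nn.functional.mse_loss(pred_inter, y)
```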
To validate the effectiveness of BatLiNet, we construct a comprehensive benchmark by collecting all public datasets [10, 11, 12, 5, 13, 10, 11] that emphasize different aging conditions and contain the necessary information to support learning and evaluation. To the best of our knowledge, this benchmark is the largest and most diverse in terms of aging conditions for battery lifetime prediction. Our results demonstrate the remarkable effectiveness of BatLiNet in robustly producing accurate predictions across diverse aging conditions and in boosting prediction performance for low-resource conditions. Regarding learning across diverse aging conditions, our approach exclusively achieves a 10% prediction error using the first 100 cycles. Moreover, as for learning in low-resource scenarios, we use 2 to 8 cells to achieve 20% prediction error in predicting the lifetime of new batteries, halving the error of direct learning on rare data in many cases.
Figure 1: An overview of the BatLiNet framework, where we adopt a lithium-iron-phosphate battery cell as the target cell and leverage another lithium-cobalt-oxide battery cell as the reference. **a**: The feature construction for intra-cell and inter-cell learning. **b**: The correlations between constructed features and prediction labels for both intra-cell (the upper part) and inter-cell (the lower part) learning. **c**: The overall pipeline of the BatLiNet framework.
Figure 2: Visualization of datasets. **a**: The coverage of the dataset employed in this work significantly surpasses that of previous undertakings, namely the NE[1], CLO[3], SNL[10], HUST[5], and CALCE[11, 12, 13]. **b**: A visualization of the capacity degradation with respect to the cycles for all the cells. The curves are normalized by their nominal capacities. The first 1500 and 100 cycles are shown. The degradation patterns are highly non-linear and are difficult to distinguish in the first 100 cycles.
## A Comprehensive Benchmark Covering Diverse Aging Conditions
To comprehensively evaluate and compare BatLiNet against existing modeling approaches, we curated, to the best of our knowledge, all publicly available battery data for battery lifetime prediction into five benchmark datasets.
MATR-1, MATR-2, and HUST consist of commercial 18650 lithium iron phosphate/graphite (LFP) batteries of the same model. MATR-1 and MATR-2 use the same training set and evaluate the prediction performance on trained or unseen charging protocols, respectively. The batteries in HUST employ an identical charging protocol but different discharge rates to examine the model's generalization capability across diverse discharge protocols.
Additionally, we collected battery data used in prior studies including CLO [3], CALCE [11, 12, 13], HNEI [14], UL-PUR [15], RWTH [16] and SNL [10]. By combining these batteries with MATR-1, MATR-2 and HUST, we obtained a total collection of 401 batteries and developed two datasets MIX-100 and MIX-20. MIX-100 examines the typical early prediction case where models must forecast the 80% end-of-life point using just the first 100 cycles of data. MIX-20 poses a more difficult challenge - models must predict the number of cycles before capacity degrades to 90% of nominal, using only the first 20 cycles. Notably, any batteries reaching end-of-life prematurely during the initial cycles were excluded from the experiments.
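As a concrete illustration of the label construction just described, the sketch below computes the cycle-life target as the first cycle at which capacity falls below a fraction of nominal (0.8 for MIX-100, 0.9 for MIX-20). The degradation curve here is synthetic, generated only to exercise the function.

```python
import numpy as np

def cycle_life(capacity: np.ndarray, nominal: float, threshold: float = 0.8):
    """Return the 1-indexed cycle where capacity first falls below
    threshold * nominal, or None if the cell never reaches end-of-life."""
    below = np.flatnonzero(capacity < threshold * nominal)
    return int(below[0]) + 1 if below.size else None

# synthetic degradation curve: linear fade plus small measurement noise
cycles = np.arange(2000)
cap = 1.1 * (1.0 - 1.5e-4 * cycles) + 0.001 * np.random.default_rng(0).standard_normal(2000)
print(cycle_life(cap, nominal=1.1, threshold=0.8))  # roughly cycle 1334 for this toy curve
```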
Figure 2(a) shows that the collected dataset exhibits a more comprehensive coverage that goes beyond the confines of previous works. Figure 2(b) further visualizes the capacity degradation of the collected batteries over the first 1500 and the first 100 cycles. Here the long-term degradation is highly non-linear and demonstrates considerable variation among batteries. In contrast, the degradation within the initial 100 cycles remains indiscernible for most batteries. Such diverse and intricate characteristics require the model to capture and distinguish the degradation patterns caused by various factors for generalization.
### Accurate Battery Lifetime Prediction
#### Empowered by BatLiNet
In Table 1, we present the performance comparison of BatLiNet against various baselines on the five benchmark datasets. The "variance", "discharge", and "full" models [1] employ linear regression on statistical features derived from discharge capacity-voltage curves. Ridge Regression instead directly fits a linear model to the raw discharge curves [2]. Partial Least-Squares Regression (PLSR) [17] and Principal Component Regression (PCR) [18] first group cells based on their raw discharge curves and then fit linear models on the result groups. The Support Vector Machine (SVM) [19] and Random Forest [20] are non-linear statistical models with a stronger fitting ability. For deep models, we compare BatLiNet against Multi-layer Perceptron (MLP)[2], the Long-Short Term Memory network (LSTM)[21], and Convolutional Networks (CNN)[22].
The "Discharge" model demonstrates strong performance on MATR-1, MATR-2, and HUST, yet struggles with diverse chemistries and intricate aging patterns in MIX-100 and MIX-20. In comparison, Ridge Regression shows lower prediction error when applied across battery types, indicating that such LFP-focused features may not effectively generalize to diverse aging conditions. By explicitly differentiating cells beforehand, PLSR and PCR incorporate nonlinear complexity into linear frameworks and further improve over Ridge Regression. Both SVM and RF excel on MIX-100 and MIX-20 compared to other linear baselines, confirming that the complex degradation patterns call for models with stronger fitting capability.
The deep learning models exhibit high variability in performance depending on random initialization. As shown in Figure 3, the error distribution of the deep models fluctuates widely across random seeds. The MLP architecture demonstrates relatively modest prediction errors with the lowest variance on all datasets. In contrast, CNN displays the largest variance in performance, yet achieves the lowest errors on three datasets among baselines given optimal seeds [2]. LSTM strikes a favorable balance between accuracy and variance. However, all deep models are susceptible to overfitting and exhibit minimal predictive advantage compared to conventional statistical approaches regarding both regression accuracy and error variance.
While no single technique consistently optimizes performance across datasets, BatLiNet achieves \(\leq 11\%\) mean absolute percentage error (MAPE) given the first 100 cycles and \(\leq 18\%\) for the first 20 cycles. This reduces the root mean square error (RMSE) versus the top baseline by 31.4%, 5.8%, 18.0%, 30.2%, and 25.11% on the five datasets, respectively. By jointly training the intra-cell and inter-cell branches, BatLiNet combines strong fitting capabilities beyond conventional statistical models with robustness to random initialization lacking in existing deep models. Through this blend of strengths, BatLiNet delivers accurate and reliable lifetime predictions across diverse battery aging conditions.
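For reference, the RMSE and MAPE figures quoted in Table 1 follow the standard definitions; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true, float) - np.asarray(y_pred, float)) ** 2)))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# toy lifetimes in cycles, for illustration only
print(rmse([1000, 800], [900, 850]), mape([1000, 800], [900, 850]))  # ~79.1 cycles, ~8.1 %
```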
### Facilitating Effective Learning in Low-resource Scenarios
In practice, battery development is often constrained by limited resources, resulting in a restricted number of available test batteries. To develop an accurate life prediction model, cycling a considerable number of batteries to end-of-life is required for label collection. However, this process is both time-consuming and unsustainable. An alternative methodology is to leverage the abundant historical data of batteries with varying aging states to augment model training under such low-resource scenarios.
To simulate resource-constrained applications, the cells in MIX-100 were sorted by cathode material, and the 275 LFP/graphite cells were leveraged to augment the prediction performance on the 37 LCO cells, 22 NCA cells, and 69 NMC cells. Among the test batteries, 21 LCO cells, 14 NCA cells, and 53 NMC cells were randomly sampled for model evaluation, and we employ 1, 2, 4, 8, and 16 cells from the remainder to simulate varying cycling test budgets. Three learning paradigms were investigated to comprehensively analyze the influence of historical data: 1) direct training of a CNN on the target cells, 2) transfer learning where a CNN was pre-trained on LFP cells then fine-tuned on the target cells, and 3) training BatLiNet on the combined LFP and target cells.
Figure 4 presents the performance of the three learning paradigms under resource-constrained conditions. Pre-training the model on historical LFP/graphite cell data substantially reduced prediction error by 10% for NCA cells. However, it resulted in inferior predictions for LCO and NMC cells, suggesting that degradation patterns vary between battery chemistries. In contrast, BatLiNet leveraged the LFP cells to improve over direct learning for all three cathode materials, indicating that the inter-cell modeling between LFP-target pairs generalizes across cathode materials. Notably, BatLiNet further reduced prediction error by 10% over transfer learning, achieving 20.26% MAPE using only two to eight target cells and historical LFP data. This high data efficiency greatly mitigates the data hunger in battery development and
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{MATR-1} & \multicolumn{2}{c}{MATR-2} & \multicolumn{2}{c}{HUST} & \multicolumn{2}{c}{MIX-100} & \multicolumn{2}{c}{MIX-20} \\ & RMSE & MAPE(\%) & RMSE & MAPE(\%) & RMSE & MAPE(\%) & RMSE & MAPE(\%) & RMSE & MAPE(\%) \\ \hline Training Mean & 399 & 28 & 511 & 36 & 420 & 18 & 573 & 59 & 593 & 102 \\ \hline “Variance” Model[1] & 138 & 15 & 196 & 12 & 398 & 17 & 521 & 39 & 601 & 95 \\ “Discharge” Model[1] & 86 & 8 & 173 & 11 & 322 & 14 & 1743 & 47 & \textgreater{}2000 & \textgreater{}100 \\ “Full” Model[1] & 100 & 11 & 214 & 12 & 335 & 14 & 331 & 22 & 441 & 53 \\ Ridge Regression[2] & 125 & 13 & 188 & 11 & 1047 & 36 & 395 & 30 & 806 & 150 \\ PCR[2] & 100 & 11 & 176 & 11 & 435 & 19 & 384 & 28 & 701 & 78 \\ PLSR[2] & 97 & 10 & 193 & 11 & 431 & 18 & 371 & 26 & 543 & 77 \\ SVM Regression & 140 & 15 & 300 & 18 & 344 & 16 & 257 & 18 & 438 & 46 \\ Random Forest[2] & 140 & 15 & 202 & 11 & 348 & 16 & 211 & 14 & 288 & 31 \\ \hline MLP[2] & 162\(\pm\)7 & 12\(\pm\)0 & 207\(\pm\)4 & 11\(\pm\)0 & 444\(\pm\)5 & 18\(\pm\)1 & 455\(\pm\)37 & 27\(\pm\)1 & 532\(\pm\)25 & 61\(\pm\)6 \\ LSTM & 123\(\pm\)11 & 12\(\pm\)2 & 226\(\pm\)36 & 14\(\pm\)2 & 442\(\pm\)32 & 20\(\pm\)1 & 266\(\pm\)11 & 15\(\pm\)1 & 417\(\pm\)62 & 37\(\pm\)7 \\ CNN[2] & 115\(\pm\)96 & 9\(\pm\)6 & 237\(\pm\)107 & 17\(\pm\)8 & 445\(\pm\)35 & 21\(\pm\)1 & 261\(\pm\)38 & 15\(\pm\)1 & 785\(\pm\)132 & 41\(\pm\)4 \\ \hline BatLiNet (ours) & **59\(\pm\)2** & **6\(\pm\)0** & **163\(\pm\)12** & **11\(\pm\)1** & **264\(\pm\)9** & **10\(\pm\)1** & **158\(\pm\)7** & **10\(\pm\)0** & **201\(\pm\)18** & **18\(\pm\)1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of prediction errors with baselines. We employ bold font to emphasize the best-performing method and utilize underlines to denote the second-best methods. For neural-network-based methods, we report the mean and standard deviation of eight random seeds.
Figure 3: Distribution of prediction error for deep learning models across five benchmark datasets. Box plots illustrate the distribution of root mean squared error (RMSE) over eight random initializations of each deep architecture.
can substantially reduce downstream application modeling costs such as developing novel electrode materials or electrolytes. In contrast, Attia et al. [3] developed a linear model trained on a dataset of 41 LFP cells when optimizing the rapid charging protocol specifically for batteries possessing the same lithium iron phosphate cathode chemistry.
## Working Mechanisms of BatLiNet
Here we conduct further ablation experiments to study the mechanism of BatLiNet.
Figure 5a compares the predictive performance of different modeling branches in BatLiNet. Across all datasets, intra-cell models exhibit the highest error and variance. The inter-cell model shows good robustness with lower variance across all datasets, yet its prediction accuracy is slightly lower than that of the intra-cell modeling on some datasets, as its predictions are implicitly derived through the reference cells. The ensemble of the two separately trained branches obtains better performance on HUST and MIX-100, yet on other datasets it inherits the flaws of either branch, resulting in higher variance or suboptimal accuracy. BatLiNet achieved the most robust and precise lifetime estimation across all datasets, demonstrating that through co-training with a shared prediction layer, the model better integrates the strengths of both branches and yields the strongest predictive performance.
Figure 5b displays the performance of BatLiNet when using a subset of the six capacity-indexed features. It's evident that all six features play a crucial role in achieving improved predictive performance. The current-based features can lead to high variance and large errors, yet they are indispensable for the extreme early prediction scenarios in MIX-20. The voltage- and discharge-based features demonstrate strong performance across all datasets. Merging voltage and current features from both charge and discharge stages results in enhanced performance, with the best outcomes achieved by employing all six features.
## Conclusion
By introducing inter-cell learning and unifying it with intra-cell learning, our framework BatLiNet has significantly boosted the performance of battery lifetime prediction across diverse aging conditions. It is noteworthy that the proposed inter-cell learning, enabling the modeling of the relations and differences among diverse aging conditions, not only provides exclusive yet complementary value for traditional intra-cell learning but also facilitates effective knowledge transfer from rich conditions with abundant historical data to low-resource scenarios of emerging needs. Looking into the future, our method can be leveraged across various development directions that lead to varied aging conditions, such as different fast charging protocols [23] and new electrode materials [24]. Specifically, as an accurate and robust lifetime predictor across different aging conditions, our method offers a greatly improved capability for accelerating the development and optimization of lithium-ion batteries [3]. Moreover, the idea of modeling inter-cell differences can be extended to other crucial prediction tasks, such as predicting the state of charge and health [9, 25], broadly benefiting battery management and applications.
|
2305.01082 | Contextual Multilingual Spellchecker for User Queries | Spellchecking is one of the most fundamental and widely used search features.
Correcting incorrectly spelled user queries not only enhances the user
experience but is expected by the user. However, most widely available
spellchecking solutions are either lower accuracy than state-of-the-art
solutions or too slow to be used for search use cases where latency is a key
requirement. Furthermore, most innovative recent architectures focus on English
and are not trained in a multilingual fashion and are trained for spell
correction in longer text, which is a different paradigm from spell correction
for user queries, where context is sparse (most queries are 1-2 words long).
Finally, since most enterprises have unique vocabularies such as product names,
off-the-shelf spelling solutions fall short of users' needs. In this work, we
build a multilingual spellchecker that is extremely fast and scalable and that
adapts its vocabulary and hence speller output based on a specific product's
needs. Furthermore, our speller out-performs general purpose spellers by a wide
margin on in-domain datasets. Our multilingual speller is used in search in
Adobe products, powering autocomplete in various applications. | Sanat Sharma, Josep Valls-Vargas, Tracy Holloway King, Francois Guerin, Chirag Arora | 2023-05-01T20:29:59Z | http://arxiv.org/abs/2305.01082v2 | # Contextual Multilingual Spellchecker for User Queries
###### Abstract.
Spellchecking is one of the most fundamental and widely used search features. Correcting incorrectly spelled user queries not only enhances the user experience but is expected by the user. However, most widely available spellchecking solutions are either lower accuracy than state-of-the-art solutions or too slow to be used for search use cases where latency is a key requirement. Furthermore, most innovative recent architectures focus on English and are not trained in a multilingual fashion and are trained for spell correction in longer text, which is a different paradigm from spell correction for user queries, where context is sparse (most queries are 1-2 words long). Finally, since most enterprises have unique vocabularies such as product names, off-the-shelf spelling solutions fall short of users' needs.
In this work, we build a multilingual spellchecker that is extremely fast and scalable and that adapts its vocabulary and hence speller output based on a specific product's needs. Furthermore our speller out-performs general purpose spellers by a wide margin on in-domain datasets. Our multilingual speller is used in search in Adobe products, powering autocomplete in various applications.
spellcheck, spell correction, neural networks, query processing
## 1. Introduction
Spellcheck is a widely studied problem in search and NLP research. Spellcheckers generally comprise two parts: creating a list of candidate corrections and ranking those candidates. Most widely used spellcheckers are built for English and utilize behavioral (Sanat Sharma et al., 2018) and/or contextual signals (Beng et al., 2019) for ranking the suggested corrections. Recent works have also utilized other extrinsic data such as search results (Beng et al., 2019) or public domain multi-word datasets (Beng et al., 2019) as ranking signals. Although most spellers are built for English, some works have developed custom spellers for non-English languages such as Bengali (Beng et al., 2019) or Dutch (Beng et al., 2019). These works are hard to scale across multiple languages since they are language specific. Most of the work for spell correction has been around correction in sentences or paragraphs where context is plentiful. In such cases, neural models such as transformers and LSTMs perform well since they capture textual context (Krizhevsky et al., 2014). However, these systems are usually slower than their frequentist counterparts and do not show much improvement in search query cases where textual context is minimal.
Our work takes a best-of-both-worlds approach: We utilize contextual signals such as search results, behavioral data, and phonetic signals to suggest candidates, while incorporating a small neural model for ranking. In addition, we use a suggestion model that is language agnostic and can scale to multiple languages.
We divide the speller into four components: a behavioral data analysis pipeline to finetune the downstream components; a product specific rule engine to correct common errors and provide editorial overrides; a suggester that takes in user queries and suggests potential replacements for incorrectly spelled tokens; and a neural ranker that calculates the probability of the suggested tokens. We evaluate our speller on both general purpose and product specific domains and showcase significant improvement over current methods.
Our approach is currently used in production by the autocomplete feature in Adobe search and is being integrated in Adobe Express and Adobe Stock for online spell correction.
Our main contributions and business impact are:
1. A novel approach for creating a fast, multilingual spellchecker for search queries
2. A novel, low latency architecture for deploying and scaling the spellchecker
3. Significant improvement over widely available state-of-the-art spellcheckers for short user queries
## 2. Training datasets
Finding public spellcheck datasets is surprisingly hard, with very few benchmarks available for validation. Furthermore, since we require data for training our models, we decided to employ a bootstrapping approach for dataset generation and leveraged crowd workers for manual curation. This section describes how we created the training data, as well as some datasets used for initial internal evaluation. The evaluation datasets are described in section 5.
### Artificially Generated Query Dataset
We extracted user queries from search over Adobe Stock images for English, French and German locales for analysis. Since we use full queries, the model has some context for multi-word queries.
**Data Preprocessing:** We removed queries with spelling errors from the dataset by applying the updated Hunspell1 dictionaries to check for spelling errors and then had the remaining queries reviewed by crowd workers. This created our ground truth dataset.
Footnote 1: [http://hunspell.github.io/](http://hunspell.github.io/)
**Artificial Injection of Errors**: Most spelling errors are due to one of the following reasons: missing a letter, adding a letter, typing an incorrect letter. To create our artificial dataset, for each query in the list of correctly spelled queries, we injected one or more spelling errors using one of the following techniques in a probability-weighted fashion:
1. Change the order of letters (e.g. "change" to "chagne"; "check" to "chekc"). This is the most common spelling error.
2. Remove or add a vowel (e.g. "malleable" to "mallable" or "malleeable").
3. Add an additional character (e.g. "fresh" to "freshh" or "ffresh").
4. Replace a character with another character (e.g. "fresh" to "frash" or "fresg").
5. Replace accented characters with their unaccented counterparts, or with another character in the same class (e.g. "français" to "francais"; "wörter" to "worter").
6. For words with two identical letters in a row, keep only one letter (e.g. change "happiness" to "hapiness").
The artificial errors were patterned on real-life errors and were weighted at a ratio of 7:5:4:2:7:2, respectively (a minimal sketch of the injection procedure is given below). For addition of vowels, only vowels that usually follow one another were chosen, e.g. for 'e', 'i' was much more likely to be added than 'u'. Each query in the example set had one or more errors injected into them.
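The sketch below is a simplified re-implementation of the weighted injection scheme: the six perturbation functions mirror the six rules above, with weights following the stated 7:5:4:2:7:2 ratio. The character tables and sampling details are our own simplifications, not the production implementation.

```python
import random

VOWELS = "aeiou"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def swap_letters(w):      # rule 1: "check" -> "chekc"
    if len(w) < 2: return w
    i = random.randrange(len(w) - 1)
    return w[:i] + w[i+1] + w[i] + w[i+2:]

def drop_or_add_vowel(w): # rule 2
    idx = [i for i, c in enumerate(w) if c in VOWELS]
    if not idx: return w
    i = random.choice(idx)
    return w[:i] + w[i+1:] if random.random() < 0.5 else w[:i] + random.choice(VOWELS) + w[i:]

def add_char(w):          # rule 3
    i = random.randrange(len(w) + 1)
    return w[:i] + random.choice(ALPHABET) + w[i:]

def replace_char(w):      # rule 4
    i = random.randrange(len(w))
    return w[:i] + random.choice(ALPHABET) + w[i+1:]

def strip_accent(w):      # rule 5 (simplified: unaccented substitution only)
    return w.translate(str.maketrans("àâäéèêëîïôöùûüç", "aaaeeeeiioouuuc"))

def dedupe_double(w):     # rule 6: "happiness" -> "hapiness"
    for i in range(len(w) - 1):
        if w[i] == w[i+1]:
            return w[:i] + w[i+1:]
    return w

PERTURBATIONS = [swap_letters, drop_or_add_vowel, add_char, replace_char, strip_accent, dedupe_double]
WEIGHTS = [7, 5, 4, 2, 7, 2]

def misspell(word: str) -> str:
    return random.choices(PERTURBATIONS, weights=WEIGHTS, k=1)[0](word)

random.seed(0)
print([misspell("malleable") for _ in range(3)])
```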
Our final artificial dataset size is shown in Table 1.
Table 2 shows example input queries and their artificially misspelled training counterparts. Only some words in the query have added errors so that the model also learns to recognize correctly spelled words.
### Birkbeck Corpus
The Birkbeck corpus2 contains 36,133 misspellings of 6,136 words. It is an amalgamation of errors taken from the native-speaker section (British and American writers) of the Birkbeck spelling error corpus, a collection of spelling errors gathered from various sources, available with detailed documentation from the Oxford Text Archive.3 It includes the results of spelling tests and errors from free writing, primarily from schoolchildren, university students and adult literacy students. We utilize 18,295 misspellings from Birkbeck as part of our English training dataset.
Footnote 2: [http://www.dcs.bbk.ac.uk/~ROGER/corpora.html](http://www.dcs.bbk.ac.uk/~ROGER/corpora.html)
Footnote 3: [http://ota.ahds.ac.uk/](http://ota.ahds.ac.uk/)
### Commonly Misspelled Word Corpora
The Aspell (Artificially, 2017) corpus contains \(\sim\)1500 common misspellings. Wikipedia4 lists commonly misspelled words. We used these to mine for queries in our domain that feature these misspelled words and for internal evaluation for model selection.
Footnote 4: [https://en.wikipedia.org/wiki/Commonly_misspelled_English_words](https://en.wikipedia.org/wiki/Commonly_misspelled_English_words)
## 3. Model
Following common practice, we divide the spellcheck model into two modules: a suggester module and a ranker module. The suggester module takes in the user query and suggests possible correction tokens for any incorrectly spelled tokens. The ranker module ranks the suggestions and outputs the most probable candidate. This is shown in Figure 1.
### Symmetric Delete Suggester
We utilize the Symmetric Delete5 (Artificially, 2017) algorithm for our suggester module. Symmetric Delete generates a permutation index for words in the dictionary at index time. Instead of calculating transposes + replaces + inserts + deletes at runtime, Symmetric Delete only calculates deletes of the index dictionary. The symmetric delete suggester has two key advantages:
Footnote 5: [https://github.com/wolfgarbe/SymSpell](https://github.com/wolfgarbe/SymSpell)
* **Latency**: The module is extremely fast for up to 2 edit distances, with an average of \(\sim\)1ms latency. This is critical for
query spell correction. The speed comes from inexpensive delete-only edit candidate generation and pre-calculation.
* **Language Agnostic**: The module is language agnostic, not requiring language specific characteristics to generate suggestions.
#### Index Time Operation
At index time, we utilize a dictionary of correct words and generate the symmetric delete index from those. The dictionary of correct words is generated from known language dictionaries, including FastText (Beng et al., 2015) word dictionaries, Adobe-specific product terms (e.g. product names, file extensions) and behavioral data (e.g. popular queries). The addition of custom vocabulary is important because most enterprises have custom language that is not supported by the open source dictionaries.
#### Runtime Operation
At runtime, given a user query, we first check if the query is correctly spelled. If it is incorrect, we find all candidates within 1 edit distance. If <3 candidates are generated, we then utilize 2 edit distance suggestions. This balances speed and precision, as increasing the edit distance leads to more suggestions but higher latency. In our analysis of Adobe user queries, we found that 88% of spelling errors are 1 edit distance away. So, 2 edit distance suggestions are used sparingly.
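The following is a toy, pure-Python illustration of the index-time/runtime split described above, restricted to a maximum edit distance of 1. The production system uses the SymSpell library itself; candidate verification against true edit distance and the ranking stage are omitted here.

```python
from collections import defaultdict

def deletes1(word):
    # all single-character deletions of a word (index-time and runtime both use only these)
    return {word[:i] + word[i+1:] for i in range(len(word))}

class SymDeleteIndex:
    def __init__(self, dictionary):
        self.words = set(dictionary)
        self.index = defaultdict(set)            # delete variant -> original dictionary words
        for w in dictionary:                     # index time: precompute deletes only
            for d in deletes1(w):
                self.index[d].add(w)

    def suggest(self, query):
        if query in self.words:                  # already correctly spelled
            return {query}
        candidates = set(self.index.get(query, ()))   # query is a delete of a dictionary word
        for d in deletes1(query):
            if d in self.words:                  # a delete of the query is itself a word
                candidates.add(d)
            # shared delete variants also cover replacements and adjacent transpositions
            candidates |= self.index.get(d, set())
        return candidates

idx = SymDeleteIndex(["fresh", "flesh", "french", "check"])
print(idx.suggest("frsh"))    # {'fresh'}  (missing letter)
print(idx.suggest("chekc"))   # {'check'}  (transposition, via shared delete variants)
```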
### Neural Ranker
We utilize a neural network to rank the suggestions from the suggester module. Due to our low latency requirements, we use a multilayer perceptron network (MLP) rather than recurrent neural nets or transformers. Our MLP consists of 5 fully connected layers, with dropout and batch normalization. Since MLPs do not do well at token level understanding, we utilize the features for each suggestion rather than the tokens themselves in order to improve performance on unseen words (i.e. unique spelling errors).
The features we utilize for each suggestion are below. All features were scaled and normalized (0-1) before being fed to the neural network.
* **Word Count**: In most cases, we want to recommend more common words. We store the number of occurrences of each word in the query set. The word counts vary based on application, enabling per-application suggestions.
* **Asset Frequency**: In most cases, we want to correct to a word which retrieves more search results. For each word, we store the number of assets associated with it. This feature is application specific.
* **Download Count**: Query success is indicated by downloads in Adobe Stock. We store the number of downloads for the first 100 (first page) results for each word. This feature is only used on Adobe Stock.
* **Levenshtein Distance**: Standard string edit distance measurement (K
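Below is a minimal sketch of the ranker as described: five fully connected layers with batch normalization and dropout, scoring each candidate from its normalized scalar features. Layer widths, the dropout rate, and the feature count are illustrative assumptions, not the production configuration.

```python
import torch
import torch.nn as nn

class SuggestionRanker(nn.Module):
    def __init__(self, n_features: int = 6, width: int = 64, p_drop: float = 0.2):
        super().__init__()
        layers, d_in = [], n_features
        for d_out in (width, width, width, width):
            layers += [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU(), nn.Dropout(p_drop)]
            d_in = d_out
        layers.append(nn.Linear(d_in, 1))   # fifth layer: one score per suggestion
        self.net = nn.Sequential(*layers)

    def forward(self, features):            # features: (num_candidates, n_features), scaled to [0, 1]
        return self.net(features).squeeze(-1)

ranker = SuggestionRanker()
ranker.eval()                               # eval mode so BatchNorm uses running statistics
scores = ranker(torch.rand(5, 6))           # 5 candidate corrections for one query
best = int(scores.argmax())                 # index of the top-ranked suggestion
```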
### Adobe Express User Queries
We performed a quantitative analysis on user queries from Adobe Express, a web-based product to create assets from templates. We generated a misspelling dataset from Adobe Express queries by mining queries using the commonly misspelled words from the Wikipedia and Aspell datasets. Additionally we added synthetic perturbations on the mined queries based on common misspellings for each of the 3 languages under consideration (English, French, German). Finally, high frequency spelling errors seen in the application were added via human annotation. There are 6355 queries for English, 1187 for German, and 1128 for French.
This dataset is very different from the dataset that our model was trained on (section 2) but uses dictionaries from the same distribution. This gives us a better representation of real world performance across domains. We tested the performance against NeuSpell (a state-of-the-art neural spelling model) (Bahavoral et al., 2017) and Aspell (a widely used speller) (Bahavoral et al., 2017). As shown in Table 3, our approach outperforms off-the-shelf state-of-the-art approaches in our specific domain, while taking a fraction of the time (under 1 ms on average as opposed to 40+ ms).
### Adobe Creative Cloud Home User Queries
We performed a qualitative analysis on user queries from Adobe Creative Cloud Home, one of the main gateways for users to search about Adobe products. We utilized English queries from a single day. The evaluation set comprised 7123 unique queries and their frequencies.
We crowd-sourced and manually checked the correctness of the response from the speller. Results are depicted in Table 4. Nearly 50% of all unique queries entered by users contained a spelling error, highlighting the need for a task-specific speller. Most of the common spelling errors revolved around product names with the words "creative" or "acrobat" being spelled incorrectly in many different ways. For this application, having higher boosting for Adobe product name candidates led to better results due to the nature of the queries, highlighting the need for application-specific contextual signals.
## 6. Conclusions and Next Steps
In this paper we described a novel approach for creating a fast, multilingual spellchecker for user queries. This includes a novel, low latency architecture for deploying and scaling the spellchecker. The resulting speller shows significant improvement over widely available state-of-the-art spellcheckers for short user queries.
Next steps focus on two areas. The first is using the English, French, and German spellers to replace the current production query-time spellers given their success in offline spell correction for autocomplete. The second is extending the speller to ~10 and eventually ~35 languages in order to cover the primary languages used in our search applications. This will allow us to use the same high-quality, custom-tuned, low-latency speller for all query spell correction, both offline for autocomplete suggestions and online for user queries.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Model & Recall & Precision & Accuracy \\ \hline Aspell & 29.5\% & 98.9\% & 45.5\% \\ \hline Neuspell & 57.6\% & 84.2\% & 75.7\% \\ \hline Ours & 96.4\% & 87.3\% & 82.2\% \\ \hline \end{tabular}
\end{table}
Table 4. Accuracy metrics on the Creative Cloud Home dataset. Recall is the rate of incorrect queries that have been properly corrected. Precision is the rate of corrected queries where the correction is correct.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Model & \multicolumn{3}{c|}{Accuracy} & Latency \\ \cline{2-5} & English & French & German & (ms) \\ \hline Aspell & 51.6\% & 60.8\% & 29.7\% & 40 \\ \hline Neuspell & 75.5\% & 37.5\% & 36.6\% & 50 \\ \hline Ours & 81.7\% & 85.0\% & 84.8\% & \textless{}1 \\ \hline \end{tabular}
\end{table}
Table 3. Accuracy and latency of different spell correction models on the Adobe Express query dataset
Figure 2. Spellcheck Service Architecture. The MWE module handles task-specific multi-word expressions before the suggester and ranker are called. Behavioral pipelines keep features updated. The postprocessor enables task-specific confidence boosting.
## Adobe Company Portrait
Adobe Inc. enables customers to change the world through digital experiences and creativity. The Adobe search and discovery team supports search and recommendations across customer text, image, video, and other document types as well as over Adobe Stock assets and Adobe help and tutorials.
## Main Author Bio
Sanat Sharma is a senior machine learning engineer at Adobe Inc. He earned his Master's degree from University of Texas, Austin in 2020, with a focus on NLP. Sanat's work focuses on search improvements and contextual recommendations, and his work has been published at conferences such as CVPR.
|
2305.08918 | Black holes that are too cold to respect cosmic censorship | In this essay it is proved that there are black holes that are dangerously
cold. In particular, by analyzing the emission spectra of highly charged black
holes we reveal the fact that near-extremal black holes whose
Bekenstein-Hawking temperatures lie in the regime $T_{\text{BH}}\lesssim
m^6_e/e^3$ may turn into horizonless naked singularities, thus violating the
cosmic censorship principle, if they emit a photon with the characteristic
thermal energy $\omega=O(T_{\text{BH}})$ [here $\{m_e,e\}$ are respectively the
proper mass and the electric charge of the electron, the lightest charged
particle]. We therefore raise here the conjecture that, in the yet unknown
quantum theory of gravity, the temperatures of well behaved black-hole
spacetimes are fundamentally bounded from below by the relation
$T_{\text{BH}}\gtrsim m^6_e/e^3$. | Shahar Hod | 2023-05-15T18:00:12Z | http://arxiv.org/abs/2305.08918v1 | # Black holes that are too cold to respect cosmic censorship
###### Abstract
In this essay it is proved that there are black holes that are dangerously cold. In particular, by analyzing the emission spectra of highly charged black holes we reveal the fact that near-extremal black holes whose Bekenstein-Hawking temperatures lie in the regime \(T_{\rm BH}\lesssim m_{e}^{6}/e^{3}\) may turn into horizonless naked singularities, thus violating the cosmic censorship principle, if they emit a photon with the characteristic thermal energy \(\omega=O(T_{\rm BH})\) [here \(\{m_{e},e\}\) are respectively the proper mass and the electric charge of the electron, the lightest charged particle]. We therefore raise here the conjecture that, in the yet unknown quantum theory of gravity, the temperatures of well behaved black-hole spacetimes are fundamentally bounded from below by the relation \(T_{\rm BH}\gtrsim m_{e}^{6}/e^{3}\).
Email: [email protected]
## Introduction
The mathematically elegant singularity theorems of Hawking and Penrose [1; 2] have questioned the utility of Einstein's theory of general relativity in describing gravitational phenomena in highly curved spacetimes. In particular, the Einstein field equations are known to lose their predictive power in the presence of infinitely curved regions that contain spacetime singularities.
Following this intriguing observation and in order to guarantee the deterministic nature of a self-consistent theory of gravity, Penrose conjectured that a mysterious (and diligent) "cosmic censor" protects far away observers from being exposed to the pathological properties of spacetime singularities [2]. This physically important principle asserts, in particular, that spacetime singularities are always hidden inside of black holes with stable shielding horizons. If true, the cosmic censorship principle would guarantee that we live in a spacetime region in which general relativity is a self-consistent theory of gravity [2].
The most studied curved spacetime in the physics literature, the Kerr-Newman spacetime, describes a black hole of mass \(M\), electric charge \(Q\), and angular momentum \(J=Ma\) that contains a hidden singularity. The characteristic inequality (we use natural Planck units in which \(G=c=\hbar=k_{\rm B}=4\pi\epsilon_{0}=1\)) [3; 4]
\[M^{2}-Q^{2}-a^{2}\geq 0 \tag{1}\]
provides a necessary condition for the existence of an engulfing event horizon that protects far away observers from being exposed to this inner spacetime singularity. Extremal black-hole configurations, which satisfy the critical relation \(M^{2}-Q^{2}-a^{2}=0\), are on the verge of exposing their inner singularities.
In the present essay we shall explicitly prove that well behaved charged black holes that respect the condition (1) may turn into horizonless naked singularities that violate the cosmic censorship principle if they quantum mechanically emit massless photons whose characteristic energies are of the same order of magnitude as the thermal energy of the black hole.
### Hawking evaporation of near-extremal charged black holes
We shall now analyze the physical and mathematical properties of the Hawking emission spectra of near-extremal charged black holes whose Bekenstein-Hawking temperatures are characterized by the strong dimensionless inequality [5; 6]
\[\epsilon\equiv 2\pi MT_{\rm BH}=\frac{M(M^{2}-Q^{2})^{1/2}}{[M+(M^{2}-Q^{2})^{1/ 2}]^{2}}\ll 1. \tag{2}\]
The emission rate of neutral bosonic fields from the black hole is given by the familiar Hawking relation [6]
\[\frac{dN}{dt}=\frac{1}{2\pi}\sum_{l,m}\int_{0}^{\infty}\frac{S_{lm}(\omega)}{e ^{\omega/T_{\rm BH}}-1}d\omega\, \tag{3}\]
where \(\{l,m\}\) are the spheroidal and azimuthal angular harmonic indices of the radiated field mode. The partial back-scattering of the emitted fields by the effective curvature-centrifugal barrier outside the black hole is encoded in the energy-dependent absorption probability factor \(S_{lm}(\omega)\)[6].
Interestingly, and most importantly for our analysis, it has been explicitly proved in [7] that the \(l\)-dependent centrifugal barrier outside the black hole mainly affects (blocks) the propagation of high-\(l\) modes. Thus, the Hawking spectra of spherically symmetric large-mass black holes [8] are dominated by the emission of massless field modes with the smallest known angular momentum: namely, by electromagnetic field quanta with \(l=1\)[9].
The appearance of both a thermal factor \((e^{\omega/T_{\rm BH}}-1)^{-1}\) and an absorption factor \(S_{lm}(\omega)\) in the black-hole emission formula (3) implies that the radiation spectrum has a well defined peak which is characterized by the relation (below we shall determine the exact value of the dimensionless ratio \(\omega_{\rm peak}/T_{\rm BH}\))
\[\omega_{\rm peak}\sim T_{\rm BH}\ll 1. \tag{4}\]
The energy-dependent absorption probability factor of the dominant electromagnetic field mode with \(l=1\) is known in a closed analytical form in the characteristic regime (4) [7; 10]:
\[S_{1m}(\nu)=\frac{256}{9}\epsilon^{8}(4\nu^{8}+5\nu^{6}+\nu^{4})\cdot[1+O(\nu \epsilon)]\, \tag{5}\]
where we have used here the dimensionless frequency-temperature parameter
\[\nu\equiv\frac{\omega}{4\pi T_{\rm BH}}. \tag{6}\]
Substituting the relation (5) into the integral expression (3) for the black-hole radiation rate, one obtains the remarkably compact near-extremal relation [11]
\[\frac{dN_{\gamma}}{dt}=\frac{512\epsilon^{9}}{3\pi M}\int_{0}^{\infty}{\cal N}( \nu)d\nu\, \tag{7}\]
where
\[{\cal N}(\nu)\equiv\frac{(1+\nu^{2})(1+4\nu^{2})\nu^{4}}{e^{4\pi\nu}-1}. \tag{8}\]
From Eq. (8) one deduces that the emission spectra of black holes in the dimensionless near-extremal regime (2) are characterized by a well defined peak at \(\nu_{\rm peak}\simeq 0.399\). Thus, the characteristic energy of an emitted field quantum is given by the functional relation [see Eq. (6)]
\[E=\omega=\nu_{\rm peak}\cdot 4\pi T_{\rm BH}. \tag{9}\]
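As a quick numerical cross-check of the quoted peak location, the following sketch maximizes the dimensionless spectral function \({\cal N}(\nu)\) of Eq. (8):

```python
# Numerical check that N(nu) of Eq. (8) peaks near nu ~ 0.399.
import numpy as np
from scipy.optimize import minimize_scalar

def N(nu):
    return (1 + nu**2) * (1 + 4 * nu**2) * nu**4 / np.expm1(4 * np.pi * nu)

res = minimize_scalar(lambda nu: -N(nu), bounds=(0.01, 2.0), method="bounded")
print(f"nu_peak ~ {res.x:.3f}")   # ~0.399
```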
The emission of neutral field modes from near-extremal black holes is dangerous from the point of view of the cosmic censorship principle since it allows the emitting black hole to reduce its mass without reducing its electric charge. The emission of neutral field quanta therefore decreases the magnitude of the expression \(M^{2}-Q^{2}-a^{2}\) that appears in the necessary condition (1) for the existence of a shielding horizon that protects far away observers from being exposed to the pathological properties of the inner black-hole singularity.
In particular, the emission of a photon with the characteristic energy (9) and an azimuthal angular momentum \(m\) (with \(|m|\in\{0,1\}\)) produces a new spacetime configuration whose physical parameters are characterized by the following relations:
\[M_{\rm new}=M-E\ \ \ \ ;\ \ \ \ Q_{\rm new}=Q\ \ \ \ ;\ \ \ |a|_{\rm new}= \frac{|m|}{M-E}. \tag{10}\]
Intriguingly, taking cognizance of the necessary condition (1) for the existence of a black-hole horizon that covers the central singularity, one deduces from (9) and (10) that a near-extremal black hole in the dimensionless low-temperature regime
\[T_{\rm BH}<T_{\rm BH}^{\rm critical}\equiv\frac{{\cal C}}{\pi M^{3}} \tag{11}\]
with \({\cal C}\equiv\nu_{\rm peak}+\sqrt{\nu_{\rm peak}^{2}+1/4}\), which emits a photon with the characteristic energy (9) and angular momentum \(l=|m|=1\), leaves behind a horizonless naked singularity that violates the black-hole condition (1).
### Huge, cold black holes endanger the cosmic censorship principle
The intriguing conclusion that black holes in the regime (11) are too cold to respect cosmic censorship is based on our assumption that the radiation spectra of near-extremal black holes are dominated by the emission of neutral massless field modes (mainly by photons with \(l=1\)). In particular, since the physical parameters of the positron, the lightest charged particle of the Standard Model, are characterized by the strong inequality \(e\gg m_{e}\), even a single positron emission would push a near-extremal black hole away from the dangerous extremal limit by increasing its temperature (2).
As we shall now prove explicitly, there exists a critical black-hole mass, \(M=M_{\rm min}\), above which the radiation spectra of near-extremal black holes are dominated by the emission of neutral massless field modes with the smallest known angular momentum (that is, by photons with \(l=1\)[7]). Black holes in the near-extremal regime (11) with \(M>M_{\rm min}\) may evaporate into horizonless naked singularities that violate the black-hole condition (1), thus violating the cosmic censorship principle.
In order to determine the value of the critical black-hole mass \(M_{\rm min}\), one may use the relation [see Eqs. (2), (7), and (8)]
\[\frac{dN_{\gamma}}{dt}=\frac{256\xi}{\pi}\cdot M^{8}T_{\rm BH}^{9} \tag{12}\]
with \(\xi\equiv 8\pi^{4}\zeta(5)+75\pi^{2}\zeta(7)+210\zeta(9)\simeq 1764.9\) for the emission rate of neutral massless photons with \(l=1\) from near-extremal black holes in the dimensionless regime (11). In addition, one may use the fact that, in the regime \(1\ll Mm_{e}\ll Qe\ll(Mm_{e})^{2}\), the emission rate of charged field quanta (positrons) by near-extremal black holes is well approximated by the Schwinger pair-production formula [7; 12; 13]
\[\frac{dN_{e^{+}}}{dt}=\frac{e^{3}}{2\pi^{3}m_{e}^{2}}\cdot\exp(-E_{\rm c}/E_{+ })\, \tag{13}\]
where \(E_{+}=Q/r_{+}^{2}\simeq 1/Q\) is the electric field strength of the near-extremal black hole and \(E_{\rm c}=\pi m_{e}^{2}/e\) is the critical electric field for quantum production of electron-positron pairs.
Our assumption that the emission spectrum of the near-extremal black hole with the critical temperature (11) is dominated by neutral massless photons with \(l=1\) corresponds to the relation
\[\frac{dN_{e^{+}}}{dt}<\frac{dN_{\gamma}}{dt}. \tag{14}\]
Taking cognizance of Eqs. (11), (12), and (13), one can express the inequality (14) in the form
\[\left(\frac{\pi m_{e}^{2}}{e}M\right)^{19}\cdot\exp\Big{(}-\frac{\pi m_{e}^{2}}{ e}M\Big{)}<\frac{512\xi\mathcal{C}^{9}}{\pi^{8}e^{2}}\cdot\Big{(}\frac{\pi m_{e}^{2}} {e}\Big{)}^{20}\, \tag{15}\]
which yields the critical relation (note that, in natural Planck units, the physical parameters of the positron are given by \(e\simeq 1/137.036^{1/2}\) and \(m_{e}\simeq 4.19\cdot 10^{-23}\))
\[M>M_{\rm min}\equiv\frac{e}{\pi m_{e}^{2}}\cdot x_{\rm min}\simeq 1.55\times 1 0^{43}\cdot x_{\rm min} \tag{16}\]
with \(x_{\rm min}\simeq 2124.7\)[14].
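The quoted values of \(\xi\) and \(x_{\rm min}\) can be reproduced with a short numerical sketch; writing \(x\equiv\pi m_{e}^{2}M/e\), one seeks the root of the logarithm of Eq. (15) above \(x=19\):

```python
# Numerical sketch reproducing x_min of Eq. (16): the larger root of
# x^19 exp(-x) = RHS of Eq. (15), in natural Planck units as in the text.
import numpy as np
from scipy.optimize import brentq
from scipy.special import zeta

e, m_e = 1 / np.sqrt(137.036), 4.19e-23          # positron charge and mass
nu_peak = 0.399
C = nu_peak + np.sqrt(nu_peak**2 + 0.25)
xi = 8 * np.pi**4 * zeta(5) + 75 * np.pi**2 * zeta(7) + 210 * zeta(9)  # ~1764.9
ln_rhs = (np.log(512 * xi * C**9 / (np.pi**8 * e**2))
          + 20 * np.log(np.pi * m_e**2 / e))
# Work with logarithms: f(x) = 19 ln x - x - ln(RHS); the root above x = 19 is x_min.
x_min = brentq(lambda x: 19 * np.log(x) - x - ln_rhs, 100, 5000)
print(f"xi ~ {xi:.1f}, x_min ~ {x_min:.1f}")     # ~2124.7
```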
The Hawking radiation spectrum of the near-extremal black hole with the critical temperature \(T_{\rm BH}^{\rm critical}=\mathcal{C}/\pi M^{3}\) [see Eq. (11)] is dominated, in the large-mass regime (16), by the emission of neutral massless photons that endanger the integrity of the black-hole horizon.
### Summary and discussion
The Penrose cosmic censorship principle asserts that general relativity is a deterministic theory of gravity and that pathological spacetime singularities are always hidden inside of black holes with stable shielding horizons [1; 2].
In the present essay we have explicitly proved that near-extremal (cold and large) black holes may evaporate into naked singularities that violate the cosmic censorship principle. In particular, taking cognizance of the analytically derived relations (11) and (16), one deduces that the threat to the validity of the principle is limited to the extreme physical regime
\[T_{\rm BH}<T_{\rm BH}^{\rm critical}=\frac{\mathcal{C}}{\pi x_{\rm min}^{3}} \cdot\Big{(}\frac{\pi m_{e}^{2}}{e}\Big{)}^{3}\simeq 10^{-140}\simeq 10^{-108} \ ^{\circ}K \tag{17}\]
of ultra-cold black holes.
Since we believe that cosmic censorship should be one of the cornerstones of a self-consistent theory of gravity in curved spacetimes, we here raise the conjecture that, in the yet unknown quantum theory of gravity, the temperatures of well behaved black-hole spacetimes are fundamentally bounded from below by a relation of the form
\[T_{\rm BH}\gtrsim\frac{m_{e}^{6}}{e^{3}}. \tag{18}\]
If the lower bound (18) is indeed respected, then the emission of characteristic quanta from near-extremal black holes would not endanger the validity of cosmic censorship, a principle which is fundamentally important for a self-consistent formulation of the microscopic quantum theory of gravity.
## Acknowledgments
This research is supported by the Carmel Science Foundation. I thank Don Page for interesting correspondence. I would also like to thank Yael Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B. Tea for stimulating discussions.
|
2304.07902 | Origin of filaments in finite-time in Newtonian and non-Newtonian
thin-films | The sticky fluids found in pitcher plant leaf vessels can leave fractal-like
filaments behind when dewetting from a substrate. To understand the origin of
these filaments, we investigate the dynamics of a retreating thin-film of
aqueous polyethylene oxide (PEO) solutions which partially wet polydimethyl
siloxane (PDMS) substrates. Under certain conditions the retreating film
generates regularly-spaced liquid filaments. The early-stage thin-film dynamics
of dewetting are investigated to identify a theoretical criterion for liquid
filament formation. Starting with a linear stability analysis of a Newtonian or
simple non-Newtonian (power-law) thin-film, a critical film thickness is
identified which depends on the Hamaker constant for the fluid-substrate pair
and the surface tension of the fluid. When the measured film thickness is
smaller than this value, the film is unstable and forms filaments as a result
of van der Waals forces dominating its behaviour. This critical film-height is
compared with experimental measurements of film thickness obtained for receding
films of Newtonian (glycerol-water mixtures) and non-Newtonian (PEO) solutions
generated on substrates inclined at angles 0 $^{\circ}$, 30 $^{\circ}$, and 60
$^{\circ}$ to the vertical. The observations of filament and its absence show
good agreement with the theory. The evolution of the thin-film shape is
modelled numerically to show that the formation of filaments arises because the
thin-film equation features a singular solution after a finite-time, hence
termed a "finite-time singularity". | Saksham Sharma, D. Ian Wilson | 2023-04-16T21:49:07Z | http://arxiv.org/abs/2304.07902v1 | # Origin of filaments in finite-time in Newtonian and non-Newtonian thin-films
###### Abstract
The sticky fluids found in pitcher plant leaf vessels can leave fractal-like filaments behind when dewetting from a substrate. To understand the origin of these filaments, we investigate the dynamics of a retreating thin-film of aqueous polyethylene oxide (PEO) solutions which partially wet polydimethyl siloxane (PDMS) substrates. Under certain conditions the retreating film generates regularly-spaced liquid filaments. The early-stage thin-film dynamics of dewetting are investigated to identify a theoretical criterion for liquid filament formation. Starting with a linear stability analysis of a Newtonian or simple non-Newtonian (power-law) thin-film, a critical film thickness is identified which depends on the Hamaker constant for the fluid-substrate pair and the surface tension of the fluid. When the measured film thickness is smaller than this value, the film is unstable and forms filaments as a result of van der Waals forces dominating its behaviour. This critical film-height is compared with experimental measurements of film thickness obtained for receding films of Newtonian (glycerol-water mixtures) and non-Newtonian (PEO) solutions generated on substrates inclined at angles \(0^{\circ}\), \(30^{\circ}\), and \(60^{\circ}\) to the vertical. The observations of filament formation and its absence show good agreement with the theory. Further analysis of the former case, involving a stability analysis of the contact line, yields a prediction of the spacing (wavelength) \(\hat{\lambda_{f}}\) between filaments as \(\hat{\lambda_{f}}\hat{\gamma}\propto Ca_{a}\), where \(Ca\) is the capillary number for contact line motion: our experiments yield \(\hat{\lambda_{f}}\hat{\gamma}\propto Ca^{1.08}\) and earlier studies in the literature reported \(\hat{\lambda_{f}}\hat{\gamma}\propto Ca^{0.945}\). The evolution of the thin-film shape is modelled numerically to show that the formation of filaments arises because the thin-film equation features a singular solution after a finite time, hence termed a "finite-time singularity".
This manuscript was compiled on April 18, 2023
###### Abstract
Receding thin films arise in a range of fields and are important in coating, printing and drying applications, where the stability of the thin film near the moving contact line will determine the uniformity of the product. Our interest in this topic arises from observation of residual filaments left by the evaporation of sessile droplets of the sticky digestive fluid secreted by \(N\). _Rafflesiana_ pitcher plants. Fig. 1 shows that these filaments are formed in the early stages of evaporation, where the shrinking drop forms a receding thin film at the contact line, and can be generated artificially by sucking liquid from the drop (Fig. 1(c)). The filaments exhibit regular spacing on relatively smooth surfaces, indicating that the filaments arise from an instability in the thin film rather than contact line pinning.
Pitcher plant fluids are non-Newtonian aqueous solutions containing long-chain polysaccharides (1). Deblais et al. (2) observed similar filaments with thin films of viscous Newtonian and non-Newtonian liquids (glycerine and synthetic polymer solutions, respectively) generated by a blade arrangement which allowed the initial height and contact line velocity \(\hat{U}\) to be controlled independently. As the contact line velocity \(\hat{U}\) decreased, uniform films with a straight contact line were replaced by ones with regularly spaced cusps and rivulets: for Newtonian liquids the rivulets were unstable and gave rise to droplets, whereas the higher extensional viscosity of the polymer solutions stabilised the filaments and gave patterns analogous to those in Fig. 1(c). They reported that the threshold of cusp (and filament) formation corresponded to a critical value of the capillary number, \(Ca=\hat{\eta}\hat{U}/\hat{\gamma}\), where \(\hat{\eta}\) is the apparent viscosity and \(\hat{\gamma}\) the surface tension, but did not provide a theoretical treatment.
In the present work, we provide a theoretical explanation for the observed filament formation. It draws on recent experimental work by Xue & Stone on a liquid film draining down a glass slide under the action of gravity. The thin-film non-linear PDE used to model the film considered three forces: viscous resistance, gravity, and surface tension. A major assumption there was the consideration of perfectly wetting (zero degree contact angle) fluids, arising from the difficulty of incorporating partial wetting in the model, as remarked by co-author Stone when presenting this work at the GKB 100 symposium (3). Partial wetting is included in the present study, following the work by Witelski and coworkers on the stability analysis and evolution of thin-films ((4), (5), (6) and (7)) by employing expressions for the van der Waals forces' dependency on the film thickness. We demonstrate that the formation of cusps and filaments observed during the dewetting of pitcher plant fluids has its roots in the inherent instability of the thin-film. Strictly speaking, this instability gives rise to a 'finite-time singularity'.
The article is organised as follows. The hydrodynamic equations for the thin-film are presented in Sec. 1 and stability analysis is then performed to find the critical criterion for filament formation (Sec. 2). The rationale behind the finite-time singularity feature of the thin-film equation is discussed in Sec. 3 along with some numerical investigations. The criterion is compared with experimental results in Sec. 4. A scaling law for the spacing between filaments is derived and compared with experimental data in Sec. 5. The key findings and potential further directions for this work are discussed in Sec. 6.
Figure 1: Filaments formed by overnight evaporation of ground pitcher _N. Rafflesiana_ fluid on (a) polystyrene and (b) borosilicate glass surfaces. (c) Forced shrinkage of a sessile drop of _N. Rafflesiana_ fluid on borosilicate glass, caused by withdrawal of liquid via the pipette labelled P. Evaporation in (a) and (b) results in concentration of dissolved species in the drop, slowing evaporation and triggering the transition from a receding film to a pinned state with 'coffee ring effect' features. Liquid removal in (c) is faster and there is little evaporation: filaments are observed once the contact line starts to recede.
## 1 Theory
Consider a thin liquid film of local thickness \(\hat{h}(\hat{x},\hat{y},\hat{t})\) on the plane (\(Oxy\)) as shown in Fig. 2(a), with height \(\hat{h}\) pointing towards \(Oz\). \(Ox\) points horizontally along the liquid-substrate-air contact line and \(Oy\) points directly down the slope. A standard thin-film equation for a non-Newtonian liquid (8) exhibiting power-law behaviour with exponent \(n\) is
\[3\hat{\eta}\frac{\partial\hat{h}}{\partial\hat{t}}=\left(\frac{\partial}{\partial\hat{x}}\left[\hat{m}(\hat{h})\frac{\partial}{\partial\hat{x}}(\hat{p})\right]-3\hat{\rho}g^{\prime}\hat{h}^{2}\frac{\partial\hat{h}}{\partial\hat{y}}\right)^{1/n}\] [1]
Here \(\hat{\eta}\) is short-form for the apparent viscosity \(\hat{\eta}(\dot{\gamma})\), where \(\dot{\gamma}\) is the shear-rate of the thin-film defined at its free surface and given as \(\hat{U}/\hat{h}\) (see Fig. 2(b.ii)), \(\hat{\rho}\) is the density, \(\hat{m}(\hat{h})\) the mobility coefficient, \(\hat{p}\) the hydrodynamic pressure \(\hat{p}(\hat{x},\hat{t})\) of the thin-film, and \(g^{\prime}=g\cos{\alpha}\); the hat denotes that a quantity is dimensional. The boundary condition at the film-substrate interface (on surface \(Oxy\)) gives \(\hat{m}(\hat{h})=\hat{h}^{k}\) where \(k\in[1,3]\). The pressure \(\hat{p}(\hat{x},\hat{t})\) in the thin-film (6, see Eq. (1.3)) is given by
\[\hat{p}=\hat{\Pi}(\hat{h})-\hat{\gamma}\frac{\partial^{2}\hat{h}}{\partial \hat{x}^{2}}\] [2]
where \(\hat{\Pi}(\hat{h})\) is the disjoining pressure which accounts for the van der Waals interactions between the thin-film and the substrate. The second term on the RHS accounts for the effect of surface tension \(\hat{\gamma}\) and the curvature of the thin-film. The disjoining pressure \(\hat{\Pi}(\hat{h})\) can be written in the form
\[\hat{\Pi}(\hat{h})=\frac{\hat{A}}{\hat{h}^{3}}\left[1-\frac{\hat{h}_{UTF}}{ \hat{h}}\right] \tag{3}\]
where van der Waals forces are characterised by the Hamaker constant \(\hat{A}\) (\(\hat{A}>0\) means the interaction is hydrophobic and \(\hat{A}<0\) hydrophilic) and \(\hat{h}_{UTF}\) is the height of the adsorbed precursor film in Fig. 2(b) (6). There are two timescales in Eq. (1): i) \(\hat{T}_{x}\), when the pressure term (first term on the RHS) is dominant, and ii) \(\hat{T}_{y}\), when the gravity term (second term on the RHS) dominates. A scaling analysis gives these timescales as
\[\hat{T}_{x}=\frac{3\hat{\eta}\hat{L}_{x}^{4}}{\bar{h}^{3}\hat{\gamma}};\quad\hat{T}_{y}=\frac{\hat{\eta}\hat{L}_{y}}{\hat{\rho}g^{\prime}\bar{h}^{2}}\] [4]
with \(\hat{L}_{x}=\bar{h}\sqrt{\hat{\gamma}\hat{h}_{UTF}^{2}/\hat{A}}\) (see (6), p. 016301-2), and length scale \(\hat{L}_{y}\) the length of the glass substrate in the \(Oy\) direction (25 mm in our tests). With \(\bar{h}=O(10^{-4})\) m, \(\hat{A}=O(10^{-17})\) J, \(\hat{h}_{UTF}=O(10^{-9})\) m and \(\hat{\gamma}=O(10^{-2})\) N/m, this gives \(\hat{T}_{x}=O(10^{-8})\) s and \(\hat{T}_{y}=O(1)\) s, so the first term on the RHS in Eq. (1) is expected to be dominant.
This means that early-stage dynamics of the thin-film, when surface tension and van der Waals interactions dominate, are characterised by a time scale of \(10^{-8}\) s, compared to the intermediate stage where gravity is important. Similar arguments about time scales have been reported previously (6, p. 016301-2). It means that, as soon as the thin film is deposited on the substrate (or, in these experiments, the substrate is withdrawn from the liquid pool), an interplay between surface tension and van der Waals interaction forces begins.
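As a sanity check of these orders of magnitude, the short sketch below evaluates Eq. (4) with the values quoted above; the viscosity is an assumed placeholder, and it cancels from the ratio \(\hat{T}_{x}/\hat{T}_{y}\), so the dominance of the pressure term does not depend on it.

```python
# Order-of-magnitude evaluation of the timescales in Eq. (4) with the values
# quoted in the text; eta is an assumed placeholder viscosity (Pa s).
h_bar, A, h_utf, gamma = 1e-4, 1e-17, 1e-9, 1e-2   # m, J, m, N/m
rho, g_prime, L_y, eta = 1e3, 9.8, 25e-3, 1.0       # SI units

L_x = h_bar * (gamma * h_utf**2 / A) ** 0.5          # ~3e-6 m
T_x = 3 * eta * L_x**4 / (h_bar**3 * gamma)          # pressure-dominated scale
T_y = eta * L_y / (rho * g_prime * h_bar**2)         # gravity-dominated scale
print(f"T_x ~ {T_x:.1e} s, T_y ~ {T_y:.1e} s, T_x/T_y ~ {T_x/T_y:.1e}")
```

With this placeholder the ratio comes out near \(10^{-10}\); the individual magnitudes track the assumed viscosity, but the separation of scales is viscosity-independent.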
The experiments reported here employed Newtonian solutions (mixtures of glycerol and water) of different viscosity as well as non-Newtonian ones (mixtures of polyethylene oxide, PEO, in water). Scaled viscosity is used to label the liquids when presenting results, given by \(\hat{\eta}/(\hat{\rho}\hat{\gamma}^{3}/g^{\prime})^{1/4}\). The PEO solutions exhibit shear-thinning, which can be described by the Cross model (see Supplementary Material). The shear rate in the experimental films lie in the power law regime for this model (see Supp. Fig. S1), which is why this model is used in Eq. (1).
Hence, in the next section, we focus on the early-stage dynamics of the thin-film by ignoring the gravity term in Eq. (1), _viz._
\[3\hat{\eta}\frac{\partial\hat{h}}{\partial\hat{t}}=\frac{\partial}{\partial \hat{x}}\left[\hat{h}^{k}\frac{\partial}{\partial\hat{x}}\left(\frac{\hat{A} }{\hat{h}^{3}}\left[1-\frac{\hat{h}_{UTF}}{\hat{h}}\right]-\hat{\gamma}\frac{ \partial^{2}\hat{h}}{\partial\hat{x}^{2}}\right)\right]^{1/n}.\] [5]
Eq. (5) is non-dimensionalised by introducing scales
\[\hat{h}=\bar{h}h,\quad\hat{x}=\hat{L}_{x}x,\quad\hat{t}=\hat{T}_{x}t,\quad\hat{h}_{UTF}=\bar{h}\zeta\] [6]
where the variables without hats are dimensionless and \(\zeta=\hat{h}_{UTF}/\bar{h}<1\). This yields
\[\frac{\partial h}{\partial t}=\frac{\partial}{\partial x}\left[h^{k}\frac{\partial}{\partial x}\left(\Gamma(h)-\frac{\partial^{2}h}{\partial x^{2}}\right)\right]^{1/n}\] [7]
where \(\Gamma(h)\) is the dimensionless form of \(\hat{\Pi}(h)\) and can be written as
\[\Gamma(h)=\frac{\zeta^{2}}{h^{3}}\left[1-\frac{\zeta}{h}\right]\] [8]
and the dimensionless pressure term is
\[p=\Gamma(h)-\frac{\partial^{2}h}{\partial x^{2}}\] [9]
which will be used in the next section to simplify the analysis of this PDE.
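As a concrete illustration of how Eqs. (7)-(9) can be advanced in time, the following is a minimal explicit finite-difference sketch for the Newtonian case \(n=1\) with mobility exponent \(k=3\); the grid, time step and initial perturbation are illustrative assumptions.

```python
# Minimal explicit time-stepping sketch of the dimensionless thin-film
# equation (7) with n = 1, k = 3 and periodic boundaries.
import numpy as np

nx, dx, dt, zeta, k = 200, 0.05, 1e-7, 0.1, 3
x = np.arange(nx) * dx
h = 1.0 + 0.01 * np.cos(2 * np.pi * x / (nx * dx))   # perturbed flat film

def rhs(h):
    lap = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx**2     # periodic h_xx
    p = zeta**2 / h**3 * (1 - zeta / h) - lap                   # pressure, Eq. (9)
    h_mid = 0.5 * (h + np.roll(h, -1))                          # h at cell faces
    flux = h_mid**k * (np.roll(p, -1) - p) / dx                 # h^k p_x at faces
    return (flux - np.roll(flux, 1)) / dx                       # divergence of flux

for _ in range(1000):                                           # explicit Euler
    h = h + dt * rhs(h)
print(h.min(), h.max())
```

For this short domain all Fourier modes are linearly stable and the perturbation simply decays; probing the van der Waals-driven instability would require thinner films or longer domains.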
## 2 Stability analysis of the steady state solution of the thin-film PDE
We investigate the tendency of a thin-film to maintain (or lose) its stability at an early stage. Since the formation of filaments happens almost instantaneously, the phenomenon is primarily linked to the interplay of surface tension and van der Waals interaction forces. A linear stability analysis is performed with respect to arbitrary infinitesimal perturbations. We start by finding a Lyapunov function for Eq. (7) - in effect, modelling it as a dynamical system - because the existence of such a function provides an indication of the nature of stability of the system (9). Consider the following integral
\[I[h]=\int_{0}^{1}\left(\frac{1}{2}\left|\frac{\partial h}{\partial x}\right|^{2} -\varsigma(h)\right)dx\] [10]
where \(\partial\varsigma(h)/\partial h=\Gamma(h)\). The reason behind choosing this integral is that its first variation with respect to \(h\) has the property
\[\frac{\delta I}{\delta h}=\frac{\delta}{\delta h}\left(\frac{1}{2}\left|\frac{\partial h}{\partial x}\right|^{2}-\varsigma(h)\right)=\frac{\partial^{2}h}{\partial x^{2}}-\Gamma(h)=-p.\] [11] |
2303.12416 | Determination of the perturbations in the ionosphere produced by
tsunamis through GNSS observations | During the propagation of a tsunami, gravity and sound waves can be produced,
spreading from its source to the ionosphere's upper layers, thus generating
perturbed electron densities in its E and F regions. These ionospheric
disturbances can be studied in detail using measurements of the ionosphere's
Total Electron Content (TEC), registered by permanent GNSS stations. In this
contribution, the foundations of the VARION method (Variometric Approach for
Real-time Ionosphere Observation) are described in order to obtain TEC's
temporal variations with the aim of detecting such ionospheric disturbances.
Moreover, the numerical results obtained after applying this method to real
cases of tsunamis monitored by those satellites whose Ionospheric Pierce Points
(IPPs) are closest to the tsunami source are presented. Lastly, based on these
ionospheric perturbations reflected in the signals emitted by the satellites, a
preliminary design is described for its potential integration into a Tsunami
early Warning System (TWS) for the Iberian Peninsula. | Leonor Cui Domingo Centeno, VÃctor Puente GarcÃa | 2023-03-22T09:31:24Z | http://arxiv.org/abs/2303.12416v1 | # Determination of the perturbations in the ionosphere produced by tsunamis through GNSS observations
###### Abstract
During the propagation of a tsunami, gravity and sound waves can be produced, spreading from its source to the ionosphere's upper layers, thus generating perturbed electron densities in its E and F regions. These ionospheric disturbances can be studied in detail using measurements of the ionosphere's Total Electron Content (TEC), registered by permanent GNSS stations. In this contribution, the foundations of the VARION method (Variometric Approach for Real-time Ionosphere Observation) are described in order to obtain TEC's temporal variations with the aim of detecting such ionospheric disturbances. Moreover, the numerical results obtained after applying this method to real cases of tsunamis monitored by those satellites whose Ionospheric Pierce Points (IPPs) are closest to the tsunami source are presented. Lastly, based on these ionospheric perturbations reflected in the signals emitted by the satellites, a preliminary design is described for its potential integration into a Tsunami early Warning System (TWS) for the Iberian Peninsula.
Tsunami, Ionosphere, GNSS, Total Electron Content (TEC), Ionospheric Pierce Point (IPP), Tsunami Warning System (TWS).
## 1 Introduction
In recent years, the demonstrated ability of GNSS to monitor a variety of events accurately, rapidly and cost-effectively has allowed its use in countless different applications to grow considerably. Particularly, several studies have been carried out to analyze the behavior of the ionosphere in the event of natural hazards through the information that can be obtained, even in real time, from these GNSS observations. The ionosphere is the ionized part of the upper layers of the Earth's atmosphere, extending from 50 to 1000 kilometers above Earth's surface. This ionization is caused directly by solar radiation activity, which modifies the composition of the ionosphere and produces disturbances in the ionospheric plasma densities. Depending on the degree of ionization, different regions of the ionosphere can be considered, with varying compositions at each height level. These regions are labelled as layers \(D\), \(E\), \(F\)1 and \(F\)2, where the \(D\) region covers the upper part of the mesosphere and the \(F\) region reaches up to part of the exosphere (Kelley, 2009).
In this article, we focus on the use of the ionosphere's properties for tsunami detection. According to the UNESCO IOC NOAA International Tsunami Information Center1, the majority of tsunami sources stem from earthquakes, though tsunamis can also be triggered by other phenomena such as landslides, volcanic eruptions, thunderstorms, deep convection events, space weather effects and a variety of anthropogenic events (explosions, rocket launches, etc.).
Footnote 1: [http://itic.ioc-unesco.org/](http://itic.ioc-unesco.org/)
These changes in the ionosphere can be analyzed using measurements of the Total Electron Content (TEC) of the ionosphere --continuously collected by operating permanent GNSS ground-based receivers-- through a method based on a Variometric Approach for Real-time Ionosphere Observation, known by its acronym, VARION, as originally introduced in Savastano et al. (2017). Therefore, this study aims to analyze the numerical results obtained using this efficient and successful method in different potential tsunami hazard scenarios in order to outline the feasibility of integrating it into a Tsunami early Warning System.
## 2 Variometric Approach for Real-time Ionosphere Observation Method
The Variometric Approach for Real-time Ionosphere Observation (VARION) method is a GNSS processing algorithm that focuses on real-time estimation of Slant Total Electron Content (STEC) variations. STEC is the measurement of the total number of free electrons in a unit cross column section along the ray path between the satellite and the receiver. This STEC measurement is represented in red in Figure 1.
These estimations of STEC are affected by the transmitted signals as they travel through the ionosphere. Moreover, the estimates also depend on the local time (LT) hours of the day. Particularly, during the day, the ionosphere is more ionized than at night, reaching its highest degree of ionization between 12 LT and 16 LT hours. There is also a dependence on the receiver's latitude and longitude, as well as the solar and geomagnetic activity present in this layer.
Furthermore, the VARION method is based on the Variometric Approach for Displacements Analysis Stand-alone Engine, known by its acronym VADASE, which consists of an estimation of the ground velocities and displacements induced by several earthquakes in a real-time scenario (Benedetti et al., 2015).
### Mathematical Model
The basis for the VARION method lies on single time differences of geometry-free combinations of GPS carrier-phase measurements, using a dual-frequency GPS receiver in a stand-alone operational mode. In this section, we describe the mathematical foundations of the VARION method, used for providing the time series of Slant TEC during the time in which a tsunami occurred with the aim of its detection.
Let \(L_{IR}^{S}(t)\) be the standard raw carrier-phase observations, then in length units it is given by:
\[L_{IR}^{S}(t)=\rho_{R}^{S}(t)+c(\delta t_{R}(t)-\delta t^{S}(t))+T_{R}^{S}(t)- I_{IR}^{S}(t)+\lambda_{i}N_{IR}^{S}(t)+p_{R}^{S}(t)+m_{R}^{S}(t)+\varepsilon_{R}^{S}(t), \tag{1}\]
where the subscripts, \(i\) and \(R\), respectively correspond to the signal frequency and the receiver, and the superscript \(S\) refers to the satellite. Furthermore, \(\lambda\) is the carrier-phase wavelength, \(\rho_{R}^{S}\) the geometric range, \(c\) the speed of light, \(\delta t_{R}\) and \(\delta t^{S}\) are respectively the receiver and the satellite clock errors, \(T_{R}^{S}\) and \(I_{R}^{S}\) respectively refer to the tropospheric and the ionospheric delays along the satellite-receiver path, \(N_{IR}^{S}\) is the ambiguity of carrier-phase, \(p_{R}^{S}\) gathers the sum of other effects, such as variations on phase center of antenna, phase wind-up and the relativistic effects, and lastly, \(m_{IR}^{S}\) and \(\varepsilon_{R}^{S}\) correspond to the multipath effect and other residuals such as the noise, respectively.
Assuming that no cycle slips occur, the unknown ambiguity of carrier-phase remains constant between two consecutive epochs. Moreover, due to the dual frequency GPS observation, we consider the so-called geometry-free combination or \(L_{4}\) combination. This combination cancels the geometric part of the measurement, leaving all the frequency-dependent effects. It is expressed as \(L_{4}=L_{1}-L_{2}\), where \(L_{1}\) and \(L_{2}\) are respectively the GPS signals.
Figure 1: Slant TEC measurement between satellite-receiver ray
Therefore, differentiating (I) with respect to time between two consecutive epochs (\(t,t+1\)), we obtain the geometry-free time single-difference observation equation:
\[L_{AR}^{S}(t+1)-L_{AR}^{S}(t)=\frac{f_{1}^{2}-f_{2}^{S}}{f_{2}^{2}}\left(I_{AR}^{ S}(t+1)-I_{AR}^{S}(t)\right), \tag{2}\]
wherein \(f_{1}\)and \(f_{2}\) are respectively the \(L_{1}\) and \(L_{2}\) carrier frequencies for GPS. Their values are derived from the fundamental frequency \(f_{0}=10.23\) MHz as \(f_{1}=154f_{0}=1575.42\) MHz and \(f_{2}=120f_{0}=1227.60\) MHz.
Hence, considering the ionospheric refraction along the geometric range in (2), the Slant TEC variations between two consecutive epochs are:
\[\delta TEC(t)=\frac{f_{1}^{2}f_{2}^{2}}{A(f_{1}^{2}-f_{2}^{2})}\left(L_{4R}^{S}(t+1)-L_{4R}^{S}(t)\right), \tag{3}\]
with \(A\) being a constant derived from the plasma frequency given by Teunissen (2017):
\[A=\frac{1}{2}\frac{q_{e}^{2}}{4\pi^{2}\varepsilon_{0}m_{e}}=40.3081\cdot 10^{16}\ \ {\rm m}^{3}/{\rm s}^{2}, \tag{4}\]
where in turn \(q_{e}\) is the electron charge, \(\varepsilon_{0}\) is the dielectric constant of vacuum and \(m_{e}\) is the electron mass.
Afterwards, estimations of Slant TEC variations are computed by numerical integration of (3) over a time interval, \([t_{0},t_{N}]\), during which the tsunami occurred. Thus, integration by the Trapezoidal Rule yields:
\[\Delta TEC(t_{N},t_{0}):=\int_{t_{0}}^{t_{N}}\delta TEC(t)\,dt\approx\sum_{k=1}^{N}\frac{\delta TEC(t_{k-1})+\delta TEC(t_{k})}{2}\Delta t_{k}, \tag{5}\]
where \(\Delta t_{k}=t_{k}-t_{k-1}\) is the time grid spacing. Without loss of generality, we can consider an equally spaced grid of size \(\Delta t_{k}=h\), therefore having an error term of order \(\mathcal{O}(h^{2})\). These TEC measurements are expressed in TEC Units (TECU), where 1 TECU corresponds to \(10^{16}\) electrons per square meter.
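A minimal sketch of this core computation, Eqs. (3) and (5), is given below; the carrier-phase inputs are synthetic placeholders in metres and are assumed to be free of cycle slips.

```python
# Sketch of the VARION core: geometry-free differences (Eq. (3)) followed by
# trapezoidal accumulation (Eq. (5)). Inputs are carrier phases in metres.
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6              # GPS L1/L2 carrier frequencies (Hz)
A = 40.3081e16                             # constant of Eq. (4)

def varion(L1, L2, dt=30.0):
    L4 = L1 - L2                                           # geometry-free combination
    dL4 = np.diff(L4)                                      # single time differences
    dtec = F1**2 * F2**2 / (A * (F1**2 - F2**2)) * dL4     # Eq. (3), in TECU
    inc = 0.5 * (dtec[:-1] + dtec[1:]) * dt                # trapezoid of Eq. (5)
    return np.concatenate(([0.0], np.cumsum(inc)))         # accumulated Delta TEC

# Synthetic demo: a slowly varying L4 signal sampled every 30 s.
t = np.arange(0, 1800, 30.0)
L1 = 0.02 * np.sin(2 * np.pi * t / 900)
L2 = np.zeros_like(t)
print(varion(L1, L2)[:5])
```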
The VARION method may use the Klobuchar broadcast ionospheric model or even global maps, such as IONEX files, regarding the geolocation of the TEC measurements. Furthermore, it should be pointed out that this method may be implemented for all GNSS constellations, considering their observation codes, that is (L1C, L2W) for GPS, (L1C, L2C) or (L1P, L2P) for GLONASS, (L1C, L5Q) or (L1X, L5X) for Galileo and finally (L2I, L6I) or (L2I, L7I) for BeiDou.
Nevertheless, due to the ionospheric effects on the signals emitted between satellite and receiver, it is also crucial to analyze the geometry described by them during the time interval in which the tsunami occurs. The main objective consists of selecting those satellites that best monitor the tsunami.
### Criteria for Satellite Selection
Most of the algorithms assume that the estimated TEC is mainly contributed by the \(F_{2}\) region of the ionosphere, which justifies modeling the ionosphere as a single thin ionospheric shell, located at the height of the \(F_{2}\) peak. This assumption allows us to attribute the estimated TEC to a specific point, termed the Ionospheric Pierce Point (IPP), defined as the point of intersection between the satellite-receiver line of sight and a thin ionospheric shell located at a fixed altitude, nominally taken as 350 km.
According to El-Gizawy (2003), the location of the IPP is computed using the ellipsoidal receiver coordinates in GRS80 geodetic reference system, (\(\mathbf{\phi}_{\mathbf{r}},\mathbf{\lambda}_{\mathbf{r}}\)), and the satellite ephemeris. This configuration, shown in Figure 2, allows us to obtain the Ionospheric Pierce Point (IPP) coordinates, (\(\mathbf{\phi}_{IPP},\mathbf{\lambda}_{IPP}\)), given by:
\[\begin{cases}\phi_{IPP}=\phi_{r}+\psi\cos A,\\ \lambda_{IPP}=\lambda_{r}+\dfrac{\psi\sin A}{\cos\phi_{IPP}},\end{cases} \tag{6}\]
where \(A\) is the azimuth angle of the satellite and \(\psi\) the offset angle, defined from the center of a supposedly spherical Earth, expressed as \(\psi=E^{\prime}-E\), with \(E^{\prime}\) and \(E\) being the elevation angles at the IPP and the user's receiver respectively.
The elevation angle \(E^{\prime}\) can be expressed as:
\[E^{\prime}=\arccos\left(\frac{R_{E}}{R_{E}+h_{iono}}\cos E\right), \tag{7}\]
being \(R_{E}\) the mean radius of the spherical Earth and \(h_{iono}\) the height of the IPP, set as 350 km.
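A short sketch of Eqs. (6) and (7) is given below; angles are in radians and the mean Earth radius is an assumed value.

```python
# Sketch of the IPP computation of Eqs. (6)-(7).
import numpy as np

R_E, H_IONO = 6371e3, 350e3          # mean Earth radius and shell height (m)

def ipp(phi_r, lam_r, elev, azim):
    e_prime = np.arccos(R_E / (R_E + H_IONO) * np.cos(elev))   # Eq. (7)
    psi = e_prime - elev                                       # offset angle
    phi_ipp = phi_r + psi * np.cos(azim)                       # Eq. (6)
    lam_ipp = lam_r + psi * np.sin(azim) / np.cos(phi_ipp)
    return phi_ipp, lam_ipp

# Example: receiver at 38 N, 141 E; satellite at 45 deg elevation, due east.
print(np.degrees(ipp(np.radians(38), np.radians(141),
                     np.radians(45), np.radians(90))))
```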
We seek to determine the satellite coordinates, expressed by its elevation and azimuth in (6), from the known receiver and satellite coordinates, expressed in the Earth-Centered Earth-Fixed (ECEF) coordinate system \((x,y,z)\). For this purpose, as defined in Subirana (2011), by means of a transformation into the local East, North, Up (ENU) coordinate system --done through two rotation matrices--, we obtain the elevation and azimuth of the satellite in the local (ENU) coordinate system, given by:
\[\begin{cases}E=\arcsin(\bar{\rho}\cdot\bar{u}),\\ A=\arctan\left(\dfrac{\bar{\rho}\cdot\bar{e}}{\bar{\rho}\cdot\bar{n}}\right),\end{cases} \tag{8}\]
where \(\bar{\rho}\) is the line-of-sight unit vector and, finally, the unit vectors \(\bar{e}\), \(\bar{n}\) and \(\bar{u}\) are defined as:
\[\begin{cases}\bar{e}=(-\sin\lambda_{r},\cos\lambda_{r},0),\\ \bar{n}=(-\cos\lambda_{r}\sin\phi_{r},-\sin\lambda_{r}\sin\phi_{r},\cos\phi_{r}),\\ \bar{u}=(\cos\lambda_{r}\cos\phi_{r},\sin\lambda_{r}\cos\phi_{r},\sin\phi_{r}).\end{cases} \tag{9}\]
Depending on the moment magnitude, denoted by \(Mw\), we define a threshold distance, \(\kappa\), for those satellites whose IPPs are closest to the tsunami source coordinates, \((\phi_{\text{res}},\lambda_{\text{res}})\). According to Kamogawa et al. (2016), this threshold distance is such that if \(Mw\geq 8.4\), then \(\kappa\) is set as 100 km, while if \(Mw<8.4\), \(\kappa\) takes 50 km. Hence, we select those satellites whose IPP coordinates lie within a circle of radius \(\kappa\) centered on the tsunami source, that is:
\[(\phi_{IPP}-\phi_{\text{res}})^{2}+(\lambda_{IPP}-\lambda_{\text{res}})^{2}\leq\kappa^{2}. \tag{10}\]
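The sketch below combines Eqs. (8)-(10); converting \(\kappa\) from kilometres to degrees through the mean Earth radius is a simplification introduced here for illustration only.

```python
# Sketch of Eqs. (8)-(10): satellite elevation/azimuth in the receiver's ENU
# frame, and the IPP-to-source proximity test. ECEF vectors in metres;
# phi_r, lam_r in radians; near_source takes coordinates in degrees.
import numpy as np

def elev_azim(r_rec, r_sat, phi_r, lam_r):
    rho = (r_sat - r_rec) / np.linalg.norm(r_sat - r_rec)   # line-of-sight unit vector
    e = np.array([-np.sin(lam_r), np.cos(lam_r), 0.0])                       # east
    n = np.array([-np.cos(lam_r)*np.sin(phi_r), -np.sin(lam_r)*np.sin(phi_r),
                  np.cos(phi_r)])                                            # north
    u = np.array([np.cos(lam_r)*np.cos(phi_r), np.sin(lam_r)*np.cos(phi_r),
                  np.sin(phi_r)])                                            # up
    return np.arcsin(np.dot(rho, u)), np.arctan2(np.dot(rho, e), np.dot(rho, n))

def near_source(phi_ipp, lam_ipp, phi_src, lam_src, Mw):
    kappa = 100e3 if Mw >= 8.4 else 50e3                    # threshold distance (m)
    kappa_deg = np.degrees(kappa / 6371e3)                  # crude metre-to-degree map
    return (phi_ipp - phi_src)**2 + (lam_ipp - lam_src)**2 <= kappa_deg**2   # Eq. (10)

# Demo: receiver at (38 N, 141 E) and a satellite on the same radial.
phi_r, lam_r = np.radians(38.0), np.radians(141.0)
r_rec = 6371e3 * np.array([np.cos(phi_r)*np.cos(lam_r),
                           np.cos(phi_r)*np.sin(lam_r), np.sin(phi_r)])
r_sat = 26560e3 * r_rec / np.linalg.norm(r_rec)
E, A = elev_azim(r_rec, r_sat, phi_r, lam_r)
print(np.degrees(E), np.degrees(A))
```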
## 3 Real cases of Tsunamis
For the algorithm validation, we consider the ionospheric effects generated in the following real cases of tsunamis. Firstly, we consider the one that occurred in the Tohoku region of Japan on the 11th of March 2011 at 05:46:24 UTC. This undersea megathrust earthquake had a magnitude of 9.0-9.1, with its epicenter located at \(\phi_{\text{res}}=38^{\circ}.297\) N and \(\lambda_{\text{res}}=142^{\circ}.373\) E, in the north-western Pacific Ocean, approximately 72 km east of the Oshika Peninsula of Tohoku, at a relatively shallow depth of 32 km. It was the most powerful earthquake ever recorded in Japan, and hence it is known as the _Great East Japan Earthquake_, lasting approximately six minutes and causing a consequential tsunami (Shibahara, 2011).
Besides, the Tonga tsunami in Oceania occurred on the 15th of January 2022 at 04:14:45 UTC. It was caused by the eruption of a submarine volcano, Hunga Tonga-Hunga Ha'apai, in the Tonga archipelago located in the southern Pacific Ocean, whose coordinates are \(\phi_{\text{res}}=20^{\circ}.546\) S and \(\lambda_{\text{res}}=175^{\circ}.390\) W. The eruption triggered several tsunamis, not only in Tonga, but also in Fiji, American Samoa, Vanuatu and along the Pacific coast, including damaging tsunamis in New Zealand, Japan, the United States, the Russian Far East, Chile and Peru. It was the largest volcanic eruption since the 1991 eruption of Mount Pinatubo, and the most powerful eruption since the 1883 eruption of Krakatoa (Denamiel, 2022).
Figure 2: Satellite-receiver geometry
In order to analyze the estimations of Slant TEC with the VARION algorithm, in the following section we describe the data used for each real case of tsunami.
## 4 Dataset
First of all, we selected the reference station receivers closest to the tsunami source from a map list of the International GNSS Service (IGS)2. Moreover, for the date on which each tsunami occurred, RINEX observation files from those receivers were obtained from the Crustal Dynamics Data Information System (CDDIS)3. Finally, the orbit files, Standard Product 3 (SP3), containing a series of position records and clock corrections for each satellite at each selected time epoch, are retrieved from the Center for Orbit Determination in Europe (CODE)4 product series. It should be noted that the satellite position is only needed to compute the position of the IPPs.
Footnote 2: [https://network.igs.org/](https://network.igs.org/)

Footnote 3: [https://cddis.nasa.gov/archive/gnss/data/](https://cddis.nasa.gov/archive/gnss/data/)

Footnote 4: [http://ftp.aiub.unibe.ch/CODE/](http://ftp.aiub.unibe.ch/CODE/)
Given that the RINEX files are sampled every 30 seconds and the SP3 files every 15 minutes, we compute the satellite positions by means of Lagrange interpolation, considering seventh-order interpolating polynomials based on 4 points on each side of the signal emission time, \(T_{emission}\), which is computed as \(T_{emission}=T_{reception}-\Delta t\), where \(T_{reception}\) is the signal reception time and \(\Delta t\) the signal travel time, expressed as \(\Delta t=\frac{Q}{c}\), with \(Q\) being the satellite-receiver pseudorange measurement and \(c\) the speed of light. Thus, for the validation of the algorithm, the numerical results for each real case of a tsunami are presented in the next section.
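A minimal sketch of this interpolation step is given below, using synthetic SP3-like nodes and a hypothetical pseudorange.

```python
# Seventh-order Lagrange interpolation of an SP3 orbit coordinate at the
# emission time (four 15-min epochs on each side of it).
import numpy as np

def lagrange_interp(t_nodes, x_nodes, t):
    """Evaluate the degree-(len(t_nodes)-1) Lagrange polynomial at t."""
    x = 0.0
    for j in range(len(t_nodes)):
        lj = 1.0
        for m in range(len(t_nodes)):
            if m != j:
                lj *= (t - t_nodes[m]) / (t_nodes[j] - t_nodes[m])
        x += x_nodes[j] * lj
    return x

c = 299792458.0
def emission_time(t_reception, Q):
    return t_reception - Q / c          # Q: pseudorange in metres

# Demo with 8 synthetic epochs (900 s spacing) for one coordinate:
t_nodes = np.arange(8) * 900.0
x_nodes = 2.0e7 + 50.0 * t_nodes        # placeholder orbit coordinate (m)
t_em = emission_time(3200.0, 2.2e7)
print(lagrange_interp(t_nodes, x_nodes, t_em))
```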
## 5 Numerical Results
To start with, for each case we show the stations chosen closest to the location where the tsunami occurred, among those that indeed have data in the respective RINEX and SP3 files. From the RINEX files, we retrieved daily 30-second data for the Tohoku event and hourly 30-second data for the Tonga event. Note that, despite the fact that the Japanese region has a very dense GNSS network, for this date only GPS data were available (other constellations were not sampled). Besides, with regard to the SP3 files, final orbits with a 15-minute sample interval are considered for better accuracy in the numerical results. This way, Figure 3 displays all the possible receivers for both cases, from which we chose the ones colored in red, discarding the green ones. Likewise, the Tohoku earthquake's epicenter and the location of the volcanic eruption for Tonga are represented with a blue pin symbol.
As we can see, there are several possible stations for each event; however, not all of them provided data for the respective tsunami dates, so we selected those shown in red: the MIZU, USUD and TSKB stations for Japan, and FTNA, TONG and SAMO for Tonga.
For each station, Figure 4 shows, in a polar plot for each case of tsunami, all the GPS satellite tracks, represented by their azimuth and elevation coordinates, expressed in the ENU coordinate system and measured from the chosen receivers. In both cases, an elevation mask of \(10^{\circ}\) has been considered in order to disregard satellites that appear close to the horizon, as those at low angles present more atmospheric noise than satellites orbiting higher above the horizon, and they are subject to signal degradation such as cycle slips or even multipath effects.
Figure 3: Map of possible IGS stations
As can be seen, owing to the threshold distance \(\kappa\), dependent on the moment magnitude, of all possible satellites, colored in blue, we are left with those whose IPP coordinates fulfill the equation of a circle established in (10). Therefore, the polar plots of the satellite paths at the Ionospheric Pierce Points in azimuth-elevation coordinates are colored in red. With regard to Figure 4, on the one hand, for the Tohoku event, we have 2, 3 and 3 candidate satellites for the respective stations USUD, TSKB and MIZU. On the other hand, we have 3, 2 and 3 candidate satellites for the FTNA, SAMO and TONG stations, respectively. The number of satellites with IPP coordinates closest to the tsunami source mainly depends on different aspects, such as the GNSS data available for those specific dates; the fact that there may be no satellites in view from the station --since the emitted signal does not pass through the TEC depression in the ionosphere, called the Tsunami Ionospheric Hole (TIH), which is produced by the waves after the tsunami occurrence over the area of its origin--; or even the aliasing effect that produces measurement errors in the signal due to an incorrectly adjusted sampling frequency. Note that the differences between Figure 4a and Figure 4b arise from the available datasets, as we selected daily data for Tohoku's stations and hourly data for Tonga's stations.
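The selection of candidate satellites described above amounts to keeping those whose IPP track enters the circle of radius \(\kappa\) around the source. A minimal sketch of this filter is given below, assuming `ipp_tracks` maps each satellite to arrays of IPP latitudes and longitudes in degrees; the function names and the spherical-Earth distance are illustrative assumptions.

```python
import numpy as np

R_E = 6371.0  # mean Earth radius (km)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance (km) between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dl = np.radians(lon2 - lon1)
    a = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R_E * np.arcsin(np.sqrt(a))

def candidate_satellites(ipp_tracks, lat_src, lon_src, kappa_km):
    """Keep satellites whose IPP track enters the circle of radius
    kappa (threshold distance) centred on the tsunami source."""
    return [svn for svn, (lats, lons) in ipp_tracks.items()
            if np.min(great_circle_km(lats, lons, lat_src, lon_src)) <= kappa_km]
```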
In Figures 5 and 6 we plot the Slant TEC time series after applying the VARION method for both real cases of tsunami.
As shown, the dashed line indicates the moment when the event source was triggered, so all the time series are computed from the time of occurrence, spanning from 10 minutes before the rupture to as late as 40 minutes after it, depending on the case. Note that, for each station, we identify the candidate satellites by their SVN number instead of the satellite's PRN, for uniqueness. In addition, leap seconds are added for more accurate estimations, considering 15 and 18 leap seconds for the Tohoku and Tonga events, respectively.
These numerical results show the Slant TEC behavior. At the first stages, the time series in Figures 5 and 6 are statistically stationary, verifying that their mean and variance are both constant over this time period. However, minutes after the main shock, the Slant TEC values begin to oscillate with no clear trend, reflecting the perturbations in the ionosphere layer caused by these tsunamis. Regarding the detection time needed from the origin of the event to detect ionospheric anomalies, we observe the following. For the Tohoku event, all the selected stations monitor these disturbances in less than 15-20 minutes, where the amplitude of the Slant TEC disturbances varies in absolute value in a range of 0.4 TECU. On the other hand, for the Tonga event, the chosen stations take a few minutes more to monitor the perturbations, possibly because of the satellite arrangement. For this case, significant amplitudes of the Slant TEC disturbances of about \(\pm\)1.0 TECU, higher than normal, are found minutes after the volcanic eruption.
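The departure from stationarity described above can also be flagged automatically with a simple threshold rule. The sketch below is only an illustration of one such rule (the window length and the \(4\sigma\) threshold are hypothetical choices, not values used in this work): the quiet interval before the event fixes a baseline, and the first post-event epoch exceeding the threshold is reported.

```python
import numpy as np

def detect_stec_anomaly(t, stec, t_event, quiet_window_s=600.0, n_sigma=4.0):
    """Return the first epoch after t_event at which the Slant TEC departs
    from the pre-event baseline by more than n_sigma quiet-time standard
    deviations, or None if no such epoch exists."""
    quiet = stec[(t >= t_event - quiet_window_s) & (t < t_event)]
    mu, sigma = np.mean(quiet), np.std(quiet)
    after = t >= t_event
    hits = np.abs(stec[after] - mu) > n_sigma * sigma
    return t[after][np.argmax(hits)] if hits.any() else None
```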
Lastly, for analyzing the results, in Figure 7 we plot only the track of candidate satellites whose IPPs coordinates are closest to the tsunami source. These IPPs coordinates are colored in red, while the source and receiver coordinates are respectively colored in blue and green.
Through these numerical results, the VARION algorithm has been demonstrated to be capable of detecting ionospheric perturbations generated by tsunami-driven gravity waves, and it can be considered a novel contribution to future operations of Tsunami Early Warning Systems.
## 6 Tsunami Warning Systems
Tsunami Warning Systems (TWS) are based on detecting tsunamis in advance, issuing warnings on potentially destructive tsunamis and effectively preventing tsunami disasters. This requires tsunami forecasts to be made in real time, so that affected locations receive evacuation warnings in time to avoid casualties and large damage to civilian property. Hence, for prevention purposes, these systems verify the tsunami occurrence before its waves arrive at the coasts, using the features of the GNSS measurements to determine which locations would be affected, the arrival time and the estimated wave height at each of those locations, as well as to confirm whether the tsunami has actually been generated or not.
As previously mentioned, the main sources of tsunamis are earthquakes; therefore, the current systems are mainly based on the use of seismometers, rapidly detecting the location and size of potentially destructive earthquakes by recording the motion of the ground. Thus, considering the seismic parameters obtained, these systems use different tools to provide risk assessments, such as numerical simulations, decision matrices or even tsunami scenario databases processed by central processing facilities, to assess whether the detected earthquake may trigger a tsunami. For instance, numerical tsunami simulations are mainly based on the estimation of the tsunami's wave properties, or even the estimation of the initial tsunami height using long-period seismic wave records, where the vertical displacement of the rupture area is required to be calculated at the same time (Blewitt et al., 2009). For this purpose, early warning systems commonly use the MOST (Method of Splitting Tsunami) numerical model from the National Oceanic and Atmospheric Administration's (NOAA) Tsunami Warning System, which uses a finite difference scheme for the characteristic form of the shallow water wave equations, thus providing an estimation of the tsunami wave height. Similarly, the DART/WP-GITM software uses the ionospheric perturbations in order to estimate the tsunami wave properties through a tsunami parameter inversion model (Meng et al., 2015).
Owing to the tens of additional minutes that are required for the accurate calculation of the rupture area, it is difficult to issue a warning of the precise height of the next near-field tsunami early enough for residents in coastal areas to evacuate. To overcome this problem, offshore tsunamis are monitored using Global Positioning System (GPS) buoys, as they issue faster confirmation; these are linked to ocean-bottom pressure sensors and GNSS satellites in order to provide high accuracy in the measurements of the tidal level. For instance, the Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys are located 300 km away from the coasts and constantly monitor the changes in the water columns (Meinig, 2005). Nonetheless, the acquisition, installation and maintenance costs of a sufficiently dense network for such a monitoring system would require a high budget and manpower. Therefore, less expensive practical Tsunami Early Warning Systems are urgently needed. Moreover, the use of tide gauges located at the coastlines, continuously monitoring the changes in the water height, or even the use of GNSS satellites as well as satellite altimeters measuring the ocean surface height, may help in the detection of tsunamis.
Figure 7: Candidate satellite tracks at the Ionospheric Pierce Points
However, there are several uncertainties in tsunami prediction, such as inaccurate knowledge of the marine active faults, the absence of a direct match between an earthquake's size and a tsunami's severity, the estimation of the hypocentral location and its magnitude, or even the lack of information on the actual sources of the earthquake, i.e., rupture extent, fault geometry, direction of slip, etc. In addition, other anomalies unrelated to tsunami generation can be encountered which affect the ionosphere. They are mainly produced by external origins such as ionization differences due to changes caused by solar radiation activity, geomagnetic storms or even, to a lesser degree, strong meteorological processes that may affect this ionosphere layer.
For this purpose, the Tsunami Early Warning System described in this article, based on using GNSS satellites for ionosphere monitoring, overcomes most of these limitations by means of the potential integration of the VARION method into these warning systems. In particular, the possibility of incorporating this method into the Spanish Tsunami Warning System is discussed in the following section.
## 7 Spanish Tsunami Warning System
Historically, the Iberian Peninsula has suffered several tsunamis in the past, notably in the Atlantic area of the southwest of the Iberian Peninsula as can be seen in Figure 8.
The largest known historical tsunami affected the Iberian Peninsula's coast in 1755, after the devastating effects caused by an earthquake of magnitude 8.5 in Lisbon, simply known as the _Lisbon earthquake_, ravaging several Portuguese and Spanish cities and causing thousands of deaths. Its hypocenter was located offshore, southwest of the Iberian Peninsula, in the Azores-Gibraltar fault zone, from where waves of up to 10-15 meters were produced on the coast of the Gulf of Cadiz (Oliveira, 2008). On the other hand, moderate tsunamis have occurred on the Mediterranean coast, originating near the northern coast of Algeria and mainly in the Alboran Sea and in the faults in the Gulf of Cadiz, where the Spanish tsunami threat is greatest (Aniel-Quiroga, 2017).
Considering this background, the Spanish Tsunami Warning System (STWS) was implemented by the National Geographic Institute of Spain (IGN)5, being operational since November 2015. It consists of an automatic procedure that evaluates whether a tsunami may affect the Spanish coasts. In this way, in case of a tsunami threat, an alert message is sent to the National Civil Protection authorities with an estimate of the degree of such hazard at different locations along the coasts, the estimated severity of the tsunami and the time of arrival at those places. In addition, several Spanish stations collect multi-frequency data from GPS satellites, while others track multi-GNSS data from other constellations (GLONASS, Galileo and BeiDou), allowing us to better analyze the ionosphere. Moreover, the ionosonde situated in "El Arenosillo" and the analysis of the ionograms it provides, together with the geomagnetic observatories located in San Pablo, San Fernando and Güímar, will allow us to discern the sources of ionospheric disturbances not related to tsunamis, such as geomagnetic storms.
For these reasons, we propose to implement and integrate this complementary novel approach, the VARION method, into the Spanish Tsunami Warning System (STWS) in order to detect the occurrence of a tsunami before its waves reach the coasts of the Iberian Peninsula.
Figure 8: Historical tsunamis in the Iberian Peninsula. Adapted from Cantavella (2021).
It should be noted that, in order to incorporate the ionospheric anomalies generated by tsunamigenic earthquakes into the Tsunami Warning Systems, the magnitude ranges that produce disturbances in the ionosphere have been analyzed. In Le et al. (2011), the authors establish that earthquakes with an Mw magnitude lower than 6 produce less obvious ionospheric disturbances in TEC, causing small perturbations that are difficult to detect and can thus be confused with other events, such as geomagnetic storms, severe meteorological processes, etc.
Numerous studies are currently underway to improve these Tsunami Early Warning Systems, including the development of new implementations. According to Martire et al. (2023), the GUARDIAN (GNSS Upper Atmospheric Real-time Disaster Information and Alert Network) system is ionospheric monitoring software operating in near-real time (NRT) through multi-GNSS TEC measurements, in order to augment early natural disaster warnings. Related to these initiatives, close international collaborations are maintained with the aim of disaster risk reduction. For instance, there is a special commission within the International Union of Geodesy and Geophysics called Geophysical Risk and Sustainability (IUGG GeoRisk)6 with the aim of researching geophysical hazards and disaster preparedness. Secondly, the International Association of Geodesy (IAG) includes the GGOS Geohazards Focus Group7 for enhancing the GNSS-based Tsunami Early Warning Systems (GTEWS), and many member countries of the United Nations collaborate under the Sendai Framework for Disaster Risk Reduction with the goal of upgrading disaster preparedness.
Footnote 6: [https://iugg.org/associations-commissions/commissions/](https://iugg.org/associations-commissions/commissions/)
Footnote 7: [https://ggos.org/about/org/fa/geohazards/](https://ggos.org/about/org/fa/geohazards/)
## 8 Conclusions
In this paper we analyzed the perturbations in the ionosphere caused by a tsunami through Total Electron Content (TEC) measurements, steadily registered by permanent GNSS stations. For this tsunami detection, we implemented a real-time approach termed the VARION (Variometric Approach for Real-time Ionosphere Observation) method, characterized by its independence of cycle slips and a very low computational time. This method provides estimations of the Slant TEC (STEC) time series during the time interval in which the tsunami occurred, as monitored by those GNSS satellites whose Ionospheric Pierce Point (IPP) coordinates were closest to the source.
Its demonstration has been performed on two real cases of tsunami: one recorded in the Tohoku region, caused by an earthquake, and the other occurring in Tonga due to a volcanic eruption. The analysis of the numerical results plotted in Figures 5 and 6 showed the expected variations in the Slant TEC time series from the moment of rupture, thus reflecting the consequent disturbances in the ionosphere caused by these tsunamis.
This approach has a low implementation cost and reduces the confirmation time of events, since the method takes advantage of existing infrastructure such as the GNSS-derived data and the stations available in the area for monitoring the ionospheric disturbances. Thus, combining the GNSS satellite integration and operation capabilities with GNSS receiver development, we have presented this novel method to be integrated into a Tsunami Warning System (TWS), and more specifically into the Spanish Tsunami Warning System (STWS), as an auxiliary technique that complements the existing methodologies in the verification of potential tsunami hazards for the Iberian Peninsula.
For future lines of work, the automation of the VARION algorithm using GNSS broadcast real-time data, combined with real-time data from different sources such as tide gauges, seismometers, buoys or even GNSS receivers, is considered. A large amount of data is available at no additional cost through the multiple satellite constellations orbiting the Earth and the number of ground-based GNSS receivers placed worldwide. Thus, once an event is detected at a specific location, such a system would begin processing the Slant TEC from real-time data using the multiple available stations located near the tsunami source, in search of disturbed ionospheric signals minutes after the atmospheric wave reaches the ionosphere. Its continuous analysis also considers the stations that have available data in their RINEX files; the orbit positions and clock offsets of the GPS satellites; and finally the selection of station-satellite pairs whose IPP coordinates are within the previously established threshold distances from the source. The above-mentioned reasons make GNSS-based real-time monitoring of the ionosphere an interesting enhancement for natural hazard early warning systems. |
2303.15379 | Online $k$-Median with Consistent Clusters | We consider the online $k$-median clustering problem in which $n$ points
arrive online and must be irrevocably assigned to a cluster on arrival. As
there are lower bound instances that show that an online algorithm cannot
achieve a competitive ratio that is a function of $n$ and $k$, we consider a
beyond worst-case analysis model in which the algorithm is provided a priori
with a predicted budget $B$ that upper bounds the optimal objective value. We
give an algorithm that achieves a competitive ratio that is exponential in
the number $k$ of clusters, and show that the competitive ratio of every
algorithm must be linear in $k$. To the best of our knowledge this is the first
investigation in the literature that considers cluster consistency using
competitive analysis. | Benjamin Moseley, Heather Newman, Kirk Pruhs | 2023-03-27T16:53:04Z | http://arxiv.org/abs/2303.15379v1 | # Online \(k\)-Median with Consistent Clusters+
###### Abstract
We consider the online \(k\)-median clustering problem in which \(n\) points arrive online and must be irrevocably assigned to a cluster on arrival. As there are lower bound instances that show that an online algorithm cannot achieve a competitive ratio that is a function of \(n\) and \(k\), we consider a beyond worst-case analysis model in which the algorithm is provided a priori with a predicted budget \(B\) that upper bounds the optimal objective value. We give an algorithm that achieves a competitive ratio that is exponential in the number \(k\) of clusters, and show that the competitive ratio of every algorithm must be linear in \(k\). To the best of our knowledge this is the first investigation in the literature that considers cluster consistency using competitive analysis.
## 1 Introduction
Clustering problems, such as \(k\)-means clustering and \(k\)-median clustering, are a classic genre of learning / data mining problems [2]. Typically the input consists of a collection \(X=\{x_{1},\ldots,x_{n}\}\) of points in some metric space \(\mathcal{M}\) (typically \(\Re^{d}\) with the 1-norm or 2-norm) and a positive integer \(k\). The output for a **center-based** clustering problem is a collection \(c_{1},\ldots,c_{k}\) of \(k\) points from \(X\), called centers, that succinctly summarize the data points. The implicit cluster \(C_{i}\) corresponding to the center \(c_{i}\) is the collection of points in \(X\) whose closest center is \(c_{i}\), that is \(C_{i}=\{x_{j}\mid\arg\min_{h\in\llbracket k\rrbracket}d(x_{j},c_{h})=i\}\), where \(d(\cdot,\cdot)\) is the distance function for the metric space. The output for a **cluster-based** clustering problem is a partition \(C_{1},\ldots C_{k}\) of \(X\) into \(k\) parts, called clusters. The implicit center of each cluster \(C_{i}\) is then \(c_{i}=\arg\min_{x_{h}\in C_{i}}\sum_{x_{j}\in C_{i}}d(x_{h},x_{j})\). For both center-based clustering and cluster-based clustering, the objective is to minimize the cost of the clustering. This paper considers the \(k\)-median objective which is the aggregate distance from each point to the center of its cluster, that is \(\sum_{i=1}^{k}\sum_{x_{j}\in C_{i}}d(x_{j},c_{i})\).
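To make the two objectives concrete, here is a small sketch of both cost functions; the function names are ours and `d` stands for an arbitrary metric.

```python
def center_based_cost(points, centers, d):
    """Center-based view: each point is charged its distance to the
    nearest center (the implicit clusters C_i)."""
    return sum(min(d(x, c) for c in centers) for x in points)

def cluster_based_cost(points, labels, d):
    """Cluster-based view: each cluster is charged through its implicit
    center, the member minimizing the aggregate distance to the cluster."""
    cost = 0.0
    for lbl in set(labels):
        cluster = [x for x, l in zip(points, labels) if l == lbl]
        cost += min(sum(d(c, x) for x in cluster) for c in cluster)
    return cost
```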
Here we consider applications where the data points in \(X\) arrive online over time. In a center-based clustering problem, the online algorithm maintains a collection of centers. In a cluster-based problem, the online algorithm needs to assign the data points to a cluster when they arrive, that is each point \(x_{j}\) needs to be assigned a label \(\ell_{j}\in\llbracket k\rrbracket\) when \(x_{j}\) arrives.
An application of online clustering given by [14] is the task of clustering news articles that arrive online, for example at Yahoo news or Google news. We will refer to these outlets as the news providers. The news provider selects some (approximately) fixed number \(k\) of articles to feature on the news homepage, and has a "view complete coverage" link next to each article to see all the news stories on this topic. The problem of selecting the best \(k\) articles that
summarize all current news articles is better modeled as a center-based clustering. The problem of partitioning all news articles into clusters of similar articles is better modeled as a cluster-based clustering. Other applications can be found in [13, 11], but we will use the news story clustering application as our running canonical example.
A line of research [9, 8, 11, 13] within online clustering goes by moniker of consistent clustering. Research on consistent clustering studies the trade-offs and relationship between the following objectives:
* **Maximizing the Quality of the Clustering:** One seeks the cost of the clustering to be small. The most common metric to measure the quality of a solution is the ratio of the cost of this solution to the cost of the optimal solution. The most common metric to measure the quality of an online algorithm is the competitive ratio, which is the maximum (over all inputs) of the ratio of the cost of the online algorithm's solution to the optimal cost.
* **Maximizing Consistency:** Ideally one would like the centers in a center-based problem, or the clusters in a cluster-based problem, to be consistent over time. That is, they should change as little as possible. So for example, the news provider doesn't want the clusters to completely change every time a new news article is written.
### Prior Work on Consistent Clustering
\(k\)-median clustering is NP-hard, but constant factor approximation polynomial-time algorithms are known [5, 10, 1, 12, 3].
All prior algorithmic research on consistent clustering that we are aware of [9, 8, 11, 13] is center-based. That is, the online algorithm explicitly maintains a collection of centers, and the clustering is implicit. That is, each point is assumed to be associated with the closest center, but there are no restrictions on how often points' associated centers can change.
The first paper [13] in this line of research gave a lower bound that showed that one can not simultaneously have both high quality and maximum consistency. That is, they showed that if a center cannot be changed once it is established, then there is no algorithm whose competitive ratio can be bounded by any function of \(n\) and \(k\).
Thus various "beyond worst-case analysis" (see [17]) approaches have been used in the literature to attempt to circumvent the obstacle presented by this lower bound. One approach is to use bi-criteria analysis or _resource augmentation_ analysis. This analysis allows the online algorithm to use more than \(k\) centers, and then compares the cost of the algorithm's clustering to the optimal one using \(k\) centers [13]. A second approach is to allow the algorithm _recourse_, which in this setting means allowing the algorithm to change the centers (or clusters) a small number of times [11, 8, 9]. Another approach is to consider learning augmented algorithms, which assumes that the algorithm receives some advice/information a priori about the input. For example, in the news application, the news provider presumably has prior data that it could use to predict with some reasonable accuracy some properties of the input.
[13] gives a randomized algorithm for \(k\)-means clustering and analyzes this algorithm using _resource augmentation_ analysis. [13] shows that the expected number of clusters/centers used by their algorithm is \(O(k\log n\log(n\Delta))\) and at all times the expected cost of the clustering using these centers is at most \(O(\log n)\) times the optimal cost using \(k\) clusters. Here \(\Delta\) is the aspect ratio of the data points, which is the ratio between the distance between the furthest pair of points and the distance between the closest pair of points. The algorithm leverages a randomized online algorithm for facility location from [15] to decide whether to create a new center at a newly arriving data point. Once a center is established, it is maintained throughout the course of the algorithm. Finally, [13] gives a randomized algorithm that requires a priori knowledge of \(n\) and a lower bound of the optimal with \(k\) centers, and that maintains a collection of \(O(k\log n\log\alpha)\) centers in expectation that has expected cost \(O(1)\) times the optimal cost with \(k\) centers. Here \(\alpha\) is the ratio between the actual optimal cost with \(k\) centers and the lower bound provided a priori to the algorithm.
[11] give a randomized algorithm for \(k\)-means or \(k\)-median clustering that uses _recourse_.
The algorithm maintains the invariant that the cost of the current centers is always \(O(1)\)-competitive with the optimal clustering of the data points seen _to date_. To maintain this invariant, the expected number of cluster center changes used by the algorithm is \(O(k^{2}\log^{4}(n\Delta))\). [11] show a similar lower bound, that is, they show that every algorithm requires \(\Omega(k\log_{c}\frac{\Delta}{k})\) center changes to maintain \(O(c)\)-competitiveness. Further, [11] show that it is possible to maintain \(O(1)\)-competitiveness with \(O(k\log^{2}(n\Delta))\) center changes, but this is just an existential result, and no algorithm to achieve this was given. In a followup paper, [8] gave a randomized algorithm that maintains \(O(1)\)-competitiveness with \(O(k\operatorname{polylog}(n\Delta))\) cluster changes. The results in [11] were extended to \(k\)-median clustering with outliers (so one could opt to not cluster a pre-specified number of points) in [9]. [11] also observes that for \(k\)-center clustering an algorithm from [6] can be used to maintain an \(O(1)\)-competitive clustering with \(O(k\log(n\Delta))\) center changes.
While not directly germane for the work in this paper, there is also research on online clustering in the streaming setting, where the emphasis is more on the algorithm using a small amount of memory, or quickly responding to the arrival of a new data point (e.g. [4, 7]).
### Our Contribution
Our initial research goal was to investigate consistent clustering for cluster-based problems (recall that all the past algorithmic consistent clustering publications that we are aware of focus on center-based clustering). We are interested in applications where the focus is on explicitly maintaining consistent clusters (and not necessarily on maintaining consistent centers). The application where Google or Yahoo news is trying to maintain collections of similar news articles is an example of such an application. Note that even the algorithms from [13] that are perfectly consistent from a center perspective, in that once a center is established it persists until the end of the algorithm, are not necessarily consistent from a cluster perspective in that a data point could change clusters every time a new center is established. All one can say (at least naively) about the cluster consistency of the algorithms from [13] is that no data point changes clusters more than \(O(k\log n\log(n\Delta))\) times.
Specifically, our research goal is to determine whether some reasonable competitiveness with respect to cost can be achieved if each data point has to be irrevocably assigned a cluster, or equivalently a label, when it arrives.
The lower bound from [13] implies that we will need to take some beyond worst-case analysis approach as otherwise there is no algorithm with bounded approximation ratio. For reasons we explain in Section 2, we will take a learning augmented algorithms approach, and assume that the algorithm is provided a priori with an estimated upper bound \(B\) of the cost of the final optimal clustering. Our algorithms will then try to maintain a clustering of low cost relative to \(B\), _not_ the current optimal cost for the points that have arrived to date. Thus we will say an algorithm is \(c\)-competitive with respect to the budget if the algorithm's cost is at most \(c\cdot B\) on instances where the optimal cost is at most \(B\) (after all points arrive).
We develop a lower bound in Section B showing that the competitiveness of any algorithm relative to the budget must be \(\Omega(k)\). In light of this observation, a natural question is whether one can find an algorithm whose competitiveness with respect to the budget is \(O(f(k))\) for some function \(f\) that depends only on the number of centers \(k\), and not on the number of points \(n\). Such an algorithm is constant competitive when \(k\) is fixed. We answer this in the affirmative by giving a (polynomial-time deterministic) online algorithm that is \(O(k^{5}3^{k})\)-competitive with respect to the budget. Thus, we know the competitive ratio must depend on \(k\) by the lower bound, and we give an upper bound that only depends on \(k\). Note that in most clustering applications, while the number of points \(n\) to be clustered may be large, the number \(k\) of clusters is typically a modestly small constant, say 10 for example [2].
## 2 Technical Overview
To understand the motivation for the learning-augmented approach, let us consider the lower bound instance from [13]. It is sufficient to assume \(k=2\). The first point \(x_{1}\) arrives and is assigned some irrevocable label. Then assume the second data point \(x_{2}\) arrives a unit distance from \(x_{1}\). If the online algorithm assigns \(x_{2}\) the same label as \(x_{1}\) then the cost of the algorithm's clustering is \(1\), and the optimal cost is \(0\) (which is the cost if each of these data points were given a different label). This results in the algorithm having unbounded competitiveness. In contrast, if the algorithm gave \(x_{2}\) a different label than \(x_{1}\) then the third data point \(x_{3}\) could arrive very far away. In which case, the algorithm's clustering would necessarily have very high cost (as \(x_{3}\)'s label would have to be either the same as \(x_{1}\)'s or the same as \(x_{2}\)'s). However, the optimal clustering would have cost \(1\) (by giving \(x_{1}\) and \(x_{2}\) the same label and giving \(x_{3}\) the remaining label). Again, this results in competitiveness that can only be bounded by the aspect ratio of the metric space (much larger than \(n\) or \(k\)).
Intuitively, the dilemma faced by the online algorithm when \(x_{2}\) arrives is that it does not know whether the distance between \(x_{1}\) and \(x_{2}\) is small or large. Equipped with an estimate of the optimal cost \(B\), the algorithm could resolve this dilemma by giving \(x_{2}\) a different label than \(x_{1}\) if their distance is larger than \(B\) and the same label otherwise. Thus we will assume that the algorithm is provided a priori with a budget \(B\) that is an upper bound on the optimal cost (and ideally is not too much larger than the optimal cost).
To better understand the design of our algorithm, it is useful to consider some instances that illustrate properties that a competitive algorithm must have.
The first observation is that any reasonably competitive algorithm can never use \(t+1\) labels if it is the case that the points to date could be clustered with cost at most \(B\) using at most \(t\) labels. If the algorithm ever allowed this to happen, it could be that the next \(k-t\) data points could arrive very far from the previous data points, and very far from each other. Thus after these data points arrive, the algorithm's cost would increase to something like the diameter of the metric space, while there would still be a clustering of cost at most \(B\), since the clustering that used \(t\) labels could be extended with no additional cost by giving each new \(k-t\) data points a new label.
In light of this last observation, a natural greedy algorithm would maintain the invariant that the number of labels it uses is always equal to the minimum number of labels necessary for a clustering of cost at most \(B\), and then give each new data point the label that minimizes the increase in cost. To see why such an algorithm (and other similar algorithms) can have unbounded cost even when \(k=2\), and the metric space is the real line, consider the following instance (see Figure 1). Let \(\alpha\) be an arbitrarily large positive integer. We construct an example in which the budget \(B=2\). The first point arrives at location \(-2\). Say the algorithm gives this point the label blue. The next point arrives at location \(1\). Now, we know that any offline clustering with cost at most \(2\) must use at least \(2\) clusters. So the greedy algorithm would give this new point a second label, say green, as that would minimize the increase in the objective. Then a total of \(\alpha\) additional points arrive at location \(1\). The algorithm labels these points green. Then \(\alpha\) points arrive at the origin \(0\). It is still the case that only \(2\) clusters are required in order to have cost at most \(2\), since we may place the points at location \(-2\) and the origin in one cluster, and the points at location \(1\) in the other cluster. However the algorithm would assign each point arriving at the origin the label green, since this increases the objective by at most \(1\) while assigning such a point the label blue increases the objective by \(2\). Yet, this results in a solution for the algorithm in which the contribution of green points towards the objective is \(\alpha\).
Figure 1: An example in which the natural greedy algorithm incurs arbitrarily large cost.
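The instance of Figure 1 can be replayed in a few lines. The following sketch (with hypothetical helper names) simulates the natural greedy algorithm on this instance and confirms that the cost of the green cluster grows linearly in \(\alpha\) while the budget stays at \(B=2\).

```python
def greedy_replay(alpha):
    """Replay the Figure 1 instance (real line, k = 2, budget B = 2):
    greedy gives each arrival the label minimizing the cost increase."""
    blue, green = [-2.0], [1.0]   # x1 -> blue; the point at 1 opens green
    green += [1.0] * alpha        # alpha more arrivals at location 1
    for _ in range(alpha):        # alpha arrivals at the origin:
        # labeling green costs +1 (center stays at 1), blue costs +2,
        # so greedy labels the point green
        green.append(0.0)
    # cost of the green cluster with its best center
    return min(sum(abs(x - c) for x in green) for c in set(green))

assert greedy_replay(1000) == 1000  # grows like alpha, versus B = 2
```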
Upon reflection of this lower bound instance for the natural greedy algorithm, there appear to us to be two natural hypotheses as to the "mistake" that this algorithm is making, and corresponding two natural paths towards a remedy:
* One hypothesis is that greedy assignment is a mistake, and then the natural remedy would be to use some label assignment rule that is more sophisticated than greedy.
* Another hypothesis is that the algorithm was too hasty in using a new label. Thus the natural remedy would then be to delay using a new label until it is more clear as to a region where arriving data points should be given this new label. Note in the example in Figure 1 that if the algorithm had waited until some reasonable number of data points had arrived at the origin before using the second label, then the algorithm might reasonably have been able to see that the right choice was to give the remaining points arriving at the origin the second label of green.
Here we primarily adopt the second remedy/approach (although we also consider an alternate greedy assignment policy). To apply this remedy we must address the following two questions:
* Under what conditions can the algorithm justify the use of an additional label, say going from \(t-1\) labels to \(t\) labels?
* And when this can be justified, how should we modify our prior partition of space into \(t-1\) parts into a partition into \(t\) parts?
At a high level our answer to the first question is that we do not use \(t\) labels until there exist \(t\) well-separated points \(x_{\alpha(1)},\ldots,x_{\alpha(t)}\). We will say that a collection of points \(x_{\alpha(1)},\ldots,x_{\alpha(t)}\) from a collection \(S\) of points is \(\beta\)**-well-separated** with respect to \(w_{S}\) if
\[\min\{w_{S}(x_{\alpha(i)}),w_{S}(x_{\alpha(j)})\}\cdot d(x_{\alpha(i)},x_{ \alpha(j)})\geq\beta\cdot B\ \ \mbox{ for all }i,j\in[\![t]\!],i\neq j\] ( \[\star\] )
Here \(w_{S}(x_{h})\) is what we call the **natural weight** of point \(x_{h}\) in \(S\), which is the maximum number of points in \(S\) whose distances to \(x_{h}\) sum to at most \(2B\). (If \(S\) is clear, we may drop it from the notation). Intuitively, if we have \(t\) well-separated points then we know that not only must any near-optimal solution use \(t\) labels, but such a solution cannot combine the points near \(x_{\alpha(i)}\) and the points near \(x_{\alpha(j)}\) into a single cluster (assuming \(i\neq j\)).
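Both the natural weight and the well-separation test (\(\star\)) are directly computable; the sketch below is a naive per-point illustration, with `d` an arbitrary metric and the function names ours.

```python
def natural_weight(x, S, d, B):
    """w_S(x): the maximum number of points of S whose distances to x
    sum to at most 2B (greedily take the nearest points first)."""
    total, w = 0.0, 0
    for dist in sorted(d(x, s) for s in S):
        if total + dist > 2 * B:
            break
        total, w = total + dist, w + 1
    return w

def beta_well_separated(pts, S, d, B, beta):
    """Check condition (star): min(w(x), w(y)) * d(x, y) >= beta * B
    for every pair of distinct points in pts."""
    w = {p: natural_weight(p, S, d, B) for p in pts}
    return all(min(w[x], w[y]) * d(x, y) >= beta * B
               for i, x in enumerate(pts) for y in pts[i + 1:])
```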
To drill down a bit further, we maintain a collection of points \(p_{1},\ldots,p_{t}\) from the online stream \(X\) which we call _pivots_. When a new point arrives, it is assigned the label \(i\) of the pivot \(p_{i}\) nearest to it (so we still maintain a form of greedy label assignment). While one might reasonably think that the pivots are intuitively centers for the clusters, this intuition is only partially correct. In fact there are scenarios where some pivots are in fact poor centers for the corresponding cluster. What is critical is that the pivots are located so as to guarantee that using greedy assignment in the future results in a relatively low cost assignment. In order for our cost analysis to be tractable we want the algorithm to maintain the following invariants:
* Each pivot \(p_{i}\) is reasonably located in a region where it would not be too costly to assign points arriving in this region the label \(i\).
* The pivots are well-separated (for some appropriate choice of \(\beta\)).1 Footnote 1: \(\beta\) will have to both be initialized sufficiently large and also decrease as the number of pivots increases. We show this is necessary in Appendix E.
* There is no other point that is well-separated from the pivots.
* The locations of the pivots should not move very often.
Note that some of these invariants can intuitively be in opposition to each other, which seemingly requires that the design of the algorithm is a bit complicated, as there are several different scenarios where maintaining this invariant requires different actions. But we will now try to give a brief, definitely over-simplified, overview of how the algorithm maintains these invariants. As the pivots are not necessarily good centers (for example pivot \(p_{1}\) at location \(-2\) as the points arrive at location \(1\) in Figure 1), the algorithm also maintains a collection \(c_{1},\ldots,c_{t}\) of estimated
centers for the \(t\) labels that have been used to date. The pivots and the estimated centers are updated in two scenarios. The first scenario is when there is an applicable Add Operation, and the second is when there is an applicable Exchange Operation. In both scenarios the estimated centers are recomputed from the previously computed estimated centers and the newly computed (near) optimal centers. Then a sequence of Add and Exchange Operations are executed.
An Add Operation is applicable when there is a point \(x_{\alpha}\) that is well-separated from the current pivots. Intuitively, this means a new label can be justified, but the implementation requires the consideration of several possible scenarios. In the simplest scenario \(x_{\alpha}\) is near a cluster of new points that are all far from previous points, and the pivot \(p_{t+1}\) for the new label \((t+1)\) is set to \(x_{\alpha}\). In some scenarios an old pivot \(p_{i}\) (\(i\leq t\)) is set to \(x_{\alpha}\) and \(p_{t+1}\) is set to \(p_{i}\) (so the new pivot inherits an old label and an old pivot location gets the new label). Intuitively, this occurs when the estimated center \(c_{i}\) for cluster \(i\) is near the location of \(x_{\alpha}\). Also there are scenarios where an old pivot \(p_{i}\) (\(i\leq t\)) is set to the estimated center \(c_{i}\) of cluster \(i\) and \(p_{t+1}\) is set to \(p_{i}\) (so again the new pivot inherits an old label and an old pivot location gets the new label). Finally, there are scenarios where two new pivots are created.2
Footnote 2: This is needed to avoid label conflicts. See Appendix F.
An Exchange Operation is applicable when there are two points \(x_{\alpha}\) and \(x_{\gamma}\) near a pivot \(p_{j}\) that are well-separated from each other and the other pivots (besides \(p_{j}\)). So intuitively the cluster of points labeled \(j\) appears to be splitting into two clusters. In the simplest scenario the location of pivot \(p_{j}\) is set to the location of one of \(x_{\alpha}\) or \(x_{\gamma}\), and the location of the new pivot \(p_{t+1}\) is set to the location of the other one. This scenario occurs in the instance depicted in Figure 1. The first pivot \(p_{1}\) is initially set to location \(-2\). The points arriving at location \(1\) would all be assigned the label \(1\) (blue) as there is no point well-separated from \(p_{1}\) (the points located at \(1\) are not separated from \(p_{1}\) because the points at \(p_{1}\) can be cheaply moved to location \(1\)). When enough points have arrived at the origin, the points \(x_{\alpha}=0\) and \(x_{\gamma}=1\) are near \(p_{1}\) (because the point at \(p_{1}\) can be cheaply moved to either \(x_{\alpha}\) or \(x_{\gamma}\)), and are well-separated from each other and the pivots other than \(p_{1}\). Thus our algorithm would locate \(p_{1}\) at \(1\) and \(p_{2}\) at the origin. While this scenario gives some intuition, there are several other more complicated scenarios.
The analysis is broken into two parts. The first part is to show that certain structural invariants are maintained through carefully engineered algorithmic choices. The well-separated invariant is used to show that we do not use more than \(k\) labels and for bounding the cost. The second part of the analysis is to bound the cost (Section 6). We inductively assume that the cost of points that were labelled using the first \(t-1\) labels is bounded. Then, we show that the points that were labelled when exactly \(t\) labels were in use are clustered near optimally under our greedy procedure. The key challenge is showing that the points given the same label have, in combination, bounded cost.
## 3 Preliminaries
First we establish a lower bound depending on \(k\) for the competitive ratio of our algorithm.
**Theorem 1**.: _The competitiveness versus the budget of any deterministic algorithm for cluster-centric consistent clustering is \(\Omega(k)\)._
The proof of Theorem 1 is in Appendix B.
**Assumptions.** We only assume that the online stream \(X\) lies in a metric space. We allow multiple points to arrive at the same location; each duplicated point still has to pay the cost to its nearest center. Whenever we refer to centers, we enforce that these come from \(X\) itself. (This will be used to ensure our algorithm is well-defined.)
We state our results assuming \(B=\mathsf{OPT}\), but all results still hold by replacing \(\mathsf{OPT}\) with \(B\), as long as \(\mathsf{OPT}\leq B\). Our algorithm will need to compute offline \(k\)-median clusterings; to do this in poly-time, use any \(c\)-approximation algorithm. Thus, replace \(B\) with \(c\cdot B\) to run our online algorithm in poly-time.
**Terminology.** Recall the term _natural weight_ from Section 2. We will always take \(S\) to be some prefix of \(X\). As such, we may refer to the natural weights at a particular point in time to mean the natural weights in \(S\), where \(S\) is the prefix at that time.
Note that for \(p\in S\), \(w_{S}(p)\geq 1\), since \(d(p,p)=0\), and that \(w_{S}(p)\) can only increase over time as \(S\) enlarges.
Recall the term \(\beta\)_-well-separated_ from Section 2. Related terms we will use are the following: We will say \(p\) is \(\beta\)**-well-separated from** a set of points \(\{x_{\alpha(1)},\ldots,x_{\alpha(m)}\}\) w.r.t. \(w\) if \(\min\{w(p),w(x_{\alpha(i)})\}\cdot d(p,x_{\alpha(i)})\geq\beta\cdot\mathsf{OPT}\) for all \(i\in[m]\). If \(m=2\) and the well-separated condition (\(\star\)) is not satisfied, we say the pair of points is \(\beta\)-attached, or that one is \(\beta\)-attached to the other, w.r.t. \(w\).
## 4 Algorithm Description
The algorithm sees an online sequence \(X=\{x_{1},x_{2},\ldots x_{n}\}\) of points. Let \(X_{i}=\{x_{1},x_{2},\ldots x_{i}\}\) and \(w_{i}\) be shorthand for \(w_{X_{i}}\). The algorithm maintains the following:
* a collection of previously arriving points \(p_{1},\ldots,p_{t}\) that have been designated as _pivots_, where \(t\) is the number of labels used by the algorithm to date and pivot \(p_{j}\) is associated with label \(j\),
* a separation parameter \(\beta_{t}=8\cdot 3^{k-t+2}\), and
* a collection of previously arriving points \(c_{1},\ldots,c_{T}\) that have been designated as estimated centers, where \(T\leq t\).
Initially the first point \(x_{1}\) is given the label \(1\), the first pivot \(p_{1}\) is set to \(x_{1}\), and the collection of estimated centers is empty. The algorithm handles the arrival of each subsequent point \(x_{i}\) in the following manner. The subroutines are described in Sections 4.1, 4.2, 4.3.
1. **If** there is an applicable Add or Exchange Operation **then** compute new Estimated Centers
   1. **Repeat** while there is an applicable Add Operation or Exchange Operation
      1. **If** there is an applicable Add Operation **then** apply an arbitrary applicable one
      2. **Else** apply an arbitrary applicable Exchange Operation
2. Give \(x_{i}\) the label \(j\), where \(p_{j}\) is the nearest pivot to \(x_{i}\)
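The per-arrival logic can be summarized by the following skeleton. This is only a sketch: the four helpers stand in for the applicability tests and subroutines specified in Sections 4.1-4.3, and `state` (holding the stream so far, the pivots, the estimated centers and the metric) is our own bookkeeping convention.

```python
def process_arrival(x_new, state):
    """Handle one arriving point: update pivots if warranted, then
    greedily assign the label of the nearest pivot (step 2)."""
    state.X.append(x_new)
    if add_applicable(state) or exchange_applicable(state):
        recompute_estimated_centers(state)       # Section 4.1
        while add_applicable(state) or exchange_applicable(state):
            if add_applicable(state):
                apply_add_operation(state)       # Section 4.2
            else:
                apply_exchange_operation(state)  # Section 4.3
    # state.pivots maps label j -> pivot location p_j
    j = min(state.pivots, key=lambda lbl: state.d(x_new, state.pivots[lbl]))
    state.labels.append(j)
    return j
```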
Let \(T\) be the number of pivots during an execution of the outer loop (1). The Estimated Centers subroutine computes \(T\) new estimated centers \(c_{1},\ldots,c_{T}\) from the current pivots \(p_{1},\ldots p_{T}\), the points \(X_{i-1}\), and the current estimated centers \(c_{1},\ldots c_{s}\) (\(s<T\)).
Let \(t\geq T\) be the number of pivots during an execution of the inner loop (a). The Add Operation subroutine is applicable if there is a point \(x_{\alpha}\in X_{i}\) such that \(x_{\alpha}\) is \(\beta_{t+1}\)-well-separated from the current pivots \(p_{1},\ldots,p_{t}\) with respect to the weight function \(w_{i}\). The execution of the Add Operation subroutine depends on \(x_{\alpha}\), \(X_{i}\), the current pivots \(p_{1},\ldots,p_{t}\), and the current estimated centers \(c_{1},\ldots,c_{T}\). The Add Operation subroutine adds one or two new pivots, and possibly changes the location of up to two previous pivots.
The Exchange Operation subroutine is applicable if there exist two points \(x_{\alpha}\) and \(x_{\gamma}\) in \(X_{i}\), and a pivot \(p_{j}\), such that:
* \(x_{\alpha}\) and \(x_{\gamma}\) are each \(\beta_{t+1}\)-attached to \(p_{j}\) w.r.t. \(w_{i}\),
* \(w_{i}(p_{j})\leq w_{i}(x_{\alpha})\),
* \(w_{i}(p_{j})\leq w_{i}(x_{\gamma})\), and
* The collection of the \(t+1\) points, consisting of \(x_{\alpha}\), \(x_{\gamma}\), and the pivots other than \(p_{j}\), are \(\beta_{t+1}\)-well-separated w.r.t. \(w_{i}\).
The execution of the Exchange Operation subroutine depends on \(x_{\alpha}\), \(x_{\gamma}\), \(X_{i}\), the current pivots \(p_{1},\ldots,p_{t}\), and the current estimated centers \(c_{1},\ldots,c_{T}\). The Exchange Operation adds one or two new pivots, and possibly changes the location of one previous pivot.
### The Estimated Center Subroutine
Choose \(y_{1},\ldots,y_{k}\in X_{i-1}\) to be an optimal collection of \(k\) centers for the points in \(X_{i-1}\). (By Fact 1, there exists a collection that gives a clustering of cost at most 2OPT.) Define \(p(y_{h})\) to be the pivot with the minimum weighted distance to \(y_{h}\), that is,
\[p(y_{h})=\operatorname*{arg\,min}_{p_{j}}\left(\min\{w_{i-1}(p_{j}),w_{i-1}(y_ {h})\}\cdot d(p_{j},y_{h})\right)\] ( \[\dagger\] )
For a pivot \(p_{j}\), let \(\delta(p_{j})\) be a collection of optimal centers and estimated centers defined as follows: the current estimated center \(c_{j}\) (when it exists) is in \(\delta(p_{j})\) if \(w_{i-1}(c_{j})>w_{i-1}(p_{j})\) and \(c_{j}\) is \(\beta_{t+1}\)-attached to \(p_{j}\) w.r.t. \(w_{i-1}\), and \(\delta(p_{j})\) contains an optimal center \(y_{h}\) if \(w_{i-1}(y_{h})>w_{i-1}(p(y_{h}))\) for \(p(y_{h})=p_{j}\). For each \(j\in[T]\) we define the new **estimated center**\(c_{j}\) as follows: If \(w_{i-1}(p_{j})\geq\max_{p\in\delta(p_{j})}w_{i-1}(p)\) then \(c_{j}=p_{j}\), else
\[c_{j}=\operatorname*{arg\,max}_{p\in\delta(p_{j})}w_{i-1}(p)\] ( \[\ddagger\] )
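Putting (\(\dagger\)) and (\(\ddagger\)) together, the subroutine admits a compact sketch. Below, `pivots` maps labels to pivot locations, `old_centers` holds the previous estimated centers, `w` the natural weights \(w_{i-1}\), and `beta` plays the role of \(\beta_{t+1}\); these names, and the representation of attachment as the negation of (\(\star\)), are our own conventions.

```python
def estimated_centers(pivots, opt_centers, old_centers, w, d, B, beta):
    """Section 4.1 sketch: route each optimal center y to p(y) per (dagger),
    collect in delta(p_j) the points heavier than p_j that are tied to it,
    and pick the heaviest as the new c_j per (ddagger)."""
    delta = {j: [] for j in pivots}
    for y in opt_centers:
        j = min(pivots, key=lambda i: min(w[pivots[i]], w[y]) * d(pivots[i], y))
        if w[y] > w[pivots[j]]:
            delta[j].append(y)
    for j, c_old in old_centers.items():
        p = pivots[j]
        attached = min(w[c_old], w[p]) * d(c_old, p) < beta * B
        if w[c_old] > w[p] and attached:
            delta[j].append(c_old)
    return {j: (pivots[j]
                if not delta[j] or w[pivots[j]] >= max(w[c] for c in delta[j])
                else max(delta[j], key=lambda c: w[c]))
            for j in pivots}
```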
### The Add Operation Subroutine
For shorthand, let \(w_{t}=w_{i-1}\) when \(t=T\) and \(w_{t}=w_{i}\) when \(t>T\).3 The description of the Add Operation subroutine is then:
Footnote 3: We are overloading subscripts here for ease. We could instead write \(v_{t}\), but we retain \(w\) to recall weights.
1. **If** there is an estimated center \(c_{j}\) that is \(\beta_{t+1}\)-well-separated from \(p_{1},\ldots,p_{t}\) w.r.t. \(w_{i}\) then set \(p_{t+1}=p_{j}\) and set \(p_{j}=c_{j}\).
2. **Else if** it is the case that for every estimated center \(c_{j}\) that is \(\beta_{t+2}\)-attached to \(x_{\alpha}\) w.r.t. \(w_{i}\) it is also the case that \(w_{t}(c_{j})<w_{t}(p_{j})\), then set \(p_{t+1}=x_{\alpha}\).
3. **Else if** there exists a unique estimated center \(c_{j}\) that both is \(\beta_{t+2}\)-attached to \(x_{\alpha}\) w.r.t. \(w_{i}\) and satisfies \(w_{t}(c_{j})\geq w_{t}(p_{j})\), then set \(p_{t+1}=p_{j}\) and \(p_{j}=x_{\alpha}\).
4. **Else** Let \(c_{f}\) and \(c_{g}\) be estimated centers such that each is \(\beta_{t+2}\)-attached to \(x_{\alpha}\) w.r.t. \(w_{i}\), \(w_{t}(c_{f})\geq w_{t}(p_{f})\), and \(w_{t}(c_{g})\geq w_{t}(p_{g})\). Then set \(p_{t+1}=p_{f}\), \(p_{t+2}=p_{g}\), \(p_{f}=c_{f}\), and \(p_{g}=c_{g}\).
Figure 2: **Left:** The Estimated Center subroutine. Arrows point from smaller to larger natural weights, and the small points around the larger points are the set attaining the weight of the larger point. **Right:** Case (1) of the Add Operation. Dashed lines represent well-separation. Colors except black represent labels. The pivot for label 2 (red) moves to location \(c_{2}\), while the old location for red's pivot becomes the pivot of the new label (5, brown).
### The Exchange Operation Subroutine
The description of the Exchange Operation subroutine is:
1. **If**\(j>T\) then set \(p_{j}=x_{\alpha}\) and \(p_{t+1}=x_{\gamma}\).
2. **Else if**\(w_{i}(c_{j})<w_{i}(p_{j})\) then set \(p_{j}=x_{\alpha}\) and \(p_{t+1}=x_{\gamma}\).
3. **Else if**\(c_{j}\) is \(\beta_{t+2}\)-attached to \(x_{\alpha}\) w.r.t. \(w_{i}\) then set \(p_{j}=x_{\alpha}\) and \(p_{t+1}=x_{\gamma}\).
4. **Else if**\(c_{j}\) is \(\beta_{t+2}\)-attached to \(x_{\gamma}\) w.r.t. \(w_{i}\) then set \(p_{j}=x_{\gamma}\) and \(p_{t+1}=x_{\alpha}\).
5. **Else** set \(p_{t+1}=x_{\alpha}\), \(p_{t+2}=x_{\gamma}\), and \(p_{j}=c_{j}\).
## 5 Algorithm Guarantees and Invariants
We will establish two main guarantees for our algorithm and a set of invariants the algorithm will maintain to establish these guarantees. The most challenging and interesting algorithmic property is that the algorithm has bounded cost. The second is that the algorithm is feasible. The theorem below gives the cost guarantees; its proof is in Section 6. We state our results assuming \(B=\mathsf{OPT}\), but all results still hold by replacing \(\mathsf{OPT}\) with \(B\), as long as \(\mathsf{OPT}\leqslant B\).
**Theorem 2**.: _The cost of the algorithm is \(O(k^{5}\cdot 3^{k}\cdot\mathsf{OPT})\)._
The next theorem states that the algorithm never uses more than \(k\) labels and therefore produces a feasible solution. The proof of this theorem is in Section 5.3.
**Theorem 3**.: _The algorithm uses at most \(k\) labels._
### Notation and Definitions
We now define some notation and definitions used in the analysis.
* **Phase**\(t\) refers to the set of time steps during which there are exactly \(t\) pivots.
Figure 4: Dashed and solid lines and arrows are as in Figures 2 and 3. Colors except black represent labels. **Left:** The configuration triggering the Exchange Operation; dashed lines between \(p_{1},\dots,p_{4}\) suppressed. **Right:** Case (2) of the Exchange Operation. The old location for label 3’s (green’s) pivot is no longer a pivot location, and label 3’s pivot is now \(x_{\alpha}\). The new label, label 5 (brown), has its pivot at \(x_{\gamma}\).
Figure 3: Case (4) of the Add Operation. Dashed lines represent well-separation while solid lines represent attachment, both w.r.t. \(w_{i}\) (labelled with the appropriate \(\beta\)). The pivots for labels 2 (red) and 3 (green) move to \(c_{2},c_{3}\), resp., while the old locations for these pivots are where the new pivots 5 and 6 (brown and blue) are located.
* \(p_{1}^{t},\ldots,p_{t}^{t}\) denote the pivots for labels \(1\) through \(t\), respectively, during phase \(t\).
* \(w^{t}\) denotes the natural weights at the end of phase \(t\).
* \(X(t)\) is the set of points assigned a label before or during phase \(t\).
* For \(j\in[t]\), let \(C_{j}^{t}\) denote the points labelled \(j\) in phases \(1\) through \(t\).
* An **intermediate phase**\(t\) is a phase during which no points are given labels. This is a phase solely used to reset pivots.
* A **non-intermediate phase**\(T\) is a phase in which at least one point is given a label.
For a non-intermediate phase \(T\),
* Let \(T^{-}\) denote the most recent non-intermediate phase before \(T\), and \(T^{+}\) the first non-intermediate phase after \(T\) (when these exist).4 Footnote 4: Using the notation in Section 4, \(x_{i-1}\) is the last point labelled during phase \(T\), and \(x_{i}\) is the first point labelled during phase \(T^{+}\).
* For \(j\in[T]\), \(c_{j}^{T}\) is the estimated center (\(\ddagger\)) computed at the end of phase \(T\).
* Let \(y_{1},\ldots,y_{k}\) be the optimal collection of \(k\) centers computed at the end of phase \(T\) in The Estimated Center Subroutine. Let \(P_{T}=\{p_{1}^{T},\ldots,p_{T}^{T},y_{1},\ldots,y_{k}\}\) and call this set the **offline centers** for phase \(T\).5 Footnote 5: Note that \(c_{j}^{T}\) and \(P_{T}\) are only defined for non-intermediate phases \(T\).
* The **attachment digraph**\(D(T)\) is a bipartite digraph with vertex set \(P_{T}\), plus \(c_{j}^{T^{-}}\) if \(T>1\), partitioned as \((\{p_{1}^{T},\ldots,p_{T}^{T}\},\{y_{1},\ldots,y_{k},c_{j}^{T^{-}}\})\). There is a directed arc \((y_{i},p(y_{i}))\) if \(w^{T}(y_{i})\leq w^{T}(p(y_{i}))\) and a directed arc \((p(y_{i}),y_{i})\) otherwise. If \(c_{j}^{T^{-}}\) and \(p_{j}^{T}\) are \(\beta_{T+1}\)-attached w.r.t. \(w^{T}\), add the arc \((c_{j}^{T^{-}},p_{j}^{T})\) if \(w^{T}(c_{j}^{T^{-}})\leq w^{T}(p_{j}^{T})\) and the arc \((p_{j}^{T},c_{j}^{T^{-}})\) otherwise. \(\delta^{+}(p_{j}^{T})\) and \(\delta^{-}(p_{j}^{T})\) denote the out-degree and in-degree of \(p_{j}^{T}\), respectively.
### Invariants
The first property of our algorithm is that it maintains pivots that are sufficiently far apart with respect to their natural weights. This is at the heart of our analysis for both controlling the number of labels used and the algorithm's cost.
**Lemma 1**.: _Let \(t\in[k]\). The algorithm maintains the invariant that \(p_{1}^{t},\ldots,p_{t}^{t}\) are \(\beta_{t}\)-well-separated w.r.t. the natural weights at the start of phase \(t\) (and thereafter).6_
Footnote 6: Two points well-separated at one time step will also be well-separated at a later time step, since their natural weights can only increase and the well-separation parameter \(\beta_{t}\) can only decrease.
The next lemma is a key technical lemma. It states that the estimated center (\(\ddagger\)) for the points given label \(j\)_before_ phase \(T\) is close, in a weighted sense, to the pivot for label \(j\) in phase \(T\). This is key to showing that points in cluster \(j\) that are labelled _before_ phase \(T\) can be combined with those that are labelled _during_ phase \(T\) at bounded cost. This lemma is in tension with the prior lemma, because a pivot must be placed in a location where it is both well-separated from the other pivots and close to the center of mass of the points of a given label.
**Lemma 2**.: _Let \(T\) be a non-intermediate phase and let \(j\in[T]\). Let \(w^{t}\) denote the natural weights at the end of phase \(t\). If \(T>1\), then at least one of the following holds:_
* (a) \(w^{T^{-}}(c_{j}^{T^{-}})\leq w^{T}(p_{j}^{T})\) _and_ \(w^{T^{-}}(c_{j}^{T^{-}})\cdot d(c_{j}^{T^{-}},p_{j}^{T})\leq\beta_{T^{-}}(T-T^{-})\cdot\mathsf{OPT}\)_,_ \(\underline{\text{or}}\)
* (b) \(c_{j}^{T^{-}}\) _is_ \(\beta_{T+1}\)_-attached to_ \(p_{j}^{T}\) _w.r.t._ \(w^{T}\)_._
### Proof of the Algorithm's Feasibility and the Invariants
We begin by showing a bound on the number of well-separated points in the entire point set. Lemma 1 along with Proposition 1 below will immediately imply Theorem 3, which states that the algorithm uses at most \(k\) labels.
**Proposition 1**.: _Let \(X\) be a set of points whose optimal \(k\)-median cost using \(k\) centers is \(\mathsf{OPT}\). Let \(\{x_{1},\ldots,x_{l}\}\) be a set of points in \(X\), and let \(w_{X}\) denote their natural weights in \(X\). Let \(\beta>8\). If \(\{x_{1},\ldots,x_{l}\}\) is \(\beta\)-well-separated w.r.t. \(w_{X}\), then \(l\leqslant k\)._
Proof of Proposition 1.: For shorthand, let \(w_{i}=w_{X}(x_{i})\). By Markov's inequality, for each \(i\in[l]\) there must be at least \(w_{i}/2\) points from \(X\) inside \(B(x_{i},2\mathsf{OPT}/w_{i})\). Consider a clustering on \(X\) with cost \(\mathsf{OPT}\) using \(k\) centers. Then each ball \(B(x_{i},4\mathsf{OPT}/w_{i})\) must contain at least one of these \(k\) centers; for, if not, then at least \(w_{i}/2\) points inside \(B(x_{i},2\mathsf{OPT}/w_{i})\) must each pay strictly more than \(2\mathsf{OPT}/w_{i}\) to reach a center. This contradicts that the optimal \(k\)-median cost is \(\mathsf{OPT}\).
Now it remains to show, using the well-separation assumption, that these balls are disjoint; this will imply that \(l\leqslant k\), since each ball must contain a center. Suppose to the contrary that there exist \(i,j\in[l]\), \(i\neq j\), such that \(p\in B(x_{i},4\mathsf{OPT}/w_{i})\cap B(x_{j},4\mathsf{OPT}/w_{j})\). Applying the triangle inequality gives
\[d(x_{i},x_{j})\leqslant d(p,x_{i})+d(p,x_{j})\leqslant\frac{4\mathsf{OPT}}{w_{i}}+\frac{4\mathsf{OPT}}{w_{j}}\leqslant\frac{8\mathsf{OPT}}{\min\{w_{i},w_{j}\}}\]
which contradicts that \(\{(x_{i},w_{i}),(x_{j},w_{j})\}\) is \(\beta\)-well-separated, i.e., that \(\min\{w_{i},w_{j}\}\cdot d(x_{i},x_{j})\geqslant\beta\mathsf{OPT}\), since \(\beta>8\).
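The counting argument is easy to sanity-check numerically. The sketch below (our own, with made-up data) implements the two predicates from the proof: pairwise \(\beta\)-well-separation, and disjointness of the balls \(B(x_{i},4\mathsf{OPT}/w_{i})\).

```python
from itertools import combinations

def well_separated(points, w, d, beta, opt):
    # beta-well-separated: min-weight * distance >= beta * OPT for every pair
    return all(min(w[a], w[b]) * d(a, b) >= beta * opt
               for a, b in combinations(points, 2))

def balls_disjoint(points, w, d, opt):
    # the balls B(x_i, 4*OPT/w_i) are pairwise disjoint iff each pair of
    # centers is further apart than the sum of the two radii
    return all(d(a, b) > 4 * opt / w[a] + 4 * opt / w[b]
               for a, b in combinations(points, 2))

d = lambda a, b: abs(a - b)            # 1-D metric
w = {0.0: 10, 100.0: 10, 200.0: 10}    # three heavy, far-apart points
opt, beta = 50.0, 9                    # beta > 8, as the proposition requires

assert well_separated(list(w), w, d, beta, opt)
assert balls_disjoint(list(w), w, d, opt)   # so each ball holds its own center
```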
The next two propositions will be used to aid the proofs of Lemmas 1 and 2. Recall that for each non-intermediate phase \(T\), we defined a set of offline centers \(P_{T}\) that has cost at most \(2\mathsf{OPT}\) on \(X(T)\) (Section 5.1). In order to compare the (low-cost) offline clustering induced by \(P_{T}\) to our online algorithm's clustering, we relate the offline set of centers \(P_{T}\) (which we _know_ have bounded cost on \(X(T)\)) to the pivots in phase \(T\) (which are used to make the greedy online choices) in the next proposition.
**Proposition 2**.: _Let \(P_{T}=\{p_{1}^{T},\ldots,p_{T}^{T},y_{1},\ldots,y_{k}\}\) be as in Section 5.1. Then \(y_{i}\) and \(p(y_{i})\) are \(\beta_{T+1}\)-attached w.r.t. the natural weights \(w^{T}\) at the end of phase \(T\)._
Proof of Proposition 2.: First we show that \(y_{i}\) is \(\beta_{T+1}\)-attached to at least one of \(p_{1}^{T},\cdots,p_{T}^{T}\) w.r.t. \(w^{T}\). Suppose not. Then an Add Operation would have been executed instead, and phase \(T\) would have terminated in the previous time step, which is a contradiction. That \(y_{i}\) is \(\beta_{T+1}\)-attached to \(p(y_{i})\) in particular follows directly from the definition (\(\dagger\)) of \(p(y_{i})\) as the pivot with the minimum weighted distance to \(y_{i}\).
Observe that the proof of Proposition 2 is a direct consequence of the fact that we always execute an Add Operation or an Exchange Operation when one is available, so during a phase none are available. Each attached pair in Proposition 2 is encoded in the digraph \(D(T)\) by a directed arc. Due to Proposition 2, we can now think of this directed arc as representing the direction in which we could move a certain number of points sitting near one endpoint to the other endpoint at bounded cost.
Algorithmically, we used \(P_{T}\) to define the estimated centers \(c_{j}^{T}\) (\(\ddagger\)). Next we show that the estimated center for a cluster at the end of a phase is attached to the pivot for that cluster in that phase. Thus, while the pivot itself may not be a good center for the cluster, the pivot is close to the estimated center (in at least one direction, in a weighted sense).
**Proposition 3**.: _The estimated center \(c_{j}^{T}\) is \(\beta_{T+1}\)-attached to \(p_{j}^{T}\) w.r.t. the natural weights \(w^{T}\) at the end of phase \(T\). Further, \(w^{T}(c_{j}^{T})\geqslant w^{T}(p_{j}^{T})\), with equality if and only if \(c_{j}^{T}=p_{j}^{T}\)._
The proof of Proposition 3 is a straightforward consequence of Proposition 2 and the definition (\(\ddagger\)) of estimated center; for completeness, it can be found in Appendix C.
Using these propositions, we establish Lemma 1, used heavily in our analysis. The full proof is involved, so we defer it to Appendix C and provide a sketch of the key ideas here.
Proof sketch of Lemma 1.: The proof is by induction. However, we need to couple the induction with a statement about the relative position of the estimated center for a cluster (which stays fixed between intermediate phases) to that cluster's pivot, which may change often as we consecutively reset the pivots between intermediate phases. Roughly, we prove below that if the estimated center for cluster \(j\) has not separated entirely from the present set of pivots, then it must be close (in a weighted sense) to the present pivot for label \(j\).
**Proposition 4**.: _Let \(w_{i-1}\), \(w_{i}\), and \(w_{t}\) be as in Section 4.2. For each \(j\in[T]\) and \(t\in[T,T^{+}]\) such that \(p_{1}^{t},\ldots,p_{t}^{t}\) are defined,7_
Footnote 7: Recall in Case 4 of the Add Operation and Case 5 of the Exchange Operation, we go directly from \(t\) to \(t+2\) pivots, skipping phase \(t+1\).
\[p_{1}^{t},\ldots,p_{t}^{t}\text{ are }\beta_{t}\text{-well-separated w.r.t. }w_{t}.\tag{$\Diamond$}\]
_Moreover, at least one of the following properties holds:_
* (a) \(c_{j}^{T}\) _is_ \(\beta_{t+1}\)_-well-separated from_ \(p_{1}^{t},\ldots,p_{t}^{t}\) _w.r.t._ \(w_{i}\)_._
* (b) \(c_{j}^{T}\) _is_ \(\beta_{t+1}\)_-attached to_ \(p_{j}^{t}\) _w.r.t._ \(w_{t}\)_._
* (c) \(c_{j}^{T}\) _is_ \(f(t,T)\)_-attached to_ \(p_{j}^{t}\) _w.r.t._ \(w_{t}\) _and_ \(w_{t}(c_{j}^{T})<w_{t}(p_{j}^{t})\)_, where_ \(f(t,T)=\beta_{T}\cdot(t-T)\)_._
For the proof sketch we focus on Case 4 of the Add Operation, which will give a flavor of the arguments. This is a concerning case a priori; for, if we were to add \(x_{\alpha}\) to the set of pivots as in Cases 2 and 3, it is ambiguous as to whether \(x_{\alpha}\) should be associated with label \(f\) or \(g\), as both \(c_{f}^{T}\) and \(c_{g}^{T}\) are close to \(x_{\alpha}\) (see Appendix F for a diagram). We maneuver around the issue by making \(c_{f}^{T}\) and \(c_{g}^{T}\) new pivots and excluding \(x_{\alpha}\). However, it is not immediately clear that such a step will preserve the desired invariants. To give intuition, we suppress the separation parameters and the precise weights used, though we emphasize both are brittle (e.g., the arguments rely heavily on \(\beta_{t}\) decreasing with \(t\), see also Appendix E). The directions of attachment between points (arrows in Figure 5) are also crucial. We will also see why we need to couple the induction with (a)--(c).
To prove the inductive step for (\(\Diamond\)) when Case 4 of the Add Operation is performed, we need to show (i) \(c_{f}^{T}\) and \(c_{g}^{T}\) are well-separated, (ii), WLOG, \(c_{f}^{T}\) is well-separated from \(p_{f}^{t}\), and (iii), WLOG, \(c_{f}^{T}\) is well-separated from \(p_{l}^{t}\), \(l\neq f\). See Figure 5. When we say "close" or "far" below, we always mean in a weighted sense. For (i), because \(p_{f}^{T}\) is close to \(c_{f}^{T}\) (Proposition 3) and likewise for \(p_{g}^{T}\), \(c_{g}^{T}\), then \(c_{f}^{T}\) and \(c_{g}^{T}\) cannot be close, since this would violate that \(p_{f}^{T}\) and \(p_{g}^{T}\) are (inductively) far. To prove (ii), note \(x_{\alpha}\) is far from \(p_{f}^{t}\) by assumption of the Add Operation, and \(c_{f}^{T}\) is close to \(x_{\alpha}\) by assumption of Case 4, so \(p_{f}^{t}\) and \(c_{f}^{T}\) must be far. Finally for (iii), one can (inductively) deduce that (b) must hold when \(j=f\), so \(c_{f}^{T}\) and \(p_{f}^{t}\) are close; but, since \(p_{f}^{t}\) and \(p_{l}^{t}\) are (inductively) far, \(c_{f}^{T}\) and \(p_{l}^{t}\) must be far.
Proving the inductive step for (a)--(c) involves detailed casework. The Add and Exchange Operations are engineered so that, loosely speaking, an estimated center is either attached to the corresponding present pivot, or else breaks off to form its own pivot. A main subtlety is the direction and strength of attachment, e.g., property (c). Another is the sequence of operations, specifically, the Add Operation taking precedence over the Exchange Operation.
Theorem 3 follows from Lemma 1 and Proposition 1 once we observe that we have set \(\beta_{1}\) sufficiently large. For completeness, we include the proof below.
Proof of Theorem 3.: The number of labels used by the algorithm is the number of pivots in the last phase. By Lemma 1, we maintain the invariant that pivots \(p_{1}^{t},\ldots,p_{t}^{t}\) are \(\beta_{t}\)-well-separated w.r.t. the natural weights at every time step in phase \(t\). Suppose to the contrary that the final number of pivots is strictly more than \(k\). Then at some point there are \(t=k+1\) or \(t=k+2\) pivots8 that are \(\beta_{t}\)-well-separated w.r.t. the natural weights throughout phase \(t\). But \(\beta_{k+2}=8\), and it is impossible for \(k+2\) points to be 8-well-separated, by Proposition 1. We conclude the final number of pivots is at most \(k\), so the algorithm uses at most \(k\) labels.
Footnote 8: The algorithm may skip a phase, hence we consider both cases.
As the proof of Proposition 4 shows, the Add and Exchange operations are engineered so that the estimated center \(c_{j}^{T^{-}}\) is close to \(p_{j}^{T}\) at the beginning of phase \(T\). Lemma 2 states that this property is maintained through the end of phase \(T\). In essence, this is because no Add or Exchange operations are executed during phase \(T\), so we can show that the attachment between \(c_{j}^{T^{-}}\) and \(p_{j}^{T}\) is static--even as natural weights increase. The proof is below.
Proof of Lemma 2.: We need to show that: either \(w^{T}(c_{j}^{T})\leqslant w^{T^{+}}(p_{j}^{T^{+}})\) and \(w^{T}(c_{j}^{T})\cdot d(c_{j}^{T},p_{j}^{T^{+}})\leqslant\beta_{T}(T^{+}-T) \cdot\mathsf{OPT}\), or \(c_{j}^{T}\) is \(\beta_{T^{+}+1}\)-attached to \(p_{j}^{T^{+}}\) w.r.t. \(w^{T^{+}}\).
We need the second part of Proposition 4. When we reach the start of phase \(T^{+}\), there are no Add Operations or Exchange Operations available, so (a) in the statement of Proposition 4 cannot hold when \(t=T^{+}\). Thus either (b) or (c) must hold.
Let \(w_{T^{+}}\) denote the natural weights at the _start_ of phase \(T^{+}\). Note this notation is consistent with taking \(t=T^{+}\) in \(w_{t}\) in Proposition 4.
**Case 1**.: _In Proposition 4, (c) holds._
If (c) holds, then
\[w^{T}(c_{j}^{T})\leqslant w_{T^{+}}(c_{j}^{T})<w_{T^{+}}(p_{j}^{T^{+}}) \leqslant w^{T^{+}}(p_{j}^{T^{+}}),\quad\text{and}\]
\[w^{T}(c_{j}^{T})\cdot d(c_{j}^{T},p_{j}^{T^{+}})\leqslant w_{T^{+}}(c_{j}^{T} )\cdot d(c_{j}^{T},p_{j}^{T^{+}})\leqslant\beta_{T}(T^{+}-T)\cdot\mathsf{OPT}\]
where the second inequality in the first line and the last inequality in the second line follow from (c) holding in Proposition 4.
**Case 2**.: _In Proposition 4, (c) does not hold._
If (c) does not hold, then \(w_{T^{+}}(c_{j}^{T})\geqslant w_{T^{+}}(p_{j}^{T^{+}})\) and \(c_{j}^{T}\) is \(\beta_{T^{+}+1}\)-attached to \(p_{j}^{T^{+}}\) w.r.t. \(w_{T^{+}}\), i.e., the weights at the _beginning_ of phase \(T^{+}\) (since \(\beta_{T^{+}+1}<\beta_{T}(T^{+}-T)\)). We need to show that \(c_{j}^{T}\) is \(\beta_{T^{+}+1}\)-attached to \(p_{j}^{T^{+}}\) w.r.t. \(w^{T^{+}}\), i.e., the weights at the _end_ of phase \(T^{+}\). Recall that through the end of phase \(T^{+}\), \(c_{j}^{T}\) remains \(\beta_{T^{+}+1}\)-attached to at least one of the \(T^{+}\) pivots (otherwise, the phase would terminate and an Add Operation would be executed). So it just remains to show that at the end of phase \(T^{+}\), \(c_{j}^{T}\) is still \(\beta_{T^{+}+1}\)-attached to \(p_{j}^{T^{+}}\) in particular, w.r.t. \(w^{T^{+}}\). Suppose to the contrary that \(c_{j}^{T}\) is \(\beta_{T^{+}+1}\)-attached to \(p_{j^{\prime}}^{T^{+}}\) w.r.t. \(w^{T^{+}}\), where \(j^{\prime}\neq j\). Then it must be the case that \(c_{j}^{T}\) is \(\beta_{T^{+}+1}\)-attached to \(p_{j^{\prime}}^{T^{+}}\) w.r.t. \(w_{T^{+}}\). But we also know that \(w_{T^{+}}(c_{j}^{T})\geqslant w_{T^{+}}(p_{j}^{T^{+}})\) and \(c_{j}^{T}\) is \(\beta_{T^{+}+1}\)-attached to \(p_{j}^{T^{+}}\) w.r.t. \(w_{T^{+}}\). By Proposition 5, this contradicts that \(p_{j}^{T^{+}}\) and \(p_{j^{\prime}}^{T^{+}}\) are \(\beta_{T^{+}}\)-well-separated w.r.t. \(w_{T^{+}}\).

Figure 5: Cases (i)–(iii) in the proof sketch of Lemma 1. Dashed lines indicate well-separation and solid lines indicate attachment, labelled with the appropriate parameters. Arrows go from smaller to larger natural weights.
Having established Lemma 2, we are ready to bound the cost of the algorithm, using the present pivot \(p_{j}^{T}\) as the "bridge" between old and newly arriving points given label \(j\). We will show inductively that, at bounded cost, we can move the old points to \(c_{j}^{T^{-}}\), which is in some sense close to \(p_{j}^{T}\) by Lemma 2. In turn, \(p_{j}^{T}\) dictates the greedy choices for the new points. So we will combine the cost of old and new points via \(p_{j}^{T}\).
## 6 Bounding the Algorithm's Cost
Throughout, let \(cost(S;c)=\sum_{p\in S}d(p,c)\) for \(S\subseteq X\) and \(c\in X\).
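As a one-line helper matching this definition (our own notation aid, not the paper's code):

```python
def cost(S, c, d):
    """cost(S; c) = sum of d(p, c) over points p in S."""
    return sum(d(p, c) for p in S)
```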
As a first step, we need to bound the cost contribution of points that arrive during a single phase. The strategy is to compare the online greedy choices with the offline optimal choices, and to show these are sufficiently similar. More specifically, we know that in \(D(T)\) each offline optimal center \(y_{i}\) in \(P_{T}\) is in the neighborhood of exactly one pivot, namely \(p(y_{i})\), and \(y_{i}\) and \(p(y_{i})\) are close in a weighted sense, i.e., attached (Proposition 2). We further know that since no Exchange Operations are executed during a phase, we can show that if \(y_{i_{1}}\) and \(y_{i_{2}}\) are in the neighborhood of the same pivot (\(p(y_{i_{1}})=p(y_{i_{2}})\)), then \(y_{i_{1}}\) and \(y_{i_{2}}\) are also close in a weighted sense.
Using these facts, we would be in good shape if we could show that for every point arriving during phase \(T\), it is the case that if the point is assigned to \(y_{i}\) in the offline optimal solution, then it receives the label of pivot \(p(y_{i})\) online. While this is not quite true, we can instead show that the number of points that do not satisfy this condition is small relative to the natural weights of their pivot, owing to the well-separated invariant (Lemma 1). Further, we show that these points can still be moved to their pivots at bounded cost, due to the greedy labelling rule. In effect, we will _charge_ the cost of these "far" points to their pivot. Lemma 3 summarizes this argument, and is used to prove the main theorem, Theorem 2.
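For reference, the greedy rule the proofs repeatedly invoke (an arriving point receives the label of its nearest current pivot) looks as follows in a minimal sketch; the phase logic, Add/Exchange Operations, and natural-weight bookkeeping are elided, and the names are ours.

```python
def greedy_label(x, pivots, d):
    """pivots: dict mapping label j to pivot location p_j^T.
    Returns argmin_j d(x, p_j^T), the label the online algorithm assigns."""
    return min(pivots, key=lambda j: d(x, pivots[j]))

# example: with pivots at 0 and 100, a point at 30 gets the first label
assert greedy_label(30, {1: 0, 2: 100}, lambda a, b: abs(a - b)) == 1
```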
**Lemma 3**.: _Let \(T\) be a non-intermediate phase. For any \(j\in[T]\), let \(C_{j}\) be the points given label \(j\) during phase \(T\), i.e., \(C_{j}=C_{j}^{T}\backslash C_{j}^{T^{-}}\). Define \(S_{ji}\) to be the set of elements in \(C_{j}\) assigned to \(y_{i}\) in the clustering of \(X(T)\backslash X(T^{-})\) induced by \(P_{T}\). Define \(S_{far,j}=\bigcup_{i:p(y_{i})\neq p_{j}^{T}}S_{ji}\). Then_
1. \(cost(S_{far,j};p_{j}^{T})\leqslant k\cdot(\beta_{T+1}+2)\cdot\mathsf{OPT}\)_, and_
2. \(|S_{far,j}|\leqslant k\cdot w^{T}(p_{j}^{T})\)_, where_ \(w^{T}\) _denotes the natural weights at the end of phase_ \(T\)_._
The following lemma states that the number of points in a cluster by the end of any phase is a bounded factor away from the natural weight (at the end of the phase) of the estimated center for that cluster at the end of the phase.
**Lemma 4**.: _Let \(T\) be a non-intermediate phase and \(j\in[T]\). Let \(w^{T}(c_{j}^{T})\) denote the natural weight of \(c_{j}^{T}\) at the end of phase \(T\) and let \(C_{j}^{T}\) denote the set of points in cluster \(j\) by the end of phase \(T\). Then_
\[|C_{j}^{T}|\leqslant(2k+1)\cdot T\cdot w^{T}(c_{j}^{T}).\]
**Lemma 5**.: _Let \(T\) be a non-intermediate phase and \(j\in[T]\). Then \(cost(C_{j}^{T})\) is bounded against center \(c_{j}^{T}\), i.e.,_
\[\sum_{x\in C_{j}^{T}}d(x,c_{j}^{T})\leqslant g(T,k)\cdot\mathsf{OPT},\ \ g(T,k)=T\cdot g(k),\ g(k)=\beta_{1}(2k^{3}+3k^{2}+5k+1)+2k+4.\]
As a corollary to Lemma 5, we have Theorem 2, the main theorem.
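To get a feel for how this bound scales, the snippet below evaluates \(g(k)\) for a few values of \(k\). Note that \(\beta_{1}\) is only required to be sufficiently large and is not pinned down in this excerpt, so the numeric value used here is a placeholder of ours.

```python
def g(k, beta1):
    # g(k) = beta_1 (2k^3 + 3k^2 + 5k + 1) + 2k + 4, as in Lemma 5
    return beta1 * (2 * k**3 + 3 * k**2 + 5 * k + 1) + 2 * k + 4

beta1 = 100  # placeholder; the paper only requires beta_1 "sufficiently large"
for k in (2, 5, 10):
    print(k, g(k, beta1))  # the phase-T bound is g(T, k) = T * g(k)
```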
Broadly, here is how Lemmas 2, 3, and 4 tie together to prove Lemma 5. The proof of Lemma 5 is by induction. Inductively, the points in cluster \(j\) that arrived before phase \(T\), which we call \(C_{j}^{T^{-}}\), can be moved to their estimated center \(c_{j}^{T^{-}}\) at bounded cost. This estimated center is close to \(p_{j}^{T}\) by Lemma 2. For instance, (a) in Lemma 2 says that once the points in \(C_{j}^{T^{-}}\) are moved to \(c_{j}^{T^{-}}\), they can also be moved to \(p_{j}^{T}\) at bounded cost, since \(w^{T^{-}}(c_{j}^{T^{-}})\cdot d(c_{j}^{T^{-}},p_{j}^{T})\) is bounded and \(|C_{j}^{T^{-}}|\leqslant(2k+1)\cdot T^{-}\cdot w^{T^{-}}(c_{j}^{T^{-}})\) (Lemma 4). Finally, since \(w^{T^{-}}(c_{j}^{T^{-}})\leqslant w^{T}(p_{j}^{T})\), we will be able to _charge_ the points in \(C_{j}^{T^{-}}\) to \(p_{j}^{T}\) in order to move them to \(c_{j}^{T}\), which is attached to \(p_{j}^{T}\) (Proposition 3). By similar logic, the cost of the far points \(S_{far,j}\) from phase \(T\) can be charged to \(p_{j}^{T}\) and then moved to \(c_{j}^{T}\) (Lemma 3). Finally, the remaining points given label \(j\) in phase \(T\), call them \(S_{near,j}\), are close to offline centers in \(P_{T}\) that are in turn close to \(p_{j}^{T}\). Crucially, no Exchange Operation is executed during a phase, so these offline centers are also close to \(c_{j}^{T}\) in a weighted sense.
### Proofs for bounding cost
We start with the proof of Lemma 3, which is a key step where the greedy rule for assigning labels to points is used.
Proof of Lemma 3.: WLOG, let \(j=T\). For \(c\in P_{T}\), let \(m(c)\) be the number of points assigned to \(c\) in the clustering of \(X(T)\backslash X(T^{-})\) induced by the centers \(P_{T}\), i.e., in this clustering every point is assigned to the _nearest_ point in \(P_{T}\).
For shorthand, let \(w\) denote the natural weights \(w^{T}\) of points at the end of phase \(T\).
**Observation 1**.: _For \(c\in P_{T}\), \(w(c)\geqslant m(c)\)._
This follows from the definition of \(w(c)\) and the fact that there are \(m(c)\) points whose movement cost to \(c\) is at most \(2\mathsf{OPT}\), by construction of \(P_{T}\).
**Observation 2**.: _If \((p(y_{i}),y_{i})\) is a directed edge in \(D(T)\), then \(w(p(y_{i}))\cdot d(p(y_{i}),y_{i})<\beta_{T+1}\cdot\mathsf{OPT}\). Likewise, if \((y_{i},p(y_{i}))\) is a directed edge in \(D(T)\), then \(w(y_{i})\cdot d(p(y_{i}),y_{i})<\beta_{T+1}\cdot\mathsf{OPT}\)._
This follows from the definition of \(D(T)\) and Proposition 2.
Call the points in \(S_{far,T}\)_far_ points. In the claims below, we show that the far points can be moved to \(p_{T}^{T}\) at bounded cost (Claims 1 and 2), and that there are not too many far points relative to the weight of \(p_{T}^{T}\) (Claim 3). In turn, we will be able to _charge_ the cost of the far points to \(p_{T}^{T}\).
**Claim 1**.: _Let \(p(y_{i})\neq p_{T}^{T}\). Suppose \(w(y_{i})>w(p(y_{i}))\). Then \(cost(S_{Ti};p_{T}^{T})\leqslant(\beta_{T+1}+2)\mathsf{OPT}\)._
Proof.: WLOG, let \(p(y_{i})=p_{1}^{T}\). We consider two cases.
**Case 1**.: \(|S_{Ti}|\geqslant w(p_{1}^{T})\). _We will show this case cannot happen._
In this case, we know that \(w(y_{i})\geqslant m(y_{i})\geqslant|S_{Ti}|\geqslant w(p_{1}^{T})\). We know by Observation 2 that \(w(p_{1}^{T})\cdot d(p_{1}^{T},y_{i})<\beta_{T+1}\cdot\mathsf{OPT}\). By Proposition 5, this implies \(w(p_{1}^{T})\cdot d(y_{i},p_{T}^{T})\geqslant 2\beta_{T+1}\cdot\mathsf{OPT}\).
Since \(|S_{Ti}|\geqslant w(p_{1}^{T})\), there exists \(S_{Ti}^{\prime}\subseteq S_{Ti}\) such that \(|S_{Ti}^{\prime}|=w(p_{1}^{T})\). In turn, \(cost(S_{Ti}^{\prime};p_{1}^{T})\leqslant cost(S_{Ti}^{\prime};y_{i})+w(p_{1}^{ T})\cdot d(y_{i},p_{1}^{T})<(\beta_{T+1}+2)\cdot\mathsf{OPT}\), since \(P_{T}\) is a clustering with cost at most \(2\mathsf{OPT}\). On the other hand,
\[cost(S_{Ti}^{\prime};p_{T}^{T})\geqslant\sum_{p\in S_{Ti}^{\prime}}d(y_{i},p _{T}^{T})-\sum_{p\in S_{Ti}^{\prime}}d(p,y_{i})=w(p_{1}^{T})\cdot d(y_{i},p_{T }^{T})-\sum_{p\in S_{Ti}^{\prime}}d(p,y_{i})\geqslant(2\beta_{T+1}-2)\mathsf{ OPT}.\]
Since \(\beta_{T+1}\geqslant 4\), \(\beta_{T+1}+2\leqslant 2\beta_{T+1}-2\), so \(cost(S_{Ti}^{\prime};p_{1}^{T})<cost(S_{Ti}^{\prime};p_{T}^{T})\), which violates that \(T=\arg\min_{j\in[T]}d(p,p_{j}^{T})\) for all \(p\in S_{Ti}^{\prime}\subseteq C_{T}\).
**Case 2**.: \(|S_{Ti}|\leqslant w(p_{1}^{T})\).
In this case, we know that since \(w(p_{1}^{T})\cdot d(y_{i},p_{1}^{T})<\beta_{T+1}\cdot\mathsf{OPT}\), we also have \(|S_{Ti}|\cdot d(y_{i},p_{1}^{T})<\beta_{T+1}\cdot\mathsf{OPT}\). By the triangle inequality,
\[cost(S_{Ti};p_{1}^{T})\leq cost(S_{Ti};y_{i})+|S_{Ti}|\cdot d(y_{i},p_{1}^{T}) \leq 2\mathsf{OPT}+\beta_{T+1}\cdot\mathsf{OPT}.\]
Since \(cost(S_{Ti};p_{T}^{T})\leq cost(S_{Ti};p_{1}^{T})\) by the greedy procedure, this proves Claim 1.
**Claim 2**.: _Let \(p(y_{i})\neq p_{T}^{T}\). Suppose that \(w(y_{i})\leq w(p(y_{i}))\). Then \(cost(S_{Ti};p_{T}^{T})\leq(\beta_{T+1}+2)\mathsf{OPT}\)._
Proof.: WLOG, let \(p(y_{i})=p_{1}^{T}\). By Observation 2, \(w(y_{i})\cdot d(y_{i},p_{1}^{T})<\beta_{T+1}\cdot\mathsf{OPT}\). Further, \(|S_{Ti}|\leq m(y_{i})\leq w(y_{i})\), so \(|S_{Ti}|\cdot d(y_{i},p_{1}^{T})<\beta_{T+1}\cdot\mathsf{OPT}\). So:
\[cost(S_{Ti};p_{T}^{T})\leq cost(S_{Ti};p_{1}^{T})\leq cost(S_{Ti};y_{i})+|S_{Ti }|\cdot d(y_{i},p_{1}^{T})\leq 2\mathsf{OPT}+\beta_{T+1}\cdot\mathsf{OPT}.\]
**Claim 3**.: _Let \(p(y_{i})\neq p_{T}^{T}\). Then \(|S_{Ti}|\leq w(p_{T}^{T})\)._
Proof.: As before, assume WLOG that \(p(y_{i})=p_{1}^{T}\).
**Case 1**.: \(w(y_{i})>w(p_{1}^{T})\).
We know from the proof of Claim 1, Case 1 that this implies \(|S_{Ti}|<w(p_{1}^{T})\). We have
\[|S_{Ti}|\cdot d(p_{T}^{T},y_{i})=\sum_{p\in S_{Ti}}d(y_{i},p_{T}^{T})\leq\sum_{p\in S_{Ti}}d(p,p_{T}^{T})+\sum_{p\in S_{Ti}}d(p,y_{i})\leq(\beta_{T+1}+2)\mathsf{OPT}+2\mathsf{OPT}\leq 2\beta_{T+1}\cdot\mathsf{OPT}\leq w(p_{T}^{T})\cdot d(p_{T}^{T},y_{i})\]
where the second inequality uses Claim 1 together with the fact that \(P_{T}\) induces a clustering of cost at most \(2\mathsf{OPT}\), and the last inequality applies Proposition 5, using that \(w(y_{i})>w(p_{1}^{T})\), Observation 2, and that \(p_{1}^{T}\) and \(p_{T}^{T}\) are \(\beta_{T}\)-well-separated w.r.t. \(w\). Finally, dividing both ends of the chain of inequalities by \(d(p_{T}^{T},y_{i})\) gives \(|S_{Ti}|\leq w(p_{T}^{T})\), as desired.
**Case 2**.: \(w(y_{i})\leq w(p_{1}^{T})\).
First consider when \(w(p_{T}^{T})\geq w(y_{i})\). Then \(w(p_{T}^{T})\geq w(y_{i})\geq m(y_{i})\geq|S_{Ti}|\), so the claim follows.
So the last case to consider is when \(w(p_{T}^{T})<w(y_{i})\). It suffices to show that \(w(p_{T}^{T})\cdot d(p_{T}^{T},y_{i})\geq 2\beta_{T+1}\cdot\mathsf{OPT}\); then, we can just apply the argument in Case 1. Suppose to the contrary that \(w(p_{T}^{T})\cdot d(p_{T}^{T},y_{i})<2\beta_{T+1}\cdot\mathsf{OPT}\). Then
\[\beta_{T}\cdot\mathsf{OPT} \leq w(p_{T}^{T})\cdot d(p_{T}^{T},p_{1}^{T})\] \[\leq w(p_{T}^{T})\cdot d(p_{T}^{T},y_{i})+w(p_{T}^{T})\cdot d(y_{i},p_{1}^{T})\] \[\leq 2\beta_{T+1}\cdot\mathsf{OPT}+w(p_{T}^{T})\cdot d(y_{i},p_{1}^{T})\] \[<2\beta_{T+1}\cdot\mathsf{OPT}+w(y_{i})\cdot d(y_{i},p_{1}^{T})\] \[<2\beta_{T+1}\cdot\mathsf{OPT}+\beta_{T+1}\cdot\mathsf{OPT}\] \[=\beta_{T}\cdot\mathsf{OPT}\]
where the second-to-last line follows from Observation 2. The left-hand and right-hand sides give a contradiction, concluding the proof of the case and the claim.
**Claim 4**.: \(cost(S_{far,T};p_{T}^{T})\leq k\cdot(\beta_{T+1}+2)\mathsf{OPT}\) _and \(|S_{far,T}|\leq k\cdot w(p_{T}^{T})\)._
Proof.: By Claims 1 and 2,
\[cost(S_{far,T};p_{T}^{T})=\sum_{i:p(y_{i})\neq p_{T}^{T}}cost(S_{Ti};p_{T}^{T}) \leqslant k\cdot(\beta_{T+1}+2)\mathsf{OPT}\]
By Claim 3,
\[|S_{far,T}|=\sum_{i:p(y_{i})\neq p_{T}^{T}}|S_{Ti}|\leqslant k\cdot w(p_{T}^{T }).\]
The proof of Lemma 4 can be found in Appendix D. The argument is by induction and is similar in flavor to the proof of Lemma 5, which we give below. Lemma 5 implies Theorem 2.
Proof of Lemma 5.: The proof is by induction. Let \(C_{j}\), \(S_{ji}\), and \(S_{far,j}\) be as in Lemma 3. Define \(S_{near,j}=\bigcup_{i:p(y_{i})=p_{j}^{T}}S_{ji}\) and \(S_{j}\) to be the elements in \(C_{j}\) that are assigned to \(p_{j}^{T}\) in the clustering of \(X(T)\backslash X(T^{-})\) induced by \(P_{T}\). Let \(w^{t}\) denote the natural weights at the end of phase \(t\). First we need the following key claim.
**Claim 1**.: _For any \(x,y\in\delta^{+}(p_{j}^{T})\cup\delta^{-}(p_{j}^{T})\cup\{p_{j}^{T}\}\), \(x\) and \(y\) are \(2\beta_{T+1}\)-attached w.r.t. \(w^{T}\)._
Proof of Claim 1.: If \(x\) or \(y\) is \(p_{j}^{T}\), then the claim automatically holds by Proposition 2. There are two other cases. The first case is, WLOG, \(x\in\delta^{-}(p_{j}^{T})\). Regardless of whether \(y\) is in \(\delta^{-}(p_{j}^{T})\) or \(\delta^{+}(p_{j}^{T})\), the claim holds by Propositions 2 and 5. The second case is that \(x,y\in\delta^{+}(p_{j}^{T})\). We prove the stronger statement that \(x\) and \(y\) are \(\beta_{T+1}\)-attached w.r.t. \(w^{T}\). Suppose to the contrary that \(x\) and \(y\) are \(\beta_{T+1}\)-well-separated. We claim that this implies
\[\{p_{1}^{T},\ldots,p_{T}^{T}\}\cup\{x,y\}\backslash\{p_{j}^{T}\} \tag{1}\]
is \(\beta_{T+1}\)-well-separated w.r.t. \(w^{T}\); this would give a contradiction, since if an Exchange Operation were available, it would have been executed. Now suppose that (1) does not hold. Then WLOG \(p_{j^{\prime}}^{T}\) and \(x\) are \(\beta_{T+1}\)-attached w.r.t. \(w^{T}\), for some \(j^{\prime}\neq j\). Since \(x\in\delta^{+}(p_{j}^{T})\) and since \(x\) and \(p_{j^{\prime}}^{T}\) are \(\beta_{T+1}\)-attached w.r.t. \(w^{T}\), by Proposition 5, \(p_{j}^{T}\) and \(p_{j^{\prime}}^{T}\) are \(2\beta_{T+1}\)-attached w.r.t. \(w^{T}\). This contradicts that \(p_{j}^{T}\) and \(p_{j^{\prime}}^{T}\) are \(\beta_{T}\)-well-separated w.r.t. \(w^{T}\), since \(2\beta_{T+1}<\beta_{T}\). This concludes the proof of the case and the claim.
To bound the cost contribution of \(C_{j}^{T^{-}}\), we case on which statement holds in Lemma 2.
**Case 1**.: \(c_{j}^{T^{-}}\) _is \(\beta_{T+1}\)-attached to \(p_{j}^{T}\) w.r.t. \(w^{T}\) (i.e., (b) holds in Lemma 2)._
Since in Case 1, \(c_{j}^{T^{-}}\) is \(\beta_{T+1}\)-attached to \(p_{j}^{T}\) w.r.t. \(w^{T}\), \(c_{j}^{T^{-}}\in\delta^{+}(p_{j}^{T})\cup\delta^{-}(p_{j}^{T})\). Also, \(c_{j}^{T}\) by definition is in \(\delta^{+}(p_{j}^{T})\cup\{p_{j}^{T}\}\). So by Claim 1, \(c_{j}^{T^{-}}\) is \(2\beta_{T+1}\)-attached to \(c_{j}^{T}\) w.r.t. \(w^{T}\). Using this, we bound \(cost(C_{j}^{T^{-}};c_{j}^{T})\):
\[cost(C_{j}^{T^{-}};c_{j}^{T}) \leqslant cost(C_{j}^{T^{-}};c_{j}^{T^{-}})+|C_{j}^{T^{-}}|\cdot d (c_{j}^{T^{-}},c_{j}^{T})\] \[\leqslant g(T^{-},k)\cdot\mathsf{OPT}+|C_{j}^{T^{-}}|\cdot d(c_{j} ^{T^{-}},c_{j}^{T})\] \[\leqslant g(T^{-},k)\cdot\mathsf{OPT}+(2k+1)\cdot T^{-}\cdot w^{T ^{-}}(c_{j}^{T^{-}})\cdot d(c_{j}^{T^{-}},c_{j}^{T})\] \[\leqslant g(T^{-},k)\cdot\mathsf{OPT}+(2k+1)\cdot T^{-}\cdot w^{T }(c_{j}^{T^{-}})\cdot d(c_{j}^{T^{-}},c_{j}^{T})\] \[\leqslant g(T^{-},k)\cdot\mathsf{OPT}+(2k+1)\cdot T^{-}\cdot 2\beta_{T+1}\cdot\mathsf{OPT} \tag{2}\]
where the third inequality is due to Lemma 4.
**Case 2**.: _(b) does not hold in Lemma 2, so (a) holds, i.e., \(w^{T^{-}}(c_{j}^{T^{-}})\leqslant w^{T}(p_{j}^{T})\) and \(w^{T^{-}}(c_{j}^{T^{-}})\cdot d(c_{j}^{T^{-}},p_{j}^{T})\leqslant\beta_{T^{-}} (T-T^{-})\cdot\mathsf{OPT}\)._
We bound \(cost(C_{j}^{T-};c_{j}^{T})\):
\[cost(C_{j}^{T-};c_{j}^{T}) \leqslant cost(C_{j}^{T-};c_{j}^{T-})+|C_{j}^{T-}|\cdot d(c_{j}^{T-},c_{j}^{T})\] \[\leqslant g(T^{-},k)\cdot\mathsf{OPT}+|C_{j}^{T-}|\cdot d(c_{j}^{T -},p_{j}^{T})+|C_{j}^{T-}|\cdot d(p_{j}^{T},c_{j}^{T}) \tag{3}\]
and now we use the assumptions of the case to continue bounding from (3):
\[|C_{j}^{T-}|\cdot d(c_{j}^{T-},p_{j}^{T}) \leqslant(2k+1)\cdot T^{-}\cdot w^{T-}(c_{j}^{T-})\cdot d(c_{j}^{ T-},p_{j}^{T})\] \[\leqslant(2k+1)\cdot T^{-}\cdot\beta_{T^{-}}(T-T^{-})\cdot \mathsf{OPT} \tag{4}\]
where the first inequality is due to Lemma 4. Next,
\[|C_{j}^{T-}|\cdot d(p_{j}^{T},c_{j}^{T})\leqslant(2k+1)T^{-}\cdot w ^{T-}(c_{j}^{T-})\cdot d(p_{j}^{T},c_{j}^{T}) \leqslant(2k+1)T^{-}\cdot w^{T}(p_{j}^{T})\cdot d(p_{j}^{T},c_{j} ^{T})\] \[\leqslant(2k+1)T^{-}\cdot\beta_{T+1}\cdot\mathsf{OPT} \tag{5}\]
where the first inequality is due to Lemma 4 and the last inequality is due to Proposition 3. So combining (3), (4), (5) gives
\[cost(C_{j}^{T-};c_{j}^{T})\leqslant g(T^{-},k)\cdot\mathsf{OPT}+(2k+1)\cdot T^ {-}\cdot(\beta_{T^{-}}(T-T^{-})+\beta_{T+1})\cdot\mathsf{OPT}. \tag{6}\]
Now we have bounds (2) and (6) for \(cost(C_{j}^{T-};c_{j}^{T})\). Recall that \(C_{j}^{T}=C_{j}^{T-}\cup S_{far,j}\cup S_{near,j}\cup S_{j}\). The following bounds will hold regardless of whether we are in Case 1 or 2. We have
\[cost(S_{j};c_{j}^{T})\leqslant cost(S_{j};p_{j}^{T})+|S_{j}|\cdot d(p_{j}^{T}, c_{j}^{T})\leqslant 2\mathsf{OPT}+w^{T}(p_{j}^{T})\cdot d(p_{j}^{T},c_{j}^{T}) \leqslant(2+\beta_{T+1})\mathsf{OPT} \tag{7}\]
\[cost(S_{near,j};c_{j}^{T}) =\sum_{i:p(y_{i})=p_{j}^{T}}cost(S_{ji};c_{j}^{T})\leqslant\sum_{i:p(y_{i})=p_{j}^{T}}\sum_{p\in S_{ji}}d(p,c_{j}^{T})\] \[\leqslant 2\mathsf{OPT}+\sum_{i:p(y_{i})=p_{j}^{T}}w^{T}(y_{i})\cdot d(y_{i},c_{j}^{T})\leqslant(2k\beta_{T+1}+2)\mathsf{OPT} \tag{8}\]
where we have used Claim 1 and that \(|S_{ji}|\leqslant w^{T}(y_{i})\). Finally, by Lemma 3,
\[cost(S_{far,j};c_{j}^{T})\leqslant cost(S_{far,j};p_{j}^{T})+|S_{far,j}|\cdot d (p_{j}^{T},c_{j}^{T})\leqslant k(2\beta_{T+1}+2)\mathsf{OPT} \tag{9}\]
Combining (7), (8), (9) with (2) or (6) gives the sought bound:
\[cost(C_{j}^{T};c_{j}^{T})\leqslant[g(T^{-},k)+g(k)]\mathsf{OPT}\leqslant g(T,k )\cdot\mathsf{OPT}.\]
|
2305.11291 | A systematic review of safety-critical scenarios between automated
vehicles and vulnerable road users | Automated vehicles (AVs) are of great potential in reducing crashes on the
road. However, it is still complicated to eliminate all the possible accidents,
especially those with vulnerable road users (VRUs), who are at greater risk
than vehicle occupants in traffic accidents. Thus, in this paper, we
conducted a systematic review of safety-critical scenarios between AVs and
VRUs. We identified 39 papers in the literature and typical safety-critical
scenarios between AVs and VRUs. They were further divided into three
categories, including human factors, environmental factors, and vehicle
factors. We then discussed the development, challenges, and possible solutions
for each category. In order to further improve the safety of VRUs when
interacting with AVs, multiple stakeholders should work together to 1) improve
AI and sensor technologies and vehicle automation, 2) redesign the current
transportation infrastructure, 3) design effective communication technologies
and interfaces between vehicles and between vehicles and VRUs, and 4) design
effective simulation and testing methods to support and evaluate both
infrastructure and technologies. | Aditya Deshmukh, Zifei Wang, Aaron Gunn, Huizhong Guo, Rini Sherony, Fred Feng, Brian Lin, Shan Bao, Feng Zhou | 2023-05-18T20:15:08Z | http://arxiv.org/abs/2305.11291v1 | A systematic review of safety-critical scenarios between automated vehicles and vulnerable road users
###### Abstract
Automated vehicles (AVs) have great potential to reduce crashes on the road. However, it is still complicated to eliminate all possible accidents, especially those involving vulnerable road users (VRUs), who are at greater risk than vehicle occupants in traffic accidents. Thus, in this paper, we conducted a systematic review of safety-critical scenarios between AVs and VRUs. We identified 39 papers in the literature and typical safety-critical scenarios between AVs and VRUs. They were further divided into three categories: human factors, environmental factors, and vehicle factors. We then discussed the development, challenges, and possible solutions for each category. In order to further improve the safety of VRUs when interacting with AVs, multiple stakeholders should work together to 1) improve AI and sensor technologies and vehicle automation, 2) redesign the current transportation infrastructure, 3) design effective communication technologies and interfaces between vehicles and between vehicles and VRUs, and 4) design effective simulation and testing methods to support and evaluate both infrastructure and technologies.
## Introduction
Vulnerable road users (VRUs) (e.g., pedestrians, cyclists, skate-boarders, e-scooter riders) are at greater risk than vehicle occupants in traffic accidents. VRUs accounted for 26% of all road traffic deaths globally [22]. In the United States, pedestrian fatalities increased by 10.63% and 9.63% in 2015 and 2016, while pedal-cyclist fatalities increased by 30% between 2009 and 2018 [11]. Although automated vehicles (AVs) are expected to improve the safety, mobility, and efficiency of the transportation system [1], there are still challenges when they interact with VRUs in mixed traffic in the near future. Due to the complexity of the various interaction scenarios between AVs and VRUs, it is important to conduct a systematic review of the safety-critical scenarios in the literature.
Therefore, in this study, we 1) conducted a systematic review to understand the major factors that lead to possible accidents between AVs and VRUs, 2) proposed a taxonomy of the major factors involved in these safety-critical scenarios, and 3) discussed the possible solutions to handle these scenarios.
## Method
We conducted a systematic search of related studies by following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) process [14] in different databases, including the ACM Digital Library, IEEE Xplore, ScienceDirect, and IIHS (Insurance Institute for Highway Safety). The sets of domain-specific search key terms were: {"automated vehicle", "automated driving", "autonomous vehicle", "ADAS", "ADS", "bicyclists", "corner cases", "crashes", "cyclists", "pedestrians", "VRU", "E-scooter", "Micro mobility", "skateboard"}. We used a combination of one or more key terms for searching. Key terms were searched within metadata (i.e., title, abstract, and keywords). Papers were restricted to journal publications, conference proceedings, and theses. Eligible studies were within the period from January 2010 to January 2022.
In total, 209 papers were identified in the selected databases. We identified 183 unique papers by removing 26 duplicates, with the following eligibility criteria: 1) an empirical study (e.g., crash data analysis, human-subject experiment, or on-road test/observation), or 2) at least one incident or safety-critical scenario related to AV-VRU interaction had to be reported. After scanning all 183 papers for eligibility based on abstract and methodology, we identified 39 relevant papers for further assessment and systematic review per the above criteria. The paper selection process is shown in Figure 1.
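A self-contained toy version (ours, not the authors' code) of the deduplicate-then-screen steps in this PRISMA flow is sketched below; in the paper the counts were 209 identified, 26 duplicates removed, 183 screened, and 39 included, and eligibility was judged manually rather than by a flag.

```python
records = [
    {"title": "Pedestrian intent prediction", "eligible": True},
    {"title": "Pedestrian Intent Prediction", "eligible": True},  # duplicate
    {"title": "AV marketing survey", "eligible": False},
]

seen, unique = set(), []
for r in records:
    key = r["title"].lower()      # deduplicate on a normalized title
    if key not in seen:
        seen.add(key)
        unique.append(r)

included = [r for r in unique if r["eligible"]]
print(len(records), len(unique), len(included))  # identified / screened / included
```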
## Results
Figure 1: PRISMA overview method for literature searching.

We identified 47 scenarios involving automated vehicles and VRUs from the selected papers. The identified scenarios were grouped into three main categories: human factors (44.7%), environmental factors (27.7%), and vehicle factors (27.7%). Within each main category, subcategories were also defined to further explain the scenarios. The details are shown in Figure 2. For the scenarios related to human factors, most were about VRU behavior (76.2%), followed by driver behavior (14.3%) and other drivers' behavior (9.5%). For the scenarios related to environmental factors, the majority were about occlusion (76.9%), followed by road conditions (15.4%) and light conditions (7.7%). For the scenarios related to vehicle factors, most were about system limitations (46.2%) and system failures (38.5%), followed by algorithmic decision-making (15.4%). (The counts behind these percentages are checked in the short sketch below.)
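A back-of-envelope check of the reported shares: with 47 scenarios in total, counts of 21 human-factor, 13 environmental, and 13 vehicle-factor cases reproduce the quoted split. The counts are inferred from the percentages, not stated explicitly in the paper.

```python
# counts inferred from the reported percentages (21/47 = 44.7%, 13/47 = 27.7%)
counts = {"human": 21, "environmental": 13, "vehicle": 13}
total = sum(counts.values())
for name, c in counts.items():
    print(f"{name}: {100 * c / total:.1f}%")
```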
### Human Factors
The identified scenarios related to human factors are shown in Table 1 and are divided into five types: e-scooter riders, bicyclists, pedestrians, other drivers, and drivers of the AV. The majority of the scenarios are associated with pedestrians.
_Challenges._ The major challenges we identified related to human factors were summarized as follows:
(a) Unpredictability of VRU behaviors
(b) Unpredictability of other vehicles
(c) Understand VRU intentions
(d) Complexities of various safety-critical scenarios
(e) Incompatibility and inadequacy of current AV infrastructure
(f) Mixed traffic between AVs and conventional vehicles
(g) Occlusion of VRUs by objects
(h) Inadequacy of testing and evaluation methods
(i) Density of VRUs in complicated scenarios
(j) Lack of effective communication between VRUs and AVs
_Proposed Solutions._ While these scenarios create challenges for AVs, various solutions were proposed in the literature as summarized below.
**Machine Learning Methods:** Many researchers attempted to deal with interactions with VRUs with machine learning models to understand their behaviors and intentions so that AVs can take proactive responses to avoid potential accidents. In order to improve cyclist safety, Gu and Zhou (2017) proposed a solution with smartphones, which used sensors and GPS data to train a machine learning model to monitor the bicyclist's behavior to improve safety. The results showed that the proposed solutions achieved accuracy between 87% and 90% for detecting the three given scenarios. Garate and Bours (2012) proposed to detect VRUs using machine learning models, based on which an automatic braking system was used to reduce the effect of collision. Such pre-collision systems might create false alarms if the intentions of the VRUs are not well understood, which is one of the main challenges. Liu et al. (2016) analyzed the behavior of bicyclists by measuring the time to collision, lateral positioning, and the speed to inform the pre-collision system to interpret their intentions.
**Redesign Transportation Infrastructure:** The existing transportation infrastructure seems to be inadequate to address various scenarios between AVs and VRUs in mixed traffic. For example, Kiss and Berecz (2019) discussed the safety issues related to AVs in different scenarios and suggested that AV systems should be designed to be aware of the various situations and problems of infrastructure, such as road layouts, vehicle systems, and pedestrian behaviors. Mako (2015) investigated pedestrian accidents at crossings and found that safety measures, such as refuge islands, traffic signals, and flashing lights improved driver behavior. Polnnaluri and Alluri (2021) proposed using intelligent transportation systems to address issues, such as distracted pedestrians crossing when drivers have the right-of-way, bicyclists making left-turns or U-turns, and wrong-way driving by motorists. However, these technologies are not yet widely adopted.
Compared with changes to road infrastructure, networked systems are a promising way to improve the safety of VRUs. Sam et al. (2015) identified situations where accidents occurred due to obstructed views of VRUs caused by buildings, junctions, or traffic. They proposed a solution by creating an alert system based on a vehicular ad hoc network to give drivers advanced warning and increase their reaction time. However, testing such a system in real-world scenarios could pose risks. To overcome this limitation, Heinovski et al. (2019) developed a virtual model to simulate interactions between vehicles and cyclists at intersections, and to investigate and record critical scenarios of cyclists' behavior.
\begin{table}
\begin{tabular}{l l} \hline \hline Type & Description \\ \hline Bicyclists & \(\bullet\) Riding alternately from one side of the lane to the other \\ & \(\bullet\) Standing up to pedal to accelerate \\ & \(\bullet\) Riding in the opposite direction of the traffic \\ & \(\bullet\) Crossing an intersection with vehicles around \\ & \(\bullet\) A bicyclist initiating a left-turn/U-turn maneuver at a signalized intersection and having to use a full travel lane \\ & \(\bullet\) VRUs occluded by objects \\ \hline Pedestrians & \(\bullet\) Crossing a non-privileged road \\ & \(\bullet\) Not yielding to the incoming traffic \\ & \(\bullet\) Crossing the road at the intersection with the red light on \\ & \(\bullet\) Standing on the road for an indefinite period \\ & \(\bullet\) Leaving the sidewalk at such a short distance that the vehicle could not stop \\ & \(\bullet\) A distracted pedestrian crossing a signalized intersection when the conflicting driver had the right-of-way \\ & \(\bullet\) Children crossing the road \\ & \(\bullet\) Jaywalking while crossing the road \\ & \(\bullet\) Waiting in the middle of the road to let the vehicle cross \\ \hline Other drivers & \(\bullet\) Sudden lane changing of the front vehicle \\ & \(\bullet\) Sudden cut-ins by other vehicles \\ & \(\bullet\) Wrong-way driving by a motorist or other vehicle \\ \hline Drivers & \(\bullet\) AV crashing into a bus while leaving a parking space, assuming the bus driver would give the right of way \\ \hline \hline \end{tabular}
\end{table}
Table 1: Typical scenarios related to human factors
Figure 2: Classification of Scenarios
Moreover, recent advancements in communication technology, such as WiFi Direct and Dedicated Short-Range Communication, have drawn attention to vehicle-pedestrian communication. Tahmasbi-Sarvestani et al. (2017) proposed a vehicle-to-pedestrian safety system to extend situation awareness and hazard detection capabilities using the SAE J2735 Personal Safety Message (PSM). The system was implemented and tested in intersection crossings between vehicles and pedestrians. They found that such a communication system provided promising results for improving VRU safety.
**Investigating VRU Behaviors:** To increase the traffic safety of VRUs, the behavior of pedestrians and cyclists should be taken into consideration when designing safety systems, including factors such as trust, willingness, and culture that affect people's behavior. Wong (2019) examined the factors affecting people's behavior and emotions toward AVs and found that those presented with subjective information about an AV crash scenario showed less trust in AVs compared to those given factual information. Sikkenk and Terken (2015) surveyed people's willingness to display polite traffic behavior when interacting with AVs and found that the willingness to give the right of way was influenced by various factors, such as weather, the vulnerability of the road users, and the driving styles of the participants. Wang et al. (2016) compared traffic safety cultures between Sweden and China by testing pre-designed incidents similar to real traffic situations (e.g., sudden lane changing or cut-ins by the front vehicle, a pedestrian suddenly stepping onto the road). They found that the designed advisory traffic information system increased the safety of the drivers in both countries. Moreover, increasing traffic safety for children is important because of children's limited ability to scan the environment while crossing the road. Riaz and Cuenen (2019) developed a platform to teach children traffic safety rules through four modules based on real-life situations. They found that children performed better in familiar situations compared to unfamiliar ones.
**Simulation and Testing:** Researchers aim to increase safety for pedestrians and drivers using AV systems, but real-life testing is difficult and risky due to unpredictable human behavior and the challenge of gathering data on VRUs. Jannat et al. (2020) developed a pedestrian technology test bed and evaluated three different systems (camera-based, camera-radar fusion, and smartphone-based P2I) in four test scenarios, finding that the camera-based systems were vulnerable to environmental conditions while the smartphone-based P2I applications were not. Doric et al. (2016) introduced and evaluated a simulation-based system to mix human behavior when testing traffic safety systems, achieving an acceptable level of realism in a pedestrian-failed-to-yield scenario. AVs are improving traffic safety by attempting to eliminate human errors. However, there is still a need to understand the implications of AVs on traffic safety and researchers used different testing methods to assess the safety benefits. Kutela et al. (2022) analyzed AV crash data to explore the involvement of VRUs in traffic and found that bicyclists and cyclists were more likely to be involved in AV crashes and that crosswalks, intersections, and traffic signals were key factors. Hollander et al. (2020) developed a game to gather data on pedestrian behavior, finding that the game's scenarios correlated with real-world patterns and could be used to create a system for vehicle-pedestrian safety.
### Environmental Factors
Table 2 shows the scenarios related to environmental factors in three types: occlusion, light conditions, and road conditions. The complexity of these scenarios is the main challenge facing AVs.
_Proposed Solutions._ We summarized the solutions below:
**Communicating with VRUs:** The current communications between vehicles can be extended to communications between vehicles and VRUs for improved traffic safety. Segata et al. (2017) developed a probability framework to inform drivers to prevent collisions at intersections where a vehicle turning left might collide with a bicyclist. External human-machine interfaces (eHMIs) on AVs are another active research topic for communicating with VRUs. Rettenmaier et al. (2020) evaluated different eHMIs in a driving simulator, where the AV and other human drivers communicated at a road bottleneck. The results showed that the selected eHMI was able to significantly reduce the passing time and crashes with human drivers. However, communicating in high-traffic areas is still a challenge. Rostami et al. (2016) evaluated the performance and channel load for pedestrian-to-vehicle transmission in a high-density pedestrian scenario. Their results showed that the performance requirements were hard to meet.
**Investigating Environmental Factors:** Advancements in perception technology are increasing AV performance. However, scenarios where objects are hidden or cannot be observed directly remain a major issue. Hoermann et al. (2017) proposed a safe approach for vehicles at intersections where pedestrians are hidden by buildings or vegetation, using sensor fusion to calculate hidden regions and safely navigate turns. Chen et al. (2016) showed that the main factors affecting brake response time were visibility in darkness (which shortened the time to brake), intersections, and the number of potential threat vehicles.
\begin{table}
\begin{tabular}{l l} \hline \hline Type & Description \\ \hline Occlusions & \(\bullet\) A pedestrian hidden by the corner of a building while walking, not leaving enough time for the car to stop \\ & \(\bullet\) A pedestrian walking along/against the traffic and obscured by other traffic \\ & \(\bullet\) Two pedestrians start crossing the street right in front of the driver; one starts walking first and is partly hidden in the pillar blind spot \\ & \(\bullet\) A car taking a left turn at the intersection while the VRU is obscured by the opposing vehicle \\ & \(\bullet\) A pile of snow on the sidewalk obscures both the pedestrian's view and the driver's view \\ & \(\bullet\) Pedestrians being blocked by the corners of buildings \\ \hline Light Conditions & \(\bullet\) Cyclists failing to detect right-turning vehicles under poor light conditions \\ \hline Road Conditions & \(\bullet\) Two vehicles parked perpendicularly on both sides of a residential road, creating a bottleneck \\ & \(\bullet\) The layout of the road is improper, making it difficult to distinguish the main road from the residential road \\ & \(\bullet\) Excessively worn road markings \\ & \(\bullet\) Previous road markings are still visible or have been adjusted due to road construction \\ \hline \hline \end{tabular}
\end{table}
Table 2: Typical scenarios related to environmental factors
Sherony and Zhang (2015) analyzed pre-crash scenarios and crashes between VRUs and vehicles and found that drivers aged 25-30 tended to hit pedestrians and bicyclists crossing the road due to road conditions and lighting conditions.
### Vehicle Factors
The identified scenarios related to vehicle factors are shown in Table 3, divided into three types: system limitations, system failures, and algorithmic decision-making.
_Challenges._ We identified the major challenges as 1) limitations of current sensor and perception technologies, especially in adverse conditions, 2) incompatibility with current infrastructure, 3) regulation and cultural differences across countries, and 4) security issues of communication technology.
_Proposed Solutions:_ We discuss the solutions proposed in the literature as summarized below:
**Sensor Technology:** AVs are heavily dependent on sensor technologies to increase traffic safety, and thus advancements in such technologies can reduce VRU fatalities. Combs et al. (2019) examined the potential of AVs to increase traffic safety by analyzing nearly 5000 pedestrian-related incidents, finding that combining visible-light cameras, light detection and ranging, and radar sensors was more effective than a single sensor in reducing incidents. Recent studies suggest that improving road infrastructure compatibility with AV sensors is important for improving traffic safety. Chipengo and Commens (2019) found that the high radar cross-section of current road guardrails and construction steel plates could trigger false alarms or cause important signs to be ignored. They demonstrated two techniques to reduce radar cross-section and improved the effectiveness of sensors used in ADAS systems for vehicle functions like steering, braking, and obstacle alerts. One challenge of ADAS systems, however, is that they might be unable to differentiate real obstacles from phantom objects. Nassi et al. (2020) demonstrated a scenario in which ADAS systems, such as the Mobileye 630 and Tesla Model X, recognized phantom depthless images that appeared for split seconds as real objects. They also proposed a countermeasure model to determine the authenticity of the object with the help of camera sensors.
Another challenge for sensor technology is its reliability in adverse conditions for SAE Level 3 and above AVs. For example, Du et al. (2020) found that in scenarios with high cognitive load and high incoming traffic density, drivers had the worst performance and lower takeover readiness. Utriainen (2020) studied the potential impact of AVs on pedestrian safety in adverse weather conditions and evaluated the safety impacts in scenarios based on the AV's ability to operate in snowy and low-light conditions. The results showed that 28% and 73% of fatal crashes could have been avoided by Level 4 and Level 5 AVs, respectively.
**False or Miscommunication:** Traffic safety can be compromised by vehicles sending malicious and false information to other vehicles in vehicle-to-vehicle communication. Sarker and Shen (2018) studied a scenario where an AV system sent false information to another vehicle, which caused a safety-related incident. Thus, it is important to detect such malicious information. Miscommunication using eHMIs between AVs and VRUs can also lead to safety issues. Hollander et al. (2019) investigated the trust of pedestrians in a scenario where external displays showed misinformation or contradicting information at an intersection. The results showed that misinformation decreased pedestrians' trust or worsened overtrust, even in the case of vehicle malfunction. Further research in the domain of vehicle-to-VRU communication is needed to increase the safety of VRUs.
**Driver Assistance Technology:** Issues with other assistance technologies, such as navigation, crash avoidance, and vehicle automation, also played a role in accidents between VRUs and AVs. For example, Lin et al. (2017) identified, from 158 news reports, the key reasons for accidents caused by navigation systems, such as missing road-characteristics data, weather conditions, and poor audio and visual instructions. Good et al. (2017) analyzed how well a crash avoidance system performed in bicycle crash scenarios. Their findings indicated that the crash avoidance system did not offer significant safety or crash avoidance benefits; as a result, further work is required to enhance the effectiveness of crash avoidance systems. Wotton et al. (2022) studied who was held responsible for a crash involving an AV that hit a pedestrian due to system failure while the driver was distracted. They looked at four levels of vehicle automation and found that, despite differences in drivers' behaviors, the drivers were deemed responsible for the accident even though their behavior did not have a significant impact on the outcome.
## Conclusions
In this paper, we conducted a systematic review of safety-critical scenarios between AVs and VRUs. We identified three major categories, including human factors, environmental factors, and vehicle factors. Within each category, we identified detailed scenarios from the literature and discussed the proposed solutions that showed the advancement in improving safety of VRUs in interacting with AVs. However, due to the complexities of various scenarios, more work is needed to address the challenges, including but not limited to 1) understanding intentions and behaviors of VRUs, 2) improving the current transportation infrastructure, 3) communicating with VRUs effectively, 4) improving the sensor technologies in AVs, 5) building and utilizing reliable simulation and testing methods, and so on.
\begin{table}
\begin{tabular}{l l} \hline \hline Type & Description \\ \hline System Limitations & \(\bullet\) Short takeover lead time while in heavy traffic density \\ & \(\bullet\) Digitally displayed imagery is perceived as a real object \\ & \(\bullet\) AV is not able to operate in adverse conditions and without visible lane markings \\ & \(\bullet\) Interaction of people of different cultures with AVs \\ & \(\bullet\) Visible light communication is ineffective at dusk and radar is ineffective when pedestrians are stationary \\ \hline System Failure & \(\bullet\) The AV shows wrong or contradicting information on the external display to the VRUs \\ & \(\bullet\) The ADAS cannot detect imagery that appears briefly \\ & \(\bullet\) Sensors failed to detect roadway infrastructure \\ & \(\bullet\) A vehicle sends false information to another vehicle, causing the other vehicle to act and leading to a fatal collision \\ \hline Algorithmic Decision Making & \(\bullet\) A situation where the AV has to make a decision: if it continues ahead, it will hit and kill a group of pedestrians, including three adults and a dog, crossing on a red light; if it swerves, it will hit a barrier and kill its passenger \\ \hline \hline \end{tabular}
\end{table}
Table 3: Typical scenarios related to vehicle factors
## Acknowledgement
This work was supported by Toyota Motor North America.
|
2308.00360 | An Efficient Algorithm for Computational Protein Design Problem | A protein is a sequence of basic blocks called amino acids, and it plays an
important role in animals and human beings. The computational protein design
(CPD) problem is to identify a protein that could perform some given functions.
The CPD problem can be formulated as a quadratic semi-assignment problem (QSAP)
and is extremely challenging due to its combinatorial properties over different
amino acid sequences. In this paper, we first show that the QSAP is equivalent
to its continuous relaxation problem, the RQSAP, in the sense that the QSAP and
RQSAP share the same optimal solution. Then we design an efficient quadratic
penalty method to solve large-scale RQSAP. Numerical results on benchmark
instances verify the superior performance of our approach over the
state-of-the-art branch-and-cut solvers. In particular, our proposed algorithm
outperforms the state-of-the-art solvers by three orders of magnitude in CPU
time in most cases while returning a high-quality solution. | Yukai Zheng, Weikun Chen, Qingna Li | 2023-08-01T08:05:03Z | http://arxiv.org/abs/2308.00360v1 | # An Efficient Algorithm for Computational Protein Design Problem
###### Abstract
A protein is a sequence of basic blocks called amino acids, and it plays an important role in animals and human beings. The computational protein design (CPD) problem is to identify a protein that could perform some given functions. The CPD problem can be formulated as a quadratic semi-assignment problem (QSAP) and is extremely challenging due to its combinatorial properties over different amino acid sequences. In this paper, we first show that the QSAP is equivalent to its continuous relaxation problem, the RQSAP, in the sense that the QSAP and RQSAP share the same optimal solution. Then we design an efficient quadratic penalty method to solve large-scale RQSAP. Numerical results on benchmark instances verify the superior performance of our approach over the state-of-the-art branch-and-cut solvers. In particular, our proposed algorithm outperforms the state-of-the-art solvers by three orders of magnitude in CPU time in most cases while returning a high-quality solution.
keywords: Computational protein design, Linear programming, Quadratic assignment problem, Penalty method, Projected Newton method.
[inst]organization=School of Mathematics and Statistics, Beijing Institute of Technology, Beijing, China.
[inst]organization=School of Mathematics and Statistics/Beijing Key Laboratory on MCAACI, Beijing Institute of Technology, Beijing, China.
## 1 Introduction
Proteins are sequences of amino acids, and they play an important role in almost all the structural, catalytic, sensory and regulatory functions of living systems [11]. Different functions usually require proteins to be assembled into specific three-dimensional structures defined by their amino acid sequences [11]. Over millions of years, during the process of natural evolution, proteins acquire entirely new structures and functions through sequence variation, including mutation, recombination and repetition. Nowadays, as the protein engineering technology gives a huge boost to the development of medicine, synthetic biology, nanotechnology and biotechnology [30; 20; 43], it has become a topic of wide interest [29; 1]. For example, protein engineering has become a key technology for the manufacture of customized enzymes that can catalyze directional conversion under specific conditions [21; 23].
As each position on the protein chain has to be selected from over 20 kinds of natural amino acids, current experimental methods cannot cope with such combinatorial complexity, even for short amino acid sequences [23; 31]. Therefore, computational protein design (CPD) methods [35; 1] attempt to guide the protein design process by producing a set of candidate proteins that is rich in functional proteins, but also small enough to be evaluated experimentally. In this way, the problem of selecting amino acid sequences to perform a given task can be defined as a computable optimization problem. It is often described as the inverse of the protein folding problem [28; 7; 46]: the three-dimensional structure of a protein is known, and we need to find the amino acid sequence that folds into it [8].
The challenge of CPD problems lies in its combinatorial properties over different choices of natural amino acids. The resulting optimization model is usually NP-hard [33; 41]. Existing methods for CPD problems can be summarized into two lines. One line focuses on different mathematical models, including probabilistic graphical model [40; 14], linear integer programming model [48; 23], 0-1 quadratic programming model [34; 13], weighted partial maximum satisfiability problem (MaxSAT) [25; 36] and so on [1; 23]. Different models have different application scopes and performances in different situations. The other line devotes efforts to preprocessing methods, trying to reduce the computational complexity of the model [37; 1; 45]. For example, the dead end elimination (DEE) method [1; 45] reduces the problem size by eliminating some selection choices in the combinatorial space which does not contain the optimal solution. Such strategy can speed up the algorithm when solving the CPD problem [1]. Several successful cases have demonstrated the outstanding potential of the CPD methods for the
design of proteins with improved or brand new properties [1]. We refer to [1] for various preprocessing methods.
Our interest in this paper is in the mathematical model for the CPD problem, which is in the first line. Note that the CPD problem is essentially an integer programming problem. Among various models for integer programming, assignment models and corresponding algorithms have been widely applied in financial decision making [6], resources allocation [44] and especially in solving dynamic traffic problems [12; 22; 39]. In [9], the authors reformulate the hypergraph matching problem as an assignment problem, with nonlinear objective function. Due to the special structure in hypergraph matching problem, the authors propose a continuous relaxation problem which can also recover the optimal solution of the hypergraph matching problem. The key point of such recovery property lies in the linearity of the objective function with each block of assignment variable. Such favorable property is further explored in [47], where the assignment variable is introduced for Multi-Input-Multi-Output (MIMO) detection problem, and exact recovery result is also established therein.
Inspired by the work above, we consider the CPD problem as a quadratic semi-assignment problem (QSAP) in this paper. The QSAP enjoys the same favorable property as in [9; 47], i.e., the objective function is linear with respect to each block of the assignment variable. With this property, the continuous relaxation problem can be proved to recover the global optimal solution of the QSAP. Finally, we design an efficient quadratic penalty method to solve the relaxation problem. Numerical results verify the efficiency of our proposed algorithm.
The rest of this paper is organized as follows. In Section 2, we introduce some preliminaries and formulate the CPD problem as a QSAP. In Section 3, we study the relaxation of the CPD problem and propose a quadratic penalty method to solve the relaxation problem. In Section 4, we report the numerical results. Final conclusions are made in Section 5.
## 2 Problem Formulation
In this section, we give some preliminaries and formulate the CPD problem as a semi-assignment problem.
Proteins are sequences of organic compounds called amino acids, and each amino acid consists of a peptide core and a side chain. The amino acid cores are sequentially joined together to form the backbone of the protein. See **Figure** 1 for a demonstration. All proteins fold into a three-dimensional shape based on the information contained in their amino acid sequence. Depending on the amino acid
being considered, the side chain of each amino acid can be rotated by up to 4 dihedral angles relative to the main chain. According to Anfinsen's results [2], the three-dimensional structure of a protein can be determined by the rotation of the main and the corresponding side chains. This is called the conformation of the protein, and it determines the chemical reactivity and biological function of the protein [1; 23].
In the CPD problem, there are usually two assumptions [1]. Firstly, we assume that the final protein design preserves the overall folding pattern of the selected scaffold: that is, the protein backbone is assumed to be fixed. Amino acids can be modified by changing side chains at specific locations selected by computational biologists. Secondly, we assume that the conformational domain of each amino acid side chain is actually continuous. This continuous region can be approximated by a set of discrete conformations defined by their internal dihedral angle values. These conformations, also called rotamers [17], are derived from the most commonly used conformations in the Protein Data Bank (PDB), an experimental repository of known protein structures.
The CPD problem can be described as the problem of obtaining the conformation with the lowest energy by changing the amino acid residues, that is, changing the amino acids' identity and their 3D orientations. The conformation that minimizes the energy is called the global minimum energy conformation (GMEC) [1]. To solve this problem, a computable energy model is usually used to evaluate the energy of any conformation, together with computational optimization techniques that can effectively explore the conformational space to find the global minimum energy conformation.

Figure 1: A local view of relations between protein backbone, residue positions and rotamers [18]. Numbers in the picture denote the pairwise interaction energy between different selected rotamers.
So far, various energy functions have been defined to make the energy of the protein design problem easy to calculate and manage [4]. In this paper we use the version implemented through \(osprey\) 2.0, which has been widely used in solving CPD problems, such as in [15, 1, 27, 42]. That is, for a certain conformation, its energy can be expressed by the energy function below:
\[E=E_{\varnothing}+\sum_{i}E(i_{r})+\sum_{i}\sum_{j>i}E(i_{r},j_{s}), \tag{1}\]
where \(E_{\varnothing}\) is a constant energy contribution capturing interactions between fixed parts of the model; \(E(i_{r})\) is the energy contribution of rotamer \(r\) at position \(i\) capturing internal interactions (and a reference energy for the associated amino acid) or interactions with fixed regions and \(E(i_{r},j_{s})\) is the pairwise interaction energy between rotamer \(r\) at position \(i\) and rotamer \(s\) at position \(j\)[10].
The CPD problem is therefore an optimization problem defined by a specific set of positions, i.e. residues, on a fixed backbone to be selected, a rotamer library, and a set of energy functions. Each position \(i\) on the backbone corresponds to a subset \(I^{(i)}\) of all (amino-acid, rotamer) pairs in the library, and we need to select a residue from the corresponding \(I^{(i)}\) at each position \(i\) to minimize the total energy E. In practice, depending on the expertise or specific design requirements, \(I^{(i)}\) at each position \(i\) can be fixed (that is, \(I^{(i)}\) is single-valued), flexible (all pairs in \(I^{(i)}\) have the same amino acids), or variable (the general situation).
Next, we formulate the rigid backbone discrete rotamer CPD problem as a QSAP. Let \(n\) be the number of positions, and we use \([n]\) to denote \(\{1,2,...,n\}\). Let \(l_{i}\) be the number of alternative rotamers that can be located in position \(i\), \(i\in[n]\). That is, \(|I^{(i)}|=l_{i}\). Since the numbers of alternative residues at different positions on the backbone may differ from each other, we cannot use a rectangular matrix as the variable in the model. Therefore, we use a vector \(x\) as the variable instead.
Let \(m=\sum\limits_{i=1}^{n}l_{i}\). Define \(x\in\mathbb{R}^{m}\) as follows:
\[x=\begin{pmatrix}x^{(1)}\\ \vdots\\ x^{(n)}\end{pmatrix}=\begin{pmatrix}x_{1}^{(1)}\\ \vdots\\ x_{l_{1}}^{(1)}\\ \vdots\\ x_{l_{n}}^{(n)}\end{pmatrix}\in\mathbb{R}^{m},\]
Let \(x^{(i)}\in\mathbb{R}^{l_{i}}\) be the \(i\)-th block of the assignment variable \(x\), \(i\in[n]\). We have:
\[x_{r}^{(i)}=\begin{cases}1,&\text{if $r$-th rotamer is assigned to position $i$,}\ \ i\in[n],\\ 0,&\text{otherwise.}\end{cases}\]
Define \(a\in\mathbb{R}^{m}\), \(B\in\mathbb{R}^{m\times m}\) as:
\[a=\begin{pmatrix}a^{(1)}\\ \vdots\\ a^{(n)}\end{pmatrix}=\begin{pmatrix}a_{1}^{(1)}\\ \vdots\\ a_{l_{1}}^{(1)}\\ \vdots\\ a_{l_{n}}^{(n)}\end{pmatrix}\in\mathbb{R}^{m},\ \ B=\begin{pmatrix}\mathbf{0}_{ \mathbf{l_{1}}\times\mathbf{l_{1}}}&B_{12}&\cdots&B_{1n}\\ B_{12}^{T}&\mathbf{0}_{\mathbf{l_{2}}\times\mathbf{l_{2}}}&\cdots&B_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ B_{1n}^{T}&B_{2n}^{T}&\cdots&\mathbf{0}_{\mathbf{l_{n}}\times\mathbf{l_{n}}} \end{pmatrix}\in\mathbb{R}^{m\times m},\]
where \(a_{r}^{(i)}=E(i_{r})\), \(r\in I^{(i)}\), \(i\in[n]\), with
\[B_{ij}=\begin{pmatrix}b_{11}^{ij}&b_{12}^{ij}&\cdots&b_{1l_{j}}^{ij}\\ b_{21}^{ij}&b_{22}^{ij}&\cdots&b_{2l_{j}}^{ij}\\ \vdots&\vdots&\ddots&\vdots\\ b_{l_{i}1}^{ij}&b_{l_{i}2}^{ij}&\cdots&b_{l_{i}l_{j}}^{ij}\end{pmatrix}\in \mathbb{R}^{l_{i}\times l_{j}},\ \ i<j,\ \ i,\ \ j\in[n],\]
and
\[b_{rs}^{ij}=E(i_{r},j_{s}),\ \ r\in I^{(i)},\ \ s\in I^{(j)},\ \ i<j,\ \ i,\ \ j\in[n],\]
Here \(i\) and \(j\) represent different positions on the backbone of the protein.
Based on the above notations, the objective function of the CPD problem can
be represented by:
\[f(x)=\frac{1}{2}x^{T}Bx+a^{T}x, \tag{2}\]
and therefore the CPD problem can be expressed as the following QSAP:
\[\begin{split}\min_{x\in\mathbb{R}^{m}}& f(x)\\ \text{s.t.}&\sum_{r\in I^{(i)}}x_{r}^{(i)}=1,\ \ i\in[n],\\ & x_{r}^{(i)}\in\{0,1\}\,,\ \ r\in I^{(i)},\ \ i\in[n],\end{split} \tag{3}\]
where \(f(x)\) is defined as in (2).
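To make the formulation concrete, the following is a small, self-contained Python sketch (all names are illustrative and not taken from our MATLAB implementation) that assembles \(a\) and \(B\) from the single-rotamer and pairwise energy tables and evaluates \(f(x)\) for a feasible 0/1 assignment:

```python
import numpy as np

def build_qsap(E_single, E_pair):
    """Assemble a and B of f(x) = 0.5 x^T B x + a^T x from energy tables.

    E_single: list over positions i of 1-D arrays, E_single[i][r] = E(i_r).
    E_pair:   dict keyed by (i, j) with i < j of 2-D arrays,
              E_pair[(i, j)][r, s] = E(i_r, j_s).
    """
    l = [len(e) for e in E_single]               # block sizes l_i
    offs = np.concatenate(([0], np.cumsum(l)))   # block offsets inside x
    m = offs[-1]
    a = np.concatenate(E_single)
    B = np.zeros((m, m))
    for (i, j), Eij in E_pair.items():           # off-diagonal blocks only
        B[offs[i]:offs[i+1], offs[j]:offs[j+1]] = Eij
        B[offs[j]:offs[j+1], offs[i]:offs[i+1]] = Eij.T
    return a, B, offs

def f(x, a, B):
    return 0.5 * x @ B @ x + a @ x

# A toy instance with n = 3 positions and l = (2, 3, 2) rotamers:
rng = np.random.default_rng(0)
E_single = [rng.random(k) for k in (2, 3, 2)]
E_pair = {(i, j): rng.random((len(E_single[i]), len(E_single[j])))
          for i in range(3) for j in range(i + 1, 3)}
a, B, offs = build_qsap(E_single, E_pair)

# A feasible 0/1 assignment: exactly one rotamer per position.
x = np.zeros(offs[-1]); x[[0, offs[1] + 2, offs[2] + 1]] = 1.0
print(f(x, a, B))   # total energy of the chosen conformation
```

Because the off-diagonal blocks of \(B\) are mirrored, the factor \(\frac{1}{2}\) in \(f\) counts each pairwise energy exactly once, matching the energy function (1).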
## 3 Relaxation Problem and The Algorithm
In this part, we first show the equivalence between problem (3) and its relaxation problem. Then we design a quadratic penalty method to solve the relaxation problem.
### Relaxation Problem
Like many other quadratic assignment problems such as the traveling salesman problem [16], the bin-packing problem [26] and the max clique problem [5], the CPD problem (3) is also NP-hard [33; 41] in general, which means that the computational cost to solve (3) rises dramatically as the scale of the problem increases. Thus a natural way to solve (3) is to consider its relaxation problem as follows:
\[\begin{split}\min_{x\in\mathbb{R}^{m^{\prime}}}& f(x)\\ \text{s.t.}&\sum_{r\in I^{(i)}}x_{r}^{(i)}=1,\ \ i\in[n],\\ & x_{r}^{(i)}\geqslant 0,\ \ r\in I^{(i)},\ \ i\in[n].\end{split} \tag{4}\]
After relaxing \(x_{r}^{(i)}\in\{0,1\}\) to \(x_{r}^{(i)}\in[0,1]\), the feasible region in (4) is much larger than that of (3). Therefore, a natural question is, what is the relationship between the global minimizer of (3) and the global minimizer of (4)? To answer this question, we first have the following proposition.
**Proposition 1**.: _f(x) is a linear function with respect to each block \(x^{(i)}\), \(i\in[n]\)._
In fact, \(\nabla f(x)=a+Bx\), and \(\nabla_{x^{(i)}}f(x)\) takes the following form, where \(i\), \(j\in[n]\).
\[\nabla_{x^{(i)}}f(x)=a^{(i)}+\sum_{j\neq i}B_{ij}x^{(j)}=\begin{pmatrix}a_{1}^{(i)}\\ a_{2}^{(i)}\\ \vdots\\ a_{l_{i}}^{(i)}\end{pmatrix}+\begin{pmatrix}\sum\limits_{j\neq i}\sum\limits_{s\in I^{(j)}}b_{1s}^{ij}x_{s}^{(j)}\\ \sum\limits_{j\neq i}\sum\limits_{s\in I^{(j)}}b_{2s}^{ij}x_{s}^{(j)}\\ \vdots\\ \sum\limits_{j\neq i}\sum\limits_{s\in I^{(j)}}b_{l_{i}s}^{ij}x_{s}^{(j)}\end{pmatrix}. \tag{5}\]
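This block-wise linearity can also be checked numerically. In the short illustrative Python sketch below (random toy data, not from the benchmark instances), the diagonal blocks of \(B\) are zero as in (3), so \(f\) restricted to a single block is affine and its value at a convex combination of two block candidates equals the same combination of the two function values:

```python
import numpy as np

rng = np.random.default_rng(1)
offs = [0, 2, 5, 7]                        # n = 3 blocks of sizes 2, 3, 2
m = offs[-1]
a = rng.random(m)
B = rng.random((m, m)); B = 0.5 * (B + B.T)
for lo, hi in zip(offs[:-1], offs[1:]):
    B[lo:hi, lo:hi] = 0.0                  # zero diagonal blocks, as in (3)

f = lambda x: 0.5 * x @ B @ x + a @ x

x1 = np.array([1, 0, 1, 0, 0, 0, 1.0])     # differs from x2 only in block 2
x2 = np.array([1, 0, 0, 0, 1, 0, 1.0])
t = 0.3
assert np.isclose(f(t * x1 + (1 - t) * x2), t * f(x1) + (1 - t) * f(x2))
```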
Due to this linear property of \(f(x)\) with respect to each block \(x^{(i)}\), \(i\in[n]\), we have the following result.
**Theorem 1**.: _There exists an optimal solution \(x^{*}\) to the relaxation problem (4) such that \(\|x^{*}\|_{0}=n\). Thus \(x^{*}\) is also an optimal solution to the original problem (3)._
Proof of Theorem 1.: Define:
\[x^{(-i)}=\begin{pmatrix}x^{(1)}\\ \vdots\\ x^{(i-1)}\\ x^{(i+1)}\\ \vdots\\ x^{(n)}\end{pmatrix}\in\mathbb{R}^{m-l_{i}},\]
and
\[f(x)=\frac{1}{2}x^{T}Bx+a^{T}x:=f_{B}(x^{(i)},x^{(-i)})+f_{-B}(x^{(-i)})+f_{a}(x^{(i)})+f_{-a}(x^{(-i)}),\]

where \(f_{B}(x^{(i)},x^{(-i)})=(x^{(i)})^{T}\sum_{j\neq i}B_{ij}x^{(j)}\) collects the terms coupling block \(i\) with the other blocks, \(f_{-B}\) collects the quadratic terms among the remaining blocks, \(f_{a}(x^{(i)})=(a^{(i)})^{T}x^{(i)}\), and \(f_{-a}\) collects the remaining linear terms. Note that \(f_{B}(x^{(i)},x^{(-i)})+f_{a}(x^{(i)})=(x^{(i)})^{T}\nabla_{x^{(i)}}f(x)\).
Assume that \(x_{0}\) is an optimal solution to (4), such that \(\|x_{0}\|_{0}>n\); then we can find the first block of \(x_{0}\), denoted as \(x_{0}^{(i)}\), such that \(\|x_{0}^{(i)}\|_{0}>1\). For any \(r^{*}\in\Gamma(x_{0}^{(i)})=\left\{r:(x_{0})_{r}^{(i)}>0\right\},\) define \(x_{1}\) as:
\[(x_{1})_{r}^{(i)}=\begin{cases}1,&r=r^{*},\\ 0,&\text{otherwise},\end{cases}x_{1}^{(j)}=\begin{cases}x_{1}^{(i)},&j=i,\\ x_{0}^{(j)},&\text{otherwise}.\end{cases}\]
Then \(x_{1}\) is also a feasible solution for (4), such that \(x_{1}^{(-i)}=x_{0}^{(-i)}\), and hence \(\nabla_{x^{(i)}}f(x_{1})=\nabla_{x^{(i)}}f(x_{0})\). Moreover, since \(f\) is linear in the block \(x^{(i)}\) and \(x_{0}\) is optimal, every \(r\in\Gamma(x_{0}^{(i)})\) attains the same minimal value of \((\nabla_{x^{(i)}}f(x_{0}))_{r}\); otherwise, moving all the mass of the block to a coordinate with a strictly smaller partial derivative would strictly decrease \(f\). Therefore, we have:

\[\begin{split} f(x_{1})-f(x_{0})&=f_{B}(x_{1}^{(i)},x_{1}^{(-i)})+f_{a}(x_{1}^{(i)})-f_{B}(x_{0}^{(i)},x_{0}^{(-i)})-f_{a}(x_{0}^{(i)})\\ &=(x_{1}^{(i)})^{T}\nabla_{x^{(i)}}f(x_{0})-(x_{0}^{(i)})^{T}\nabla_{x^{(i)}}f(x_{0})\\ &=(\nabla_{x^{(i)}}f(x_{0}))_{r^{*}}-\sum_{r\in\Gamma(x_{0}^{(i)})}(x_{0})_{r}^{(i)}(\nabla_{x^{(i)}}f(x_{0}))_{r}\\ &=0.\end{split}\]
Therefore \(f(x_{1})=f(x_{0})\), and thus \(x_{1}\) is also an optimal solution of (4). Repeat this process, until we get \(x_{k}\), such that \(\|x_{k}\|_{0}=n\), and \(x_{k}\) is an optimal solution to (4). This shows that \(x_{k}\) is also an optimal solution to (3), which completes the proof.
**Theorem 1** basically reveals that problem (4) is a tight continuous relaxation of problem (3) in the sense that two problems share at least one global minimizer.
Based on the above result, suppose one gets a global minimizer of the relaxation problem (4), then one can use the following algorithm to get the global minimizer of the original problem (3). In this way, we can obtain the solutions of the original CPD problems by just solving their relaxations.
```
Input:  a global optimal solution x^0 in R^m of the relaxation problem (4)
Output: a global optimal solution x_hat in R^m of the original problem (3)
1: initialization: l = 0
2: while ||x^l||_0 > n do
3:    scan the blocks i = 1, ..., n and find the first block of x^l,
      denoted (x^l)^(j), such that ||(x^l)^(j)||_0 > 1;
      choose one index s^0 from Gamma_j(x^l) := { s : (x^l)^(j)_s > 0 }
4:    define x^{l+1} by
         (x^{l+1})^(j)_s = 1 if s = s^0, and 0 otherwise;
         (x^{l+1})^(i) = (x^l)^(i) for all i != j
5:    update l := l + 1
6: end while
7: output x_hat = x^l
```
**Algorithm 1** Obtain the global minimizer of (3) by that of (4)
**Remark 1**.: _For any approximate solution of the relaxation problem (4), we can also use **Algorithm 1** to obtain a feasible point of the original problem (3)._
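A minimal Python sketch of **Algorithm** 1 follows (the names are illustrative). The block rule also covers the approximate case of **Remark** 1, where a block may hold a single fractional entry:

```python
import numpy as np

def round_blockwise(x, offs, rule="first"):
    """Project a solution of the relaxation (4) onto the feasible set of (3)
    by keeping a single positive entry per block, as in Algorithm 1.
    offs[i]:offs[i+1] delimits block i. rule="first" mimics the pseudocode;
    rule="max" is a natural variant for approximate solutions (Remark 1)."""
    x_hat = x.copy()
    for lo, hi in zip(offs[:-1], offs[1:]):
        block = x_hat[lo:hi]                 # a view into x_hat
        if np.count_nonzero(block) > 1 or not np.any(block == 1.0):
            idx = np.flatnonzero(block > 0)
            s0 = idx[0] if rule == "first" else int(np.argmax(block))
            block[:] = 0.0
            block[s0] = 1.0
    return x_hat

# A relaxed point with its mass split inside block 1 (blocks of sizes 2 and 3):
offs = [0, 2, 5]
x = np.array([1.0, 0.0, 0.4, 0.0, 0.6])
print(round_blockwise(x, offs))          # -> [1. 0. 1. 0. 0.]
print(round_blockwise(x, offs, "max"))   # -> [1. 0. 0. 0. 1.]
```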
### The Numerical Algorithm for Problem (4)
Next, we design a numerical method for the relaxation problem (4). It should be noticed that the aim of solving (4) is to identify the locations of the nonzero entries of its global minimizer, rather than their exact values. This is because once the locations of the nonzero entries are identified, we can apply **Algorithm** 1 to obtain a global optimal solution of (3). From this point of view, keeping the equality constraints in (4) may not be necessary. Therefore, we apply the quadratic penalty method to solve (4). That is, we penalize the equality constraints in the objective, and solve the following quadratic penalty subproblem:

\[\min_{x\in\mathbb{R}^{m}}f(x)+\frac{\sigma}{2}\sum_{i=1}^{n}\left(\sum_{r\in I^{(i)}}x_{r}^{(i)}-1\right)^{2}\ \ \text{s.t.}\ x\geqslant 0, \tag{6}\]
where \(\sigma\) is a penalty parameter.
Based on the above, we can get **Algorithm** 2 as follows, which obtains the optimal solution of the relaxation problem (4) numerically, and then transforms it into the optimal solution of the original problem (3).
```
Input:  x^0 >= 0, sigma_0 > 0, rho > 1
Output: an optimal solution x_hat in R^m to the original problem (3)
1: initialization: choose x^0 in R^m_+, k := 0
2: while the termination condition is not met do
3:    starting from x^k, solve problem (6) with sigma := sigma_k to get x^{k+1}
4:    update sigma_{k+1} := rho * sigma_k, k := k + 1
5: end while
6: transform x^k into x_hat by Algorithm 1
7: output x_hat
```
**Algorithm 2** Quadratic Penalty Method for (3)
In **Algorithm** 2, we use the projected Newton method [3] to solve the subproblem (6).
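The following is a dependency-free Python sketch of **Algorithm** 2 (illustrative, not our MATLAB code). For brevity, the projected Newton subproblem solver is replaced here by a projected gradient step with a conservative step size \(1/L\), where \(L\) bounds the Lipschitz constant of the gradient of the penalized objective:

```python
import numpy as np

def aqppg_sketch(a, B, offs, sigma0=1.0, rho=2.0, outer=20, inner=300):
    """Minimize the penalized objective (6) over x >= 0 for an increasing
    penalty parameter sigma; a simplified stand-in for Algorithm 2."""
    offs = np.asarray(offs)
    m, n = offs[-1], len(offs) - 1
    S = np.zeros((n, m))                       # S @ x gives the n block sums
    for i in range(n):
        S[i, offs[i]:offs[i + 1]] = 1.0
    x = np.full(m, 1.0 / m)                    # any x^0 >= 0
    sigma = sigma0
    for _ in range(outer):
        # Hessian of (6) is B + sigma * S^T S; bound its spectral norm.
        L = np.linalg.norm(B, 2) + sigma * np.max(np.diff(offs))
        for _ in range(inner):
            grad = a + B @ x + sigma * (S.T @ (S @ x - 1.0))
            x = np.maximum(x - grad / L, 0.0)  # projection onto x >= 0
        sigma *= rho
    return x
```

In a full run, the returned point would then be rounded block-wise as in **Algorithm** 1.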
The following theorem addresses the convergence of the quadratic penalty method, which can be found in classic optimization books such as [19] (Theorem 17.1) and [38] (Corollary 10.2.6). Therefore, the proof is omitted.
**Theorem 2**.: _Let \(\left\{x^{k}\right\}\) be generated by **Algorithm** 2, and \(\lim_{k\rightarrow\infty}\sigma_{k}=+\infty\). If each \(x^{k+1}\) is a global minimizer of (6), then any accumulation point of the generated sequence \(\left\{x^{k}\right\}\) is a global optimal solution of the relaxation problem (4)._
Due to **Theorem** 2, we always assume the following holds.
**Assumption 1**.: _Let \(\left\{x^{k}\right\}\) be generated by **Algorithm 2**, and \(\lim_{k\rightarrow\infty}\sigma_{k}=+\infty\). Denote \(K\) as a subset of \(\left\{1,2,...\right\}\). Assume that \(\lim_{k\rightarrow\infty,k\in K}x^{k}=z\), and \(z\) is a global optimal solution of the relaxation problem (4)._
The following theorems further analyse the convergence of **Algorithm** 2. We define:
\[\Gamma^{k} =\left\{l:(x^{k})_{l}>0\right\},\ \ \Gamma(z)=\left\{l:z_{l}>0 \right\},\] \[J_{i}^{k} =\operatorname*{arg\,max}_{r}\left\{(x^{k})_{r}^{(i)}\right\}, \ \ J_{i}(z)=\operatorname*{arg\,max}_{r}\left\{z_{r}^{(i)}\right\},\] \[J^{k} =\left\{J_{i}^{k},i\in[n]\right\},\ \ J(z)=\left\{J_{i}(z),i\in[n] \right\},\]
**Theorem 3**.: _Suppose that **Assumption 1** holds._
* _If_ \(\|z\|_{0}=n\)_, then there exists an integer_ \(k_{0}>0\)_, such that_ \(J^{k}=\Gamma(z)\)_,_ \(\forall k\geqslant k_{0}\)_,_ \(k\in K\)_;_
* _If_ \(\|z\|_{0}>n\)_, and_ \(|J_{i}(z)|=1\)_,_ \(\forall i\in[n]\)_, then there exist an integer_ \(k_{0}>0\) _and an optimal solution_ \(x^{*}\) _to the original problem (_3_), such that_ \(J^{k}=\Gamma^{*}\)_,_ \(\forall k\geqslant k_{0}\)_,_ \(k\in K\)_;_
* _If_ \(\|z\|_{0}>n\)_, and_ \(|J_{i}(z)|>1\) _for at least one_ \(i\in[n]\)_, then there exist a subsequence_ \(\left\{x^{k}\right\}\)_,_ \(k\in K^{\prime}\subseteq K\)_, an integer_ \(k_{0}>0\)_, and an optimal solution_ \(x^{*}\) _to the original problem (_3_), such that_ \(J^{k}=\Gamma^{*}\)_,_ \(\forall k\geqslant k_{0}\)_,_ \(k\in K^{\prime}\)_._
Proof.: The proof is similar to that of Theorem 4 in [9].
**Theorem** 3 ensures that there is always a subsequence of \(\left\{x^{k}\right\}\) generated by **Algorithm** 2 whose block-wise argmax set \(J^{k}\) eventually coincides with the support set of one global minimizer of (3).
**Theorem 4**.: _Suppose that **Assumption 1** holds. If there exists a positive integer \(k_{0}\), such that \(\|x^{k}\|_{0}=n\), \(\forall k\geqslant k_{0}\), \(k\in K\), then there is a positive integer \(k_{1}\geqslant k_{0}\) such that \(\Gamma^{k}=\Gamma(z)\), \(\forall k\geqslant k_{1}\), \(k\in K\), where \(z\) is an optimal solution of (3)._
Proof.: The proof is similar to that of Theorem 3 in [9].
**Theorem** 4 gives a special case in which **Algorithm** 2 converges, and it indicates that we do not need to drive \(\sigma_{k}\) to infinity, since only the support set of \(z\) is needed. In practice, if the conditions in **Theorem** 4 hold, we can stop the algorithm when the elements in \(J^{k}\) remain unchanged for several iterations. Consequently, the above theorems provide a method to design the termination rule for **Algorithm** 2.
## 4 Numerical Results
The proposed **Algorithm** 2 is termed AQPPG, which is the abbreviation of Assignment Quadratic Penalty Projected Gradient method. We implement the algorithm in MATLAB (R2018a). All the experiments are performed on a Lenovo desktop with an AMD Ryzen7 4800H CPU at 2.90 GHz and 16 GB of memory running Windows 10. We use the data as in [1], which can be downloaded from [https://genoweb.toulouse.inra.fr/~tschiex/CPD-AIJ/](https://genoweb.toulouse.inra.fr/~tschiex/CPD-AIJ/). 1
Footnote 1: To convert the floating point energies of a given instance to non-negative integer costs, David Allouche et al. [1] subtracted the minimum energy to all energies and then multiplied energies by an integer constant \(M\) and rounded to the nearest integer. Therefore, all the energies in the data sets are non-negative integers.
### Pre-processing
At present, there are many pre-processing methods for the CPD problem, such as the dead-end elimination [10] and its various extensions [1; 24; 32]. However, in this paper we use a much simpler one; the numerical results still verify its effectiveness.
Under real circumstances in the CPD problem, for each position, there are certain rotamers that lead to steric clashes, which means that these rotamers cannot be chosen in the corresponding position. We denote the set of all such rotamers as \(M_{i}\) for position \(i\), \(i\in[n]\). In practice, these rotamers are associated with huge energies equal to the upper bound \(U\) (the forbidden cost), i.e., \(a_{r}^{(i)}=U\), \(r\in M_{i}\), \(i\in[n]\). For each dataset, the upper bound \(U\) is set to the sum, over all cost functions, of the maximum energies (excluding forbidden steric clashes) [1]. This would make the vector \(a\) ill-conditioned. However, knowing that the optimal solution will not contain such rotamers, we can expect the corresponding components of the variable \(x\) to be zero. Therefore, we can delete the components of \(x\), \(a\) and \(B\) that correspond to the rotamers leading to steric clashes and get \(x^{\prime}\), \(a^{\prime}\) and \(B^{\prime}\). Then the reduced problem of (3) can be written as:
\[\begin{split}\min_{x^{\prime}\in\mathbb{R}^{m^{\prime}}}& f^{\prime}(x^{\prime})\\ \text{s.t.}&\sum_{r\in I^{(i)}-M_{i}}{x^{\prime}}_{r}^{(i)}=1,\ \ i\in[n],\\ &{x^{\prime}}_{r}^{(i)}\in\left\{0,1\right\},\ \ r\in I^{(i)}-M_{i},\ \ i\in[n],\end{split} \tag{7}\]
where \(m^{\prime}=m-\sum_{i=1}^{n}|M_{i}|\), and \(f^{\prime}(x)\) is defined as:
\[f^{\prime}(x)=\frac{1}{2}x^{T}B^{\prime}x+a^{\prime T}x. \tag{8}\]
Once we get an optimal solution, namely \(\hat{x}\), of this reduced problem (7), we can add zeros to the components of \(\hat{x}\) that have been deleted before. We denote this new variable as \(x^{*}\). Naturally, \(x^{*}\) is an optimal solution of the original CPD problem (3). As a consequence, we can always find the solution of (3) by simply solving the reduced problem (7). Furthermore, it is easy to verify that the properties of (3) we discussed in Section 3 hold for (7) as well. Therefore, in practice we run AQPPG to solve the relaxation problem (4) of the reduced problem (7) and get the optimal solution \(\hat{x}\) of (7). Then, starting from \(\hat{x}\), we recover the optimal solution \(x^{*}\) of the original problem (3).
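A possible implementation of this reduction step in Python (illustrative names): components whose self energy equals the forbidden cost \(U\) are deleted, and the recorded index set allows the zeros to be re-inserted afterwards:

```python
import numpy as np

def remove_clashes(a, B, offs, U):
    """Delete the components corresponding to forbidden rotamers, i.e. those
    with self energy equal to the upper bound U, and return the reduced
    a', B' together with the index map needed to re-insert zeros later."""
    keep = np.flatnonzero(a < U)                 # surviving components
    a_red = a[keep]
    B_red = B[np.ix_(keep, keep)]
    # recompute the block offsets for the reduced variable x'
    sizes = [int(np.sum((keep >= lo) & (keep < hi)))
             for lo, hi in zip(offs[:-1], offs[1:])]
    offs_red = np.concatenate(([0], np.cumsum(sizes)))
    return a_red, B_red, offs_red, keep

def lift(x_red, keep, m):
    """Re-insert zeros at the deleted positions (turn x' into x*)."""
    x = np.zeros(m)
    x[keep] = x_red
    return x
```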
\begin{table}
\begin{tabular}{c c c c c} \hline \multirow{2}{*}{Data} & Position & Rotamer & Component & Component \\ & \(n\) & \(l_{i}\) & \(m\) & \(m^{\prime}\) \\ \hline
1HZ5 & 12 & (49, 49) & 588 & 427 \\
1PGB & 11 & (49, 49) & 539 & 438 \\
2PCY & 18 & (48, 48) & 864 & 598 \\
1CSK & 30 & (3, 49) & 616 & 508 \\
1CTF & 39 & (3, 56) & 1204 & 1012 \\
1FNA & 38 & (3, 48) & 990 & 887 \\
1PGB(2) & 11 & (198, 198) & 2178 & 1803 \\
1UBI & 13 & (49, 49) & 637 & 498 \\
2TRX & 11 & (48, 48) & 528 & 410 \\
1UBI(2) & 13 & (198, 198) & 2574 & 2147 \\
2DHC & 14 & (198, 198) & 2772 & 2225 \\
1PIN & 28 & (198, 198) & 5544 & 5010 \\
1C9O & 55 & (198, 198) & 10890 & 9823 \\
1C9O(2) & 43 & (3, 182) & 1950 & 1859 \\
1CSE & 97 & (3, 183) & 1355 & 1098 \\
1CSP & 30 & (3, 182) & 1114 & 1026 \\
1DKT & 46 & (3, 190) & 2243 & 2008 \\
1BK2 & 24 & (3, 182) & 1294 & 1089 \\
1BRS & 44 & (3, 194) & 3741 & 3094 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters of all the data
**Table 1** - continued from previous page
\begin{tabular}{c c c c c} \hline \hline Data & \begin{tabular}{c} Position \\ \(n\) \\ \end{tabular} & \begin{tabular}{c} Rotamer \\ \(l_{i}\) \\ \end{tabular} & \begin{tabular}{c} Component \\ \(m\) \\ \end{tabular} &
\begin{tabular}{c} Component \\ \(m^{\prime}\) \\ \end{tabular} \\ \hline
1CM1 & 17 & (198, 198) & 3366 & 2242 \\
1SHG & 28 & (3, 182) & 737 & 613 \\
1MJC & 28 & (3, 182) & 493 & 440 \\
1SHF & 30 & (3, 56) & 638 & 527 \\
1FYN & 23 & (3, 186) & 2474 & 2110 \\
1NXB & 34 & (3, 56) & 800 & 625 \\
1TEN & 39 & (3, 66) & 808 & 674 \\
1POH & 46 & (3, 182) & 943 & 769 \\
1CDL & 40 & (3, 186) & 4141 & 3211 \\
1HZ5(2) & 12 & (198, 198) & 2376 & 1738 \\
2DRI & 37 & (3, 186) & 2120 & 1869 \\
2PCY(2) & 46 & (3, 56) & 1057 & 855 \\
2TRX(2) & 61 & (3, 186) & 1589 & 1499 \\
1CM1(2) & 42 & (3, 186) & 3633 & 2944 \\
1LZ1 & 59 & (3, 57) & 1467 & 1202 \\
1GVP & 52 & (3, 182) & 3826 & 3433 \\
1R1S & 56 & (3, 182) & 3276 & 2873 \\
2RN2 & 69 & (3, 66) & 1667 & 1224 \\
1HNG & 85 & (3, 182) & 2341 & 2085 \\
3CHY & 74 & (3, 66) & 2010 & 1665 \\
1L63 & 83 & (3, 182) & 2392 & 2031 \\ \hline \hline \end{tabular}
**Table 1** shows the information of all data sets tested. In **Table 1**, _Position_ (\(n\)) represents the number of positions in the target protein, which is also the number of blocks in the decision variables corresponding to both (3) and (7). _Rotamer_ (\(l_{i}\)) shows how many rotamers one position can contain at least and at most, in the form of (min \(l_{i}\), max \(l_{i}\)). _Component_ (\(m\)) shows the dimension of the decision variable before preprocessing, i.e., the dimension of \(x\) in (3). _Component_ (\(m^{\prime}\)) shows the dimension of the decision variable after preprocessing, i.e., the dimension of \(\hat{x}\) in (7).
### An example as illustration
We first demonstrate the performance of AQPPG on the data set 1SHF. The target protein in 1SHF contains 30 positions, which means that the decision variables have 30 blocks, i.e., \(n=30\). Each position contains at least 3, and at most
56 kinds of rotamers to be selected, which means that each block in the decision variable has 3 to 56 components, i.e., \(3\leqslant l_{i}\leqslant 56\), \(i\in[n]\). Exact numbers of rotamers that can be selected for each position are shown in **Figure** 2. The original decision variable, i.e., \(x\) in (3), is a 638-dimensional vector, and it reduces to a 527-dimensional vector after preprocessing, i.e., \(\hat{x}\in\mathbb{R}^{527}\) in (7).
The protein energy, i.e., the optimization goal, reaches the minimum value 1101835 when specific rotamers are selected for the corresponding positions, as shown in **Table** 2. **Figures** 3-5 show more details of the iteration process.
As shown in **Figure** 3 and **Figure** 4, the number of nonzero components in the decision variable \(x^{k}\) decreased gradually during the iteration process, and finally reached 38, which means that the decision variable \(x^{k}\) was close to the feasible region of (7). **Figure** 5 shows the function value during the iterations. In the first 50 iterations, the function value dropped dramatically from the initial value, which was over \(2\times 10^{10}\). When \(k\) was about 120, 230, 310, 410 and 490, there were several fluctuations of the function value, meaning that the algorithm was searching for better stationary points. After that, the function value decreased rapidly and gradually stabilized at 1000398, which was the optimal value of the relaxation problem (4). However, note that the decision variable \(x^{k}\) is not a feasible point of the original problem (3); we need to first transform \(x^{k}\) into \(\hat{x}\) using **Algorithm** 1, so that it becomes an optimal solution of the reduced problem (7). Then we add zeros to the components of \(\hat{x}\) that were removed in the preprocessing and get \(x^{*}\), which is an optimal solution of the original CPD problem (3). Selected rotamers are shown in **Table** 2, and the corresponding optimal value for (3) is 1101835.

Figure 4: The number of nonzero components in \(x^{k}\) for 1SHF
To demonstrate the benefits that our pre-processing brings, below we show the results given by the unpreprocessed version of our algorithm for 1SHF, that is, we run AQPPG to directly solve the relaxation (4) of the original CPD problem (3), and recover the optimal solution of (3) by that of (4).
UAQPPG stands for the unpreprocessed version of our algorithm AQPPG. \(Difference\) shows the difference between the given result and that of AQPPG.

\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Objective & Difference & Time & Iteration \\ \hline UAQPPG & 1102348 & +0.05\% & 45 & 10926 \\ \hline AQPPG & 1101835 & / & 1 & 795 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison between UAQPPG and AQPPG for 1SHF

Figure 5: Function values during the iterations for 1SHF

From **Table** 3, it is clear that the pre-processing not only improves the efficiency of the algorithm, but also improves the quality of the solution. This is because the ill-conditioning of the vector \(a\) leads to a huge increase in the initial value of the objective. In other words, \(f(x)\) in (6) becomes too large in the early iterations, which breaks the balance between the two terms in (6) and makes the penalty method less efficient.
### Comparison with the state-of-the-art branch-and-cut (or QSAP) solver
We compare AQPPG with Gurobi (version 9.5.2), one of the state-of-the-art branch-and-cut solvers.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Data & \multicolumn{2}{c}{Objective} & Ratio & \multicolumn{2}{c}{Time} \\ & AQPPG & Gurobi & Gurobi & AQPPG & Gurobi \\ \hline
1HZ5 & 150714 & 150714 & 100.00\% & 1 & 4:32:07 \\
1PGB & 125306 & 125306 & 100.00\% & 1 & 5:13:59 \\
2PCY & 308545 & 307667 & 99.72\% & 2 & 9:58:45 \\
1CSK & 1125971 & *1125798 & 99.98\% & 7 & 10:00:00 \\
1CTF & 1882883 & *1881874 & 99.95\% & 10 & 10:00:00 \\
1FNA & 3751671 & *3750260 & 99.96\% & 5 & 10:00:00 \\
1PGB(2) & 287413 & *286468 & 99.67\% & 16 & 10:00:00 \\
1UBI & 159700 & 159522 & 99.89\% & 1 & 5:32:53 \\
2TRX & 178900 & 178534 & 99.80\% & 2 & 4:34:26 \\
1UBI(2) & 382033 & *381180 & 99.78\% & 33 & 10:00:00 \\
2DHC & 1424025 & *1422718 & 99.91\% & 23 & 10:00:00 \\
1PIN & 1996834 & *1995099 & 99.91\% & 2:35 & 10:00:00 \\
1C9O & 8084802 & - & - & 3:58 & - \\
1C9O(2) & 4975017 & *4959931 & 99.70\% & 42 & 10:00:00 \\
1CSE & 18602843 & *18602292 & 100.00\% & 27 & 10:00:00 \\
1CSP & 2521159 & *2520706 & 99.98\% & 14 & 10:00:00 \\
1DKT & 4214282 & *4192707 & 99.49\% & 8:35 & 10:00:00 \\
1BK2 & 1140948 & *1133737 & 99.37\% & 6 & 10:00:00 \\
1BRS & 4017422 & *4007755 & 99.76\% & 2:20 & 10:00:00 \\
1CM1 & 746221 & *743645 & 99.66\% & 41 & 10:00:00 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results given by AQPPG and Gurobi
**Table 4** - continued from previous page
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Data} & \multicolumn{2}{c}{Objective} & Ratio & Time \\ & AQPPG & Gurobi & Gurobi & AQPPG & Gurobi \\ \hline
1SHG & 1513349 & 1513151 & 99.99\% & 1 & 5:27:03 \\
1MJC & 1514481 & - & - & 2 & - \\
1SHF & 1101835 & 1101033 & 99.93\% & 1 & 7:18:47 \\
1FYN & 1194046 & *1183722 & 99.14\% & 58 & 10:00:00 \\
1NXB & 2979543 & *2971624 & 99.73\% & 2 & 10:00:00 \\
1TEN & 1962500 & *1959862 & 99.87\% & 2 & 10:00:00 \\
1POH & 4035139 & 4033915 & 99.97\% & 6 & 8:04:23 \\
1CDL & 3594181 & 3590578 & 99.90\% & 6:47 & 2:45:33 \\
1HZ5(2) & 343021 & *343113 & 100.03\% & 9 & 10:00:00 \\
2DRI & 2908142 & *2905276 & 99.90\% & 1:07:08 & 10:00:00 \\
2PCY(2) & 2937638 & *2935820 & 99.94\% & 5 & 10:00:00 \\
2TRX(2) & 7020438 & *7016199 & 99.93\% & 20 & 10:00:00 \\
1CM1(2) & 3904719 & *3895736 & 99.77\% & 2:46 & 10:00:00 \\
1LZ1 & 7038826 & *7022768 & 99.77\% & 6 & 10:00:00 \\
1GVP & 5205320 & *5196913 & 99.84\% & 1:25 & 10:00:00 \\
1R1S & 6174155 & *6171802 & 99.96\% & 11:00 & 10:00:00 \\
2RN2 & 8918311 & *8910166 & 99.91\% & 55 & 10:00:00 \\
1HNG & 13543984 & *13532638 & 99.91\% & 2:38 & 10:00:00 \\
3CHY & 10466158 & *10461537 & 99.96\% & 13 & 10:00:00 \\
1L63 & 13015089 & *12891316 & 99.05\% & 32 & 10:00:00 \\ \hline \hline \end{tabular}
**Table 4** shows the results given by AQPPG and Gurobi. _Objective_ represents the optimal values of the objective function given by the different methods. Results marked with * mean that the corresponding solver did not terminate within 10 hours, and the objective is the best value the solver could give within 10 hours. The symbol - means that the solver failed to solve the problem due to lack of memory. _Ratio_ represents the ratio of the optimal values compared to those given by AQPPG. _Time_ shows the CPU time to reach the optimal values in the form \(hh:mm:ss\).
We can see that our algorithm AQPPG can effectively solve CPD problems. Gaps between the solutions given by Gurobi and AQPPG range from -0.95% to +0.03%. However, compared with Gurobi, the proposed AQPPG is much more efficient. In most cases, AQPPG outperforms Gurobi by three orders of magnitude in CPU time, and the CPU time for Gurobi to reach the optimal solution exceeds 10 hours in nearly all the cases. Specifically, Gurobi even fails to find feasible points in certain cases such as 1C9O and 1MJC, while AQPPG could still
terminate in a short period of time.
Based on the above results, we can conclude that our proposed AQPPG can effectively find a high-quality solution within a reasonable amount of time.
## 5 Conclusion
In this paper, we proposed an efficient algorithm called AQPPG for solving the CPD problem. Using the fact that the objective of CPD problem relies linearly on each block of the decision variable, we proved that any optimal solution to the relaxation problem (4) can be transformed into an optimal solution to the original problem (3). Then we proposed AQPPG, a quadratic penalty method applied to solve the proposed relaxation problem. Our simulation results show that our proposed algorithm can effectively find a high-quality solution for the CPD problem, and is much more efficient than the state-of-the-art branch-and-cut solver Gurobi.
|
2303.02605 | Mapper Side Geometric Shaping for QAM Constellations in 5G MIMO Wireless
Channel with Realistic LDPC Codes | In wireless communication systems, there are many stages for signal
transmission. Among them, mapping and demapping convert a sequence of bits into
a sequence of complex numbers and vice versa. This operation is performed by a
system of constellations -- by a set of labeled points on the complex plane.
Usually, the geometry of the constellation is fixed, and constellation points
are uniformly spaced, e.g., the same quadrature amplitude modulation (QAM) is
used in a wide range of signal-to-noise ratio (SNR). By eliminating the
uniformity of constellations, it is possible to achieve greater values of
capacity. Due to the current standard restrictions, it is difficult to change
the constellation both on the mapper or demapper side. In this case, one can
optimize the constellation only on the mapper or the demapper side using an
original methodology. By numerically calculating the capacity, we show that
the optimal geometric constellation depends on SNR. Optimization is carried out
by maximizing mutual information (MI). The MI function describes the amount of
information being transmitted through the channel with the optimal encoding. To
prove the effectiveness of this approach we provide numerical experiments in
the modern physical level Sionna simulator using the realistic LDPC codes and
the MIMO 5G OFDM channels. | Daniil Yudakov, Dmitrii Kolosov, Evgeny Bobrov | 2023-03-05T08:18:22Z | http://arxiv.org/abs/2303.02605v1 | Mapper Side Geometric Shaping for QAM Constellations in 5G MIMO Wireless Channel with Realistic LDPC Codes
###### Abstract
In wireless communication systems, there are many stages for signal transmission. Among them, mapping and demapping convert a sequence of bits into a sequence of complex numbers and vice versa. This operation is performed by a system of constellations -- by a set of labeled points on the complex plane. Usually, the geometry of the constellation is fixed, and constellation points are uniformly spaced, e.g., the same quadrature amplitude modulation (QAM) is used in a wide range of signal-to-noise ratio (SNR). By eliminating the uniformity of constellations, it is possible to achieve greater values of capacity. Due to the current standard restrictions, it is difficult to change the constellation both on the mapper or demapper side. In this case, one can optimize the constellation only on the mapper or the demapper side using an original methodology. By numerically calculating the capacity, we show that the optimal geometric constellation depends on SNR. Optimization is carried out by maximizing mutual information (MI). The MI function describes the amount of information being transmitted through the channel with the optimal encoding. To prove the effectiveness of this approach we provide numerical experiments in the modern physical level Sionna simulator using realistic LDPC codes and MIMO 5G OFDM channels.
Keywords:Wireless MCS Constellation Geometric Shaping BLER SE BICM Mutual Information QAM
## 1 Introduction
The performance of communication systems is greatly influenced by the choice of constellation. Both the coordinates and the probabilities of points can be optimized when designing constellations. These approaches are known as geometric and probabilistic shaping, respectively. The geometry of the constellation is usually fixed, e.g. quadrature amplitude modulation (QAM) is used. Probabilistic shaping can still improve the achievable information rate in such cases. An approach showing how autoencoders can be used for probabilistic constellation shaping is presented in [12].
In optical and wireless communications, several shaping schemes have been proposed recently to match the capacity-achieving input distribution, thereby increasing the shaping gain. In [6], probabilistic and geometric shaping schemes for wireless backhaul channels are evaluated based on their frame error rate performance, using soft and hard decision decoding with WiMAX and DVB-S2 LDPC codes. Both probabilistic and geometric shaping techniques show significant gains over uniformly distributed symbol transmission for soft decision decoding. However, probabilistic shaping is more challenging because it requires optimizing discrete distributions.
In this paper, we will concentrate on geometric shaping. This approach is well studied. For example, a geometrically shaped 256-ary constellation in [4] achieves SNR gains of up to 1.18 dB compared to the standard QAM constellations [5]. In [9], lattice-based geometrically shaped modulation formats in multidimensional Euclidean space are proposed, and fast, low-complexity modulation and demodulation algorithms are described. Recent machine learning techniques have also been proposed for geometric shaping [10]. In [8], an autoencoder is used to obtain a geometrically shaped constellation.
All these papers suppose that we can change both the mapper and the demapper on the transmitter and receiver side. This cannot be realised in existing standards for wireless communication systems. Signal transmission in wireless networks is carried out from the base station to the user equipment (UE). Most UEs work according to the standards already set, for example, 3GPP TS 38.211 V15.4.0 [3]. Changing both the mapping (uplink) and the demapping (downlink) for UEs is difficult, because we would need to change the standards themselves.
In this paper, we consider the case when the base station transmits a modified signal, where a sequence of bits is converted into a sequence of complex numbers using a non-uniform constellation (NUC), and the user receives this signal and converts it to a sequence of LLRs using the standard QAM constellation. This process can be implemented using present technology and does not require any 3GPP standard change. We consider realistic Low-Density Parity-Check (LDPC) codes and the 5G MIMO OFDM system, providing numerical experiments in the physical-level Sionna communication system simulator [7].
## 2 System model
We consider a realistic point-to-point transmission between a UE and a base station. The system we will configure is shown in Fig. 1. The main difference from the usual BICM is the presence of geometric shaping, which changes the mapping procedure.
On the base station side, in the considered channel model, uniformly distributed parity check bits are added to the sequence of bits (binary source) using the LDPC encoder. The bit sequence is then converted into a sequence of complex values using a Mapper. This sequence passes through the AWGN channel.
On the UE side, the LMMSE equalizer reduces the inter-symbol (IS) interference from the received signal (complex values). The demapper converts
complex values to LLRs and the decoder finally converts the LLRs to a bit sequence. The quality of the received signal can be described by the mutual information function.
Our aim is to optimize the modulating part (i.e. the mapper) of this scheme, so that the channel model is as follows
\[x\xrightarrow{Mapper}s\xrightarrow{Channel}Hs+n\xrightarrow{Equalizer}G(Hs+n)= \tilde{s}\xrightarrow{Demapper}y\xrightarrow{Decoder}\tilde{x}\]
where \(x\) and \(\tilde{x}\) are the transmitted and received bit sequences, \(s\) and \(\tilde{s}\) are the transmitted and received complex signal values, \(H\) and \(G\) are the predefined Channel and LMMSE Equalizer complex matrices (usually \(GH\sim I\), so the signal does not change with zero noise), \(n\) is complex random noise (e.g. \(\mathcal{N}(0,\sigma^{2})\)), and \(y\) is a sequence of real values obtained by the Log Likelihood Ratio (LLR) function:
\[y_{i}=llr_{i}(\tilde{s})=\log\frac{P(b_{i}(x)=0|\tilde{s})}{P(b_{i}(x)=1|\tilde {s})}=\log\left(\frac{\sum_{c\in\mathcal{C}_{i,0}}\exp\left(-\frac{1}{N_{0}}| \tilde{s}-c|^{2}\right)}{\sum_{c\in\mathcal{C}_{i,1}}\exp\left(-\frac{1}{N_{0}} |\tilde{s}-c|^{2}\right)}\right)\]
where \(\mathcal{C}_{i,0}\) and \(\mathcal{C}_{i,1}\) are the non-intersecting sets of constellation points whose labels have the \(i\)-th bit equal to \(0\) and \(1\), respectively.
Knowledge of \(y=llr(\tilde{s})\) is equivalent to knowledge of \(t=P(b(x)=0|\tilde{s})\) and \(1-t=P(b(x)=1|\tilde{s})\) since \(t=\frac{2^{y}}{2^{y}+1}\).
Note that in case of fixed demapper LLR function depends on QAM receiver constellation. We have \(y_{i}=llr_{i}(\tilde{s},QAM)\). The LLRs can be tabulated once for any new constellation.
### Mutual Information as a Loss Function
For an arbitrary channel \(X\to Y\) and the input distribution \(P_{X}(x)\) we consider the ensemble \((X,Y)\) with the distribution \(P_{(X,Y)}(x,y)\). The correct information measure in this setting is Mutual Information (MI) \(I(X;Y)\):
\[I(X;Y):=H(X)-H(X|Y)=\mathbb{E}_{(X,Y)}\left(\log\frac{P_{(X,Y)}(x,y)}{P_{X}(x)P_{Y}(y)}\right), \tag{1}\]

where \(H(X)\) is an entropy and \(H(X|Y)\) is a conditional entropy.

Figure 1: Data transmission workflow with learned mapper and fixed demapper.
The expectation can be computed as a sum or an integral, depending on whether the distribution is discrete or continuous. According to Shannon's theorem [11], \(I(X;Y)\) is a strict upper bound on channel throughput using optimum codes. Modern LDPC codes from the standard are sub-optimal, with a predictable deficit.
The Theorem proved by [2] shows that (1) can be calculated as:
\[I(X^{bit};Y^{LLR})=\sum_{j=1}^{m}I(B_{j};Y)=\sum_{j=1}^{m}\left(H(B_{j})-H(B_{j }|Y)\right), \tag{2}\]
where \(m\) is the number of bits transmitted by a given constellation, \(B_{j}\) is a discrete random variable equal to the \(j\)-th bit, \(Y\) is a continuous random variable equal to a received point on a complex plane. As long as the transmitter and receiver use the same constellation, this formula is correct for any constellation and any type of noise. This means that we can use an error-correcting code for the whole set of bits without losing any information.
### Cross-entropy for Neural Networks
To calculate \(I(X;Y)\) (2) we need to calculate each term of sum:
\[I(B_{j};Y)=H(B_{j})-H(B_{j}|Y) \tag{3}\]
The first term in (3) can be easily computed due to the fact of equal probability of zero and one in bit sequence:
\[H(B_{j})=-\frac{1}{2}\log\frac{1}{2}-\frac{1}{2}\log\frac{1}{2}=1\]
Now we need to calculate the second term in (3): \(H(B_{j}|Y)\). For that, let the variable \(t=t(\tilde{s})\) be a probability that the transmitted bit is zero, \(b_{j}(x)=0\), with a condition of received complex value \(\tilde{s}\): \(t=t(\tilde{s})=P(b_{j}(x)=0|\tilde{s})\). Then the probability that the one is transmitted \(P(b_{j}(x)=1|\tilde{s})=1-t\) and
\[\begin{split} H(B_{j}|Y)&=-\int P(y,b_{j}(x)=0)\log P(b_{j}(x)=0|y)\,dy-\int P(y,b_{j}(x)=1)\log P(b_{j}(x)=1|y)\,dy\\ &=-\int P(y)\Big(P(b_{j}(x)=0|y)\log P(b_{j}(x)=0|y)+P(b_{j}(x)=1|y)\log P(b_{j}(x)=1|y)\Big)dy\\ &=-\int P(y)\big(t\log t+(1-t)\log(1-t)\big)dy=\int P_{Y}(y)H_{2}(t)\,dy,\end{split}\]

where \(H_{2}(t)=t\log\frac{1}{t}+(1-t)\log\frac{1}{1-t}\) is known as the _binary cross-entropy_.
The function \(H(B_{j}|Y)\) is used in the current optimization procedure.
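Combining (2)-(3), \(I(X;Y)\) can be estimated by Monte Carlo: draw uniform symbols, add complex Gaussian noise, convert the LLRs into posteriors \(t=P(b_{j}=0|y)\), and average the binary entropy \(H_{2}(t)\). A sketch reusing the illustrative llrs function from the previous snippet (natural-log LLRs assumed):

```python
import numpy as np

def mi_estimate(points, labels, n0, n_sym=200_000, seed=0):
    """Monte-Carlo estimate of I(X;Y) in bits via (2)-(3)."""
    rng = np.random.default_rng(seed)
    k = rng.integers(len(points), size=n_sym)
    noise = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) * np.sqrt(n0 / 2)
    L = llrs(points[k] + noise, points, labels, n0)   # per-bit LLRs
    t = 1.0 / (1.0 + np.exp(-L))                      # P(b_j = 0 | y)
    eps = 1e-12
    h2 = -(t * np.log2(t + eps) + (1 - t) * np.log2(1 - t + eps))
    return labels.shape[1] - h2.mean(axis=0).sum()    # sum_j (1 - H(B_j|Y))
```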
### Mapper Side Constellation Optimization Features
Our approach involves changing the constellation on the mapper side only. This entails a change to the MI function. The mapper constellation is responsible for the distribution \(P_{Y}(y)\). On the other hand, the fixed demapper QAM constellation is responsible for the function \(H_{2}(t)\).
```
Input:  SNR, Coderate, QAM_Constellation
Output: Mapper_Constellation
Mapper_Constellation   <- QAM_Constellation
Demapper_Constellation <- QAM_Constellation
for t = 1 : T do                                  (T is the number of Adam epochs)
    bits <- BinarySource()                        (random bit sequence generation)
    s    <- Mapper(bits, Mapper_Constellation)
    s~   <- AWGN_Channel(s, SNR)
    llrs <- Demapper(s~, Demapper_Constellation)
    loss <- BinaryCrossentropy(bits, llrs)
    Mapper_Constellation <- AdamUpdate(Mapper_Constellation, loss)
end for
return Mapper_Constellation
```
**Algorithm 1** An Adam algorithm for mapper constellation optimization
We can explain these facts as follows: \(y\) is the signal received by the user after passing through the channel. The distribution \(P_{Y}(y)\) describes the probability of detecting a given signal in a certain area. For the AWGN channel, this distribution is represented as noise clouds around the constellation points. Thus, if we move the mapper constellation points, the distribution \(P_{Y}(y)\) itself changes (see Fig. 2). The movement of one circle leads to an increase in interference with some circles and a decrease with others. The MI function helps us move it properly. On the other hand, the demapper constellation points show how the signal \(y\) is converted to LLRs. As described above, the \(t(y)\) function is responsible for this. It does not change because the demapper constellation does not change. So, to maximize the MI function, we need to minimize \(\int P_{Y}(y)H_{2}(t(y))dy\) with the fixed function \(t(y)\).
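In the paper, this optimization is run with Adam inside the Sionna/TensorFlow pipeline. As a self-contained illustration, the sketch below replaces automatic differentiation with central finite differences and plain gradient descent over the mapper points, while the demapper keeps the fixed QAM grid; it reuses the illustrative llrs function defined above, and every parameter value is an assumption for the example:

```python
import numpy as np

def bce_loss(mapper_pts, demap_pts, labels, n0, n_sym, rng):
    """Loss of Algorithm 1: binary cross-entropy of the fixed QAM demapper
    when the symbols are sent with the (trainable) mapper constellation."""
    k = rng.integers(len(mapper_pts), size=n_sym)
    noise = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) * np.sqrt(n0 / 2)
    L = llrs(mapper_pts[k] + noise, demap_pts, labels, n0)  # LLRs w.r.t. QAM
    t = np.clip(1.0 / (1.0 + np.exp(-L)), 1e-12, 1 - 1e-12)
    b = labels[k]                                           # true bits
    return -np.mean(np.where(b == 0, np.log(t), np.log(1 - t)))

def shape_mapper(demap_pts, labels, n0, steps=100, lr=0.05, n_sym=20_000, seed=0):
    rng = np.random.default_rng(seed)
    pts = demap_pts.astype(complex).copy()          # start from the QAM grid
    h = 1e-3
    for _ in range(steps):
        cr = int(rng.integers(1 << 30))             # common random numbers
        grad = np.zeros_like(pts)
        for c in range(len(pts)):
            for delta in (h, 1j * h):               # real and imaginary parts
                p, q = pts.copy(), pts.copy()
                p[c] += delta; q[c] -= delta
                d = (bce_loss(p, demap_pts, labels, n0, n_sym, np.random.default_rng(cr))
                     - bce_loss(q, demap_pts, labels, n0, n_sym, np.random.default_rng(cr))) / (2 * h)
                grad[c] += d if delta == h else 1j * d
        pts -= lr * grad
        pts /= np.sqrt(np.mean(np.abs(pts) ** 2))   # keep unit average power
    return pts

# Usage (QPSK toy from the earlier snippet):
# shaped = shape_mapper(points, labels, n0=0.5)
```

Renormalizing the points after each step is one simple way to keep the average transmit power fixed during the search.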
## 3 Simulation Results
The tests were performed on the Sionna [7] simulation platform. The first part of the simulations was conducted for a simplified channel. Here we obtained the optimal constellations by MI maximization (see some examples of shaped constellations in Fig. 3). Constellation training was done using the Adam optimizer. For each SNR, the amount of information transmitted in the cases of
uniform and non-uniform constellations was calculated (see Fig. 4). Here we see that the shaped constellations give up to a 1.5% gain in MI for high SNR (Signal-to-Noise Ratio) values.
The resulting constellations were tested in more complex scenarios. First, we considered a scenario where the constellations were combined with an LDPC encoder. The Sionna simulator implements the LDPC codes from standard 3GPP TS 38.211 V17.0.0 (2021-12). The result can be seen in Fig. 5. Here "Baseline QAM" denotes the algorithm with a uniform mapper constellation, and "Mapper GS" denotes the algorithm with a non-uniform mapper constellation that maximizes the MI function.
Spectral efficiency (in bits per unit of time) is an indicator of bandwidth efficiency. In our case, it is defined as
\[SE=(1-BLER)\cdot Coderate\cdot num\_bits\]
where \(BLER\) is the Block Error Rate (the probability of error when transmitting a single LDPC block), \(Coderate\) is the MCS-based encoding rate, and \(num\_bits\) is the number of bits encoded per constellation symbol. For example, for QAM16: \(num\_bits=\log_{2}(16)=4\). The value of \(num\_bits\) also depends on the MCS.
The values of \(Coderate\) and \(num\_bits\) are obtained from the 3GPP standards. The value of \(BLER\) was obtained by simulation.
Compared to the absolute curves of Fig. 5, the graph of the gain
\[SE_{gain}=\frac{SE_{after}}{SE_{before}}-1\]
over the baseline algorithm is more descriptive (Fig. 6).
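As a worked instance of these two formulas (with made-up illustrative BLER values, not the measured ones):

```python
# Worked arithmetic for SE and SE_gain; the BLER values here are illustrative only.
def spectral_efficiency(bler, coderate, num_bits):
    return (1.0 - bler) * coderate * num_bits

se_before = spectral_efficiency(bler=0.10, coderate=0.5, num_bits=4)  # baseline QAM16
se_after = spectral_efficiency(bler=0.06, coderate=0.5, num_bits=4)   # shaped mapper
print(f"SE_gain = {se_after / se_before - 1:.2%}")                    # ~4.44%
```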
Figure 3: Examples of shaped constellation. The blue points represent the base QAM constellation. The orange points represent the shaped constellation.
Figure 4: Mutual Information. The \(x\)-axis represents SNR in dB. The \(y\)-axis represents the percentage of additional information we can transmit to the user, compared to the Shannon limit. The blue line shows the percentage for the base QAM modulation; the orange line shows the percentage for transmission with the shaped constellation.
Figure 5: Spectral Efficiency. The \(x\)-axis represents SNR in dB. The \(y\)-axis represents the amount of information we can transmit to the user per unit period of time. The blue line shows the amount of information for the base QAM modulation; the orange line shows the amount of information for transmission with the shaped constellation.
For this series of experiments we get a gain of up to 4%.
The second series of experiments was conducted taking into account Orthogonal Frequency-Division Multiplexing (OFDM) modulation. The architecture of the system consists of a Forward Error Correction LDPC Code, Bit Interleaver, Resource Grid Mapper, Least-Squares Channel Estimator, Nearest Neighbor Demapper, LMMSE Equalizer, OFDM Modulator and NUC Constellations.
The system uses different 3GPP wireless Non-Line-of-Sight (NLoS) Clustered Delay Line (CDL) channel models: A, B, C; and Line-of-Sight (LoS) CDL channel models: D, E (Figs. 7, 8, 9, 10, 11) [1]. The time models are simulated in the time domain, taking into account inter-symbol (IS) and inter-carrier (IC) interference, while the frequency models are simulated directly in the frequency domain, without IS and IC interference.
Figure 6: Spectral Efficiency Gain. The \(x\)-axis represents SNR in dB. The \(y\)-axis represents the percentage of additional information we can transmit to the user, compared to the base QAM modulation transmission. The orange line shows the gain for transmission with the shaped constellation.
Figure 8: Spectral Efficiency gain for 3GPP wireless NLoS B channel model.
Figure 7: Spectral Efficiency gain for 3GPP wireless NLoS A channel model.
Figure 10: Spectral Efficiency gain for 3GPP wireless LoS D channel model.
Figure 9: Spectral Efficiency gain for 3GPP wireless NLoS C channel model.
For this series of experiments we get a gain of up to 1.75%.
## 4 Conclusion
This paper presents a solution to the problem of constellation shaping under the constraint that the constellation can be changed only on the transmitter side. Constellations optimized on a simple model improve the spectral-efficiency function on more complicated models that take different encoders into account. It has been shown that such an approach can yield a gain of about 4% for simple AWGN channel models with an LDPC code and about 1.5% for OFDM LoS and NLoS models.
|
2305.10831 | Momentum-space Scattering Extremizations | Studies into scatterings of photonic structures have been so far
overwhelmingly focused on their dependencies on the spatial and spectral
morphologies of the incident waves. In contrast, the evolution of scattering
properties through another parameter space of incident directions (momentum
space) has attracted comparably little attention, though of profound importance
for various scattering-related applications. Here we investigate, from the
perspective of quasi-normal modes (QNMs), the momentum-space scattering
extremizations with respect to varying incident directions of plane waves. It
is revealed that for effective single-QNM excitations, scatterings are
maximized exactly along those directions where the QNM radiation reaches its
maximum, with matched incident and radiation polarizations. For an arbitrary
direction, when the incident polarization is tuned to be orthogonal to that of
the mode radiation, the QNM cannot be excited and thus the scatterer becomes
invisible with null scatterings. The principles we have revealed are protected
by fundamental laws of reciprocity and energy conservation (optical theorem),
which can be further expanded and applied for other branches of wave physics. | Chunchao Wen, Jianfa Zhang, Shiqiao Qin, Zhihong Zhu, Wei Liu | 2023-05-18T09:15:34Z | http://arxiv.org/abs/2305.10831v1 | # Momentum-Space Scattering Extremizations
###### Abstract
Studies into scatterings of photonic structures have been so far overwhelmingly focused on their dependencies on the spatial and spectral morphologies of the incident waves. In contrast, the evolution of scattering properties through another parameter space of incident directions (momentum space) has attracted comparably little attention, though of profound importance for various scattering-related applications. Here we investigate, from the perspective of quasi-normal modes (QNMs), the momentum-space scattering extremizations with respect to varying incident directions of plane waves. It is revealed that for effective single-QNM excitations, scatterings are maximized exactly along those directions where the QNM radiation reaches its maximum, with matched incident and radiation polarizations. For an arbitrary direction, when the incident polarization is tuned to be orthogonal to that of the mode radiation, the QNM cannot be excited and thus the scatterer becomes invisible with null scatterings. The principles we have revealed are protected by fundamental laws of reciprocity and energy conservation (optical theorem), which can be further expanded and applied for other branches of wave physics.
## I Introduction
Throughout all disciplines of photonics that involve light-matter interactions, scattering manipulations (enhancement or suppression, _e.g._ cloaking) constitute one of the central themes [1, 2, 3]. To control the interactions, either the incident source or the photonic structure can be engineered to satisfy the demands of various applications. For source engineering, previous studies have focused extensively on frequency tuning and/or structuring of the spatial phase and polarization morphology [4, 5, 6, 7, 8]. Though it is apparent that light-matter interactions depend strongly on the incident direction, systematic and thorough examinations of momentum-space scattering extremizations (maximization or minimization) have not yet been conducted. All possible incident directions constitute a closed momentum sphere, and according to the extreme value theorem [9] there must be at least one direction along which the scattering reaches its maximum or minimum.
Here we study the momentum-space scattering extremizations with respect to varying directions of incident plane waves, from the perspective of QNMs supported by the scatterers. When only one QNM is effectively excited, it is discovered that the maximum mode radiation directions correspond exactly to the momentum-space points where the scattering is maximized, with matched incident and radiation polarizations. Along an arbitrary direction, when the incident and mode radiation polarizations are tuned to be orthogonal, all scatterings are fully eliminated, rendering the scatterer invisible. Our revelations connecting QNM radiations and momentum-space scattering evolutions are secured by the fundamental laws of electromagnetic reciprocity and optical theorem. As a result, the framework we have established can be naturally extended to apply to other branches of wave physics, incubating applications in fields such as acoustics, water waves and microscopic matter waves.
## II Theoretical model incorporating reciprocity and optical theorem
Throughout this work, we confine our studies to the single-QNM regime: the spectral regions where only one QNM can be effectively excited, and other QNMs are either spectrally far apart or simply cannot be excited. The incident sources are plane waves (electric field vector \(\mathbf{E}_{\mathrm{inc}}\), wavevector \(\mathbf{k}_{\mathrm{inc}}\), angular frequency \(\omega\) and vacuum wavelength \(\lambda\)) and the QNM excited is characterized by a complex eigenfrequency \(\tilde{\omega}\) [10]. In the single-QNM regime, all scattering properties are determined by the excitation coefficient \(\alpha(\mathbf{E}_{\mathrm{inc}},\omega,\mathbf{k}_{\mathrm{inc}})\) of the QNM [11]:
\[\mathrm{C}_{\mathrm{sca,ext,abs}}\propto|\alpha(\mathbf{E}_{\mathrm{inc}}, \omega,\mathbf{k}_{\mathrm{inc}})|^{2}, \tag{1}\]
where \(\mathrm{C}_{\mathrm{sca,ext,abs}}\) are respectively scattering, extinction and absorption cross sections; \(|\alpha(\mathbf{E}_{\mathrm{inc}},\omega,\mathbf{k}_{\mathrm{inc}})|^{2}\) is the excitation efficiency, and its spectral curve (dependence on \(\omega\)) generally exhibits a typical Lorentzian shape, with the central position and linewidth decided by the complex \(\tilde{\omega}\)[10, 11].
A conventional approach to calculate \(\alpha\) and thus the excitation efficiency relies on cumbersome volume integrations involving detailed near-field current distributions of the
Figure 1: (a) A metallic cylinder that supports a linear electric dipolar QNM, the angular radiation intensity and polarization distributions of which are shown in (b) and (c), respectively. Three excitation scenarios (i)-(iii) are also specified in (a), where black and blue arrows denote the propagation and electric field directions of the incident plane waves, respectively. For (i) & (ii) the QNM cannot be excited, while for (iii) the excitation efficiency reaches its maximum.
QNM [10]. It has recently been shown that, for incident plane waves, \(\alpha\) can be directly calculated in the far field according to electromagnetic reciprocity [11]:
\[\alpha(\mathbf{E}_{\mathrm{inc}},\omega,\mathbf{k}_{\mathrm{inc}})\propto \mathbf{\tilde{E}}_{\mathrm{rad}}\cdot\mathbf{E}_{\mathrm{inc}}, \tag{2}\]
where \(\mathbf{\tilde{E}}_{\mathrm{rad}}\) is the electric field vector of the QNM radiation along \(\mathbf{k}_{\mathrm{rad}}=-\mathbf{k}_{\mathrm{inc}}\). To clarify the meaning of the dot product in Eq. (2), we employ the normalized Jones vectors \(\mathbf{J}_{\mathrm{inc,rad}}=(j_{1}^{\,\mathrm{inc,rad}},j_{2}^{\,\mathrm{ inc,rad}})\) to characterize the polarizations of the incident wave and the far-field QNM radiation, _e.g._\(\mathbf{J}=\frac{1}{\sqrt{2}}(1,\pm i)\) denoting right- and left-handed circular polarizations, respectively [2]. Then Eq. (2) can be expressed as [12; 13]:
\[\alpha(\mathbf{E}_{\mathrm{inc}},\omega,\mathbf{k}_{\mathrm{inc}})\propto \mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}\tilde{\mathrm{E} }_{\mathrm{rad}}\mathrm{E}_{\mathrm{inc}}, \tag{3}\]
where \(\dagger\) denotes the combined operations of complex conjugation and transposition; \(\tilde{\mathrm{E}}_{\mathrm{rad}}\) and \(\mathrm{E}_{\mathrm{inc}}\) are scalar field amplitudes. It is worth mentioning that the Jones vector characterizes the polarization only and does not contain any information about the light propagation direction, _e.g._ waves of the same polarization propagating along opposite directions are represented by the same Jones vector.
Since all cross sections \(\mathrm{C}_{\mathrm{sca,ext,abs}}\) are defined to be independent of the incident field strength [1], Eq. (1) can be then explicitly simplified as:
\[\mathrm{C}_{\mathrm{sca,ext,abs}}\propto|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_ {\mathrm{inc}}^{\dagger}|^{2}|\tilde{\mathrm{E}}_{\mathrm{rad}}|^{2}=|\mathbf{ J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|^{2}\mathbf{I}_{\mathrm{rad}}, \tag{4}\]
where \(\mathrm{I}_{\mathrm{rad}}=|\tilde{\mathrm{E}}_{\mathrm{rad}}|^{2}\) is the QNM radiation intensity along \(\mathbf{k}_{\mathrm{rad}}=-\mathbf{k}_{\mathrm{inc}}\). For general reciprocal scatterers, it has been proved, based on the principles of reciprocity and optical theorem, that for an arbitrary pair of opposite incident directions (\(\pm\mathbf{k}_{\mathrm{inc}}\)) the extinction cross sections are identical [14; 15]. In the single-QNM regime, both cross sections of scattering and absorption are also identical for \(\pm\mathbf{k}_{\mathrm{inc}}\)[11]. According to Eq. (4), it requires that:
\[\mathbf{J}_{\mathrm{rad}}(\mathbf{k}_{\mathrm{rad}})=\mathbf{J}_{\mathrm{rad} }(-\mathbf{k}_{\mathrm{rad}}),\ \mathrm{I}_{\mathrm{rad}}(\mathbf{k}_{\mathrm{rad}})=\mathrm{I}_{\mathrm{rad}}( -\mathbf{k}_{\mathrm{rad}}). \tag{5}\]
That is to say, in the single-QNM regime, the mode radiations are of the same polarization and magnitude along any two opposite directions. A more intuitive interpretation of this conclusion is as follows: (i) Equation (4) tells us that for an incident plane wave along \(\mathbf{k}_{\mathrm{inc}}\), the extinction is solely decided by the QNM radiation along the opposite direction \(\mathbf{k}_{\mathrm{rad}}=-\mathbf{k}_{\mathrm{inc}}\); (ii) The optical theorem tells us that the extinction is only related to QNM radiation along \(\mathbf{k}_{\mathrm{rad}}=\mathbf{k}_{\mathrm{inc}}\), as only forward radiations (parallel to the incident direction) can interfere destructively with the incident wave to extinguish part of its energy, accounting for all-angle scatterings and dissipative absorptions [1]; (i) & (ii) are applicable for arbitrary incident polarizations, which results in Eq. (5).
Now we proceed to discuss the momentum-space scattering extremizations based on Eq. (4). The maximization of the scattering requires the simultaneous maximization of both \(\mathrm{I}_{\mathrm{rad}}\) and \(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|\): the former corresponds to \(\mathbf{k}_{\mathrm{inc}}\) being collinear with the directions along which the QNM radiation is strongest [according to Eq. (5) there is at least one pair of such directions, opposite to each other]; the latter requires that the incident wave and the QNM radiation along \(\pm\mathbf{k}_{\mathrm{inc}}\) are of the same polarization (\(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|=1\)), which is natural since the time reversal operation does not change the polarization of the field [12; 13].
We have thus far managed to reveal the recipe for obtaining the maximum scattering in the single-QNM regime: (i) Identify the QNM of complex eigenfrequency \(\tilde{\omega}\); (ii) Calculate its far-field radiations, in terms of both the radiation intensity \(\mathrm{I}_{\mathrm{rad}}\) and the polarization \(\mathbf{J}_{\mathrm{rad}}\) distributions; (iii) Pinpoint the directions (at least a pair of them) where the QNM radiation reaches its maximum and record the corresponding polarization; (iv) Shine light along those directions with matched polarization (the same polarization as that of the radiation: \(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|=1\)). For a fixed incident direction, the scattering is maximized when the incident and radiation polarizations are matched; for a fixed incident polarization, the scattering maximization requires simultaneous consideration of \(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|\) and \(\mathrm{I}_{\mathrm{rad}}\).
Equation (4) also shows that along arbitrary directions, the scattering can be always fully eliminated \(\mathrm{C}_{\mathrm{sca,ext,abs}}=0\): along directions where the QNM radiation is not zero, the elimination is obtained when the incident and radiation polarizations are orthogonal (\(\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}=0\)); along directions of vanishing radiations (\(\mathrm{I}_{\mathrm{rad}}=0\)), there are no scatterings for incident waves of arbitrary polarizations.
The principles discovered and explicitly described above are schematically exemplified in Fig. 1: a metallic cylinder [Fig. 1(a)] supports a linear electric dipolar QNM, the angular radiation intensity and polarization (linear polarization everywhere along the longitude, except for the poles where the radiations are singularly zero [16]) distributions of which are shown respectively in Fig. 1(b) and Fig. 1(c). Three excitation scenarios are also included in Fig. 1: for (i) & (ii) the excitation efficiency is zero while for (iii) it is maximized. This can be easily interpreted from the conventional perspective of near-field interactions between \(\mathbf{E}_{\mathrm{inc}}\) and the oscillating electrons along the cylinder axis. Meanwhile, Eq. (4) provides an alternative far-field interpretation for each scenario: (i) \(\mathrm{I}_{\mathrm{rad}}=0\) and thus \(\alpha=0\); (ii) \(\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}=0\) and thus \(\alpha=0\); (iii) both \(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|\) and \(\mathbf{\tilde{E}}_{\mathrm{rad}}\) are maximal, leading to maximized coupling efficiency and thus also scattering properties. For this elementary example, the superiority of far-field interpretations based on Eq. (4) is not obvious; while for more sophisticated structures where direct analysis of charge-field interactions becomes too complicated, the overwhelming simplicity and strength of our far-field model would become apparent.
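The three scenarios of Fig. 1 can also be checked numerically against Eq. (4). The short NumPy sketch below is our illustration: it assumes the textbook \(\sin^{2}\theta\) intensity pattern of a linear dipole, which is an assumption of ours rather than data from this work:

```python
# Illustrative check of C ∝ |J_rad · J_inc†|² I_rad for the dipole QNM of Fig. 1.
import numpy as np

def overlap(j_rad, j_inc):
    # |J_rad J_inc†|² for normalized Jones vectors.
    return abs(np.vdot(j_inc, j_rad)) ** 2

x_pol = np.array([1.0, 0.0])              # polarization along the dipole axis
y_pol = np.array([0.0, 1.0])              # orthogonal polarization
I_rad = lambda theta: np.sin(theta) ** 2  # linear-dipole radiation intensity

print(overlap(x_pol, x_pol) * I_rad(0.0))        # (i)  along the axis: 0, no excitation
print(overlap(x_pol, y_pol) * I_rad(np.pi / 2))  # (ii) orthogonal polarization: 0
print(overlap(x_pol, x_pol) * I_rad(np.pi / 2))  # (iii) matched at max radiation: 1
```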
## III Results and Discussions
To further verify the principles revealed, we employ the seminal structures of split ring resonators (SRRs), at the same time emphasizing that our theory is generally applicable to other photonic configurations. Throughout this work, the SRRs investigated consist of gold with effective bulk permittivity fitted from data in Ref. [17], and numerical results are obtained using COMSOL Multiphysics. We start with an individual SRR, and all geometric parameters and its orientation with respect to the coordinate system shown in Fig. 2(a) are specified in Fig. 2(b). Two QNMs supported by this SRR are chosen with eigenfrequencies of \(\tilde{\omega_{1}}=(2.326\times 10^{14}-1.3258\times 10^{13}\mathrm{i})\) rad/s (central resonant wavelength \(\lambda_{1}=9.348~{}\mu\)m) and \(\tilde{\omega_{2}}=(8.055\times 10^{14}-4.8841\times 10^{12}\mathrm{i})\) rad/s (\(\lambda_{2}=2.339~{}\mu\)m), and their corresponding angular radiation patterns (in terms of \(\mathrm{I_{rad}}\)) are shown in Figs. 2(c) and (d), respectively. For each QNM, we have selected a semi-circle in the momentum space [denoted by dashed white lines in Figs. 2(c) & (d)] which covers both directions of maximum and minimum radiation intensity throughout the momentum space. For calculations of scattering properties (\(\mathrm{C_{sca}}\) and \(\mathrm{C_{abs}}\)) with incident waves along directions on those momentum-space semi-circles, we fix the angular frequency of the incident wave \(\omega=\mathrm{Re}(\tilde{\omega})\) and make sure the incident polarization is matched to that of the radiation (\(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|=1\)), reducing Eq. (4) to:
\[\mathrm{C_{sca,ext,abs}}\propto\mathrm{I_{rad}}. \tag{6}\]
The normalized \(\mathrm{I_{rad}}\) and \(\mathrm{C_{sca,abs}}\) on our selected momentum-space semi-circles for both QNMs are shown respectively in Figs. 2(e) and (f). The agreement between them is manifest, confirming Eq. (6), despite some discrepancies along those directions of small \(\mathrm{I_{rad}}\). This is because small \(\mathrm{I_{rad}}\) is synonymous with small excitation efficiency [Eq. (3)], so that contributions from other QNMs are no longer fully negligible, rendering our single-QNM approximation less accurate. The smaller \(\mathrm{I_{rad}}\) is, the larger the discrepancy, as is also the case for the results shown in Figs. 3(b) and (c) that will be discussed later. We have also pinpointed the directions of maximum and minimum radiation of the higher-order QNM, marked by a star (\(\theta=44.3^{\circ},~{}\varphi=0\)) and a cross (\(\theta=90^{\circ},~{}\varphi=0\)) in Fig. 2(d), respectively. When the incident direction is along the maximum radiation direction, the near-field distributions for matched (\(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|=1\)) and orthogonal (\(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|=0\)) incident polarizations are shown respectively in Fig. 2(g) (largest excitation efficiency and scattering) and Fig. 2(h) (close to zero excitation efficiency and scattering). Along the minimum radiation direction (\(\mathrm{I_{rad}}\approx 0\)), it is shown in Fig. 2(i) that, similar to Fig. 2(h), scattering is almost absent despite the matched incident polarization. All those results are consistent with Eq. (4), confirming that invisibility is obtainable for arbitrary incident directions with either \(\mathrm{I_{rad}}=0\) or \(|\mathbf{J}_{\mathrm{rad}}\mathbf{J}_{\mathrm{inc}}^{\dagger}|=0\).
We then proceed to check the more sophisticated configuration of coupled SRRs schematically shown in Fig. 3(a). Here each SRR is identical to that shown in Fig. 2(b), and they are parallel-displaced by \(90\) nm and twisted by \(\delta=46.4^{\circ}\) with respect to each other along the \(\mathbf{y}\)-axis. Similarly, two QNMs supported by the SRR pair are selected with eigenfrequencies of \(\tilde{\omega}_{3}=(1.7825\times 10^{14}-1.1628\times 10^{13}\mathrm{i})\) rad/s (central resonant wavelength \(\lambda_{3}=10.568~{}\mu\)m) and \(\tilde{\omega}_{4}=(7.8753\times 10^{14}-4.1295\times 10^{12}\mathrm{i})\) rad/s (\(\lambda_{4}=2.392~{}\mu\)m), and their corresponding radiation patterns are demonstrated respectively in Figs. 3(b) and (c). The normalized quantities (\(\mathrm{I_{rad}}\) and \(\mathrm{C_{sca,abs}}\) with matched incident polarizations) on the selected momentum-space semi-circles [marked by white-dashed lines in Figs. 3(b) and (c)] are shown in Figs. 3(d) and (e), respectively. Similar to what has already been observed in Figs. 2(e) and (f), it is manifest in Figs. 3(d) and (e) that along directions with large \(\mathrm{I_{rad}}\), and thus also large coupling efficiencies, the agreement between both sets of results is excellent; while for small \(\mathrm{I_{rad}}\), and thus also low coupling efficiencies, obvious
Figure 2: (a) The spherical coordinate system with the azimuthal angle \(\varphi\) and polar angle \(\theta\). (b) A gold SRR with all geometric parameters and its orientation within the coordinate system specified. (c) and (d) Angular radiation patterns of the two QNMs (of eigenfrequencies \(\tilde{\omega_{1}}\) and \(\tilde{\omega_{2}}\)) supported by the SRR, where the selected momentum-space semi-circles are marked by white-dashed lines. For the higher-order mode in (d), two points of maximum (star) and minimum (cross) radiation intensity have been marked. (e) and (f) Normalized radiation intensity or scattering (absorption) cross sections on the momentum-space semi-circles. (g) and (h) Near-field distributions (in terms of \(\mathbf{E}_{x}\) and \(\mathbf{E}_{y}\)) for incident waves along the maximum-radiation direction marked in (d) with matched and orthogonal polarizations, respectively. (i) Near-field distributions (in terms of \(\mathbf{E}_{x}\)) for the incident wave along the minimum-radiation direction marked in (d) with the matched polarization.
discrepancies arise, induced by non-negligible contributions from other QNMs.
## IV Conclusions and perspectives
We have studied momentum-space scattering extremizations with respect to varying incident directions from the perspective of QNMs. When an individual QNM is effectively excited, we have revealed that: scattering is maximized for waves incident along the directions of maximum QNM radiation with matched radiation and incident polarizations; for an arbitrary direction of non-zero radiation, the scattering is fully eliminated with orthogonal radiation and incident polarizations; for incident directions along which there is no QNM radiation, the scattering is null for arbitrary incident polarizations. We have effectively revealed the deep connections between QNM radiation patterns and momentum-space scattering evolutions, which are protected by the fundamental principles of electromagnetic reciprocity and the optical theorem, and thus can be naturally extended to other disciplines of wave physics.
In this study, we have confined our investigations to fully-polarized incident waves on the surface of the Poincare sphere, and similar studies can be directly extended to the interior of the Poincare sphere to encompass partially polarized and unpolarized incident waves. For example, for unpolarized incident waves, it is obvious that the scattering is maximized and minimized along directions of maximum and minimum QNM radiations, respectively.
We have shown in previous studies that the polarization distributions of the QNM radiations are globally bounded by the Poincare-Hopf theorem [18], which secures the existence of polarization singularities and a global index sum equal to the Euler characteristic \(2\) of the momentum sphere [19; 16]. In a similar manner, the maximum and minimum QNM radiation points (together with other critical points) are also globally bounded by Morse theory [20], and explorations of those points and their Morse indexes from a photonics perspective can certainly further expand the richness of the vibrant field of topological physics.
The presence of discrepancies in Figs. 2(e-f) and Figs. 3(d-e) has already exposed the limitation of our theoretical model, which is only applicable to the single-QNM excitation regime. When there are symmetry-protected mode degeneracies or spectrally-close QNMs that are simultaneously co-excited, the principles revealed in this paper cannot be directly applied. A more encompassing model that can systematically solve the problem of momentum-space scattering extremizations, for either multi-QNM scatterers or structured non-plane incident waves, is yet to be established.
## Acknowledgement
This research was funded by the National Natural Science Foundation of China (12274462, 11674396, and 11874426), and the Science and Technology Planning Project of Hunan Province (2018JJ1033 and 2017RS3039).
|
2310.05734 | Morita equivalence classes of $2$-blocks with abelian defect groups of
rank $4$ | We classify all $2$-blocks with abelian defect groups of rank $4$ up to
Morita equivalence. The classification holds for blocks over a suitable
discrete valuation ring as well as for those over an algebraically closed
field. An application is that Brou\'{e}'s abelian defect group conjecture holds
for all blocks under consideration here. | Charles W. Eaton, Michael Livesey | 2023-10-09T14:09:45Z | http://arxiv.org/abs/2310.05734v3 | # Morita equivalence classes of 2-blocks with abelian defect groups of rank 4 +
###### Abstract
We classify all 2-blocks with abelian defect groups of rank 4 up to Morita equivalence. The classification holds for blocks over a suitable discrete valuation ring as well as for those over an algebraically closed field. An application is that Broue's abelian defect group conjecture holds for all blocks under consideration here.
Keywords: Morita equivalence; finite groups; block theory
## 1 Introduction
Let \(p\) be a prime and \((K,\mathcal{O},k)\) be a \(p\)-modular system with \(k\) algebraically closed, and let \(P\) be a finite \(p\)-group. It is known by work culminating in [19] that if \(P\) is an abelian 2-group, then there are only finitely many Morita equivalence classes amongst all blocks of \(\mathcal{O}G\) for all finite groups \(G\) with defect groups isomorphic to \(P\), i.e., that Donovan's conjecture holds for such \(P\). This suggests the problem of classifying the Morita equivalence classes of blocks that arise for given abelian 2-groups \(P\). The Morita equivalence classes are already known without the use of the classification when \(P\) is abelian and \(\mathrm{Aut}(P)\) is a 2-group and when \(P\) is a Klein four group. In the former case every block must be nilpotent by [44] since \(P\) controls fusion for the block, and in the latter there are three Morita equivalence classes by [36] (in fact these are the only source algebra equivalence classes by [13]). In [20] the classification of finite simple groups was applied to describe the 2-blocks of quasisimple groups with abelian defect groups. This has been used to classify the Morita equivalence classes of blocks with defect groups isomorphic to \(P\) in the following cases. When \(P\) is elementary abelian of order 16 or 32, there are 16 or 34 Morita equivalence classes respectively by [18] and [3]. When \(P\) is elementary abelian of order 64, there are 81 Morita equivalence classes containing a principal block by [4]. Morita equivalence classes of blocks when
\(P\) is an abelian 2-group of rank at most three are determined in [22] and [51]. Morita equivalence classes have been determined for a class of minimal nonabelian 2-groups in [21]. Further classifications are obtained when we place restrictions on the inertial quotient of the blocks we consider, as in [42] and [5], where it is assumed that the inertial quotient contains an element of maximal order, or is a subgroup of a cyclic group of maximal order.
Here we determine the Morita equivalence classes of blocks when \(p=2\) and \(P\) is abelian of rank at most four. There are two main problems to overcome in the reduction to quasisimple groups in order to apply [20]: the cases of a normal subgroup of odd index and of a normal subgroup of index two. In the former case we use Picard groups and crossed products to deduce the possible Morita equivalence classes for the block of the overgroup. In the latter we use a combination of a method developed in [51] and one of our own.
Let \(D\cong C_{2^{n_{1}}}\times C_{2^{n_{2}}}\times C_{2^{n_{3}}}\times C_{2^{n_{4}}}\), where \(n_{1},\ldots,n_{4}\geq 0\). The conjugacy classes of odd order subgroups \(E\) of \(\operatorname{Aut}(D)\) have representatives as given in Table 1, and correspond to the possible inertial quotients for blocks with defect group \(D\) (where by inertial quotient, defined later, we consider also the action on \(D\)). By [27, 5.2.3] we may write \(D=[D,E]\times C_{D}(E)\). To simplify notation we assume that the labeling is chosen so that \([D,E]\cong C_{2^{n_{1}}}\times\cdots\times C_{2^{n_{i}}}\) and \(C_{D}(E)\cong C_{2^{n_{i+1}}}\times\cdots\times C_{2^{n_{4}}}\) for some \(i\). If \(B\) is a block with defect group \(D\) and inertial quotient \(E\), then we say that \(B\) is of type \(E\), where we use the notation of Table 1 to distinguish between isomorphic but non-conjugate subgroups. Note that the case \((C_{3})_{2}\) represents the simultaneous action of \(C_{3}\) on \(C_{2^{n_{1}}}\times C_{2^{n_{2}}}\) and on \(C_{2^{n_{3}}}\times C_{2^{n_{4}}}\), which, for \(n_{1}=n_{2}=n_{3}=n_{4}\), also represents a subgroup of \(C_{15}\).
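As an orienting example (ours, not taken from the statement above), for a block of type \((C_{3})_{1}\) with \(n_{1}=n_{2}=2\) and \(n_{3}=n_{4}=1\) the factorisation reads

\[D\cong C_{4}\times C_{4}\times C_{2}\times C_{2},\qquad[D,E]\cong C_{4}\times C_{4},\qquad C_{D}(E)\cong C_{2}\times C_{2},\]

with \(E\cong C_{3}\) acting faithfully and fixed-point-freely on the first two factors and trivially on the last two, so that \(C_{D}(E)\) has rank \(2\), as required by Table 1.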
We now give the classification. Throughout this paper, \(G_{n}\) will denote the non-abelian group \((C_{2^{n}}\times C_{2^{n}})\rtimes C_{3}\).
Following [46] a block is called _inertial_ if it is basic Morita equivalent to a block with normal defect group.
**Theorem 1.1**.: _Let \(G\) be a finite group and \(B\) be a block of \(\mathcal{O}G\) with defect group \(D\cong C_{2^{n_{1}}}\times C_{2^{n_{2}}}\times C_{2^{n_{3}}}\times C_{2^{n_{4}}}\) as above. Let \(E\) be an inertial quotient of \(B\), with notation as in Table 1._
* _(a) If \(n_{1}=n_{2}\), \(n_{3}=n_{4}\) and \(E\) acts as \(1\), \((C_{3})_{2}\) or \(C_{5}\), then \(B\) is inertial._
\begin{table}
\begin{tabular}{|l|l|l|} \hline Inertial quotient & Notes & Restrictions on \(D\) \\ \hline
1 & & None \\ \((C_{3})_{1}\) & \(C_{D}(E)\) has rank 2 & \(n_{1}=n_{2}\) \\ \((C_{3})_{2}\) & \(C_{D}(E)=1\) & \(n_{1}=n_{2}\), \(n_{3}=n_{4}\) \\ \(C_{3}\times C_{3}\) & \(C_{D}(E)=1\) & \(n_{1}=n_{2}\), \(n_{3}=n_{4}\) \\ \(C_{5}\) & \(C_{D}(E)=1\) & \(n_{1}=n_{2}=n_{3}=n_{4}\) \\ \(C_{15}\) & \(C_{D}(E)=1\) & \(n_{1}=n_{2}=n_{3}=n_{4}\) \\ \(C_{7}\) & \(C_{D}(E)\) cyclic & \(n_{1}=n_{2}=n_{3}\) \\ \(C_{7}\rtimes C_{3}\) & \(C_{D}(E)\) cyclic & \(n_{1}=n_{2}=n_{3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Possible inertial quotients
* _(b)(i) If \(E\) acts as \((C_{3})_{1}\), then \(B\) is Morita equivalent to one of \(\mathcal{O}(D\rtimes E)\) or \(B_{0}(\mathcal{O}(A_{5}\times C_{2^{n_{3}}}\times C_{2^{n_{4}}}))\)._
* _(b)(ii) If \(E\) acts as \(C_{7}\), then \(B\) is Morita equivalent to \(\mathcal{O}(D\rtimes C_{7})\) or \(B_{0}(\mathcal{O}(SL_{2}(8)\times C_{2^{n_{4}}}))\)._
* _(b)(iii) If \(E\) acts as \(C_{7}\rtimes C_{3}\), then \(B\) is Morita equivalent to \(\mathcal{O}(D\rtimes(C_{7}\rtimes C_{3}))\), \(B_{0}(\mathcal{O}(\operatorname{Aut}(SL_{2}(8))\times C_{2^{n_{4}}}))\) or \(B_{0}(\mathcal{O}(J_{1}\times C_{2^{n_{4}}}))\)._
* _(b)(iv) If \(E\) acts as \(C_{15}\), then either \(B\) is inertial or \(B\) is basic Morita equivalent to \(B_{0}(\mathcal{O}SL_{2}(16))\)._
* _(b)(v) Suppose \(E\) acts as \(C_{3}\times C_{3}\), with \(n_{1}=n_{2}\) and \(n_{3}=n_{4}\). Then \(B\) is Morita equivalent to one of \(B_{0}(\mathcal{O}(G_{n_{1}}\times G_{n_{3}}))\), \(B_{0}(\mathcal{O}(A_{5}\times G_{n_{3}}))\), \(B_{0}(\mathcal{O}(A_{5}\times A_{5}))\) or a nonprincipal block of \(D\rtimes 3_{+}^{1+2}\), where \(Z(3_{+}^{1+2})\) acts trivially on \(D\)._
**Remark 1.2**.: (i) Nonprincipal blocks of \(D\rtimes 3_{+}^{1+2}\) and \(D\rtimes 3_{-}^{1+2}\) as above will be shown to be Morita equivalent in Proposition 5.11.
(ii) Implicit in Theorem 1.1 is that for these defect groups Morita equivalent blocks have the same fusion (i.e., the same inertial quotient).
(iii) Cases (a) and (b)(iv) of Theorem 1.1 have been proved in [42] and [5].
A consequence is the following Corollary that Broue's abelian defect group conjecture holds for the blocks under consideration:
**Corollary 1.3**.: _Let \(G\) be a finite group and \(B\) be a \(2\)-block of \(\mathcal{O}G\) with abelian defect group \(D\) of rank at most four. Let \(b\) be the Brauer correspondent of \(B\) in \(\mathcal{O}N_{G}(D)\). Then \(B\) is derived equivalent to \(b\)._
**Remark 1.4**.: Broue's abelian defect group conjecture is often stated as requiring further a splendid Rickard equivalence. Since Theorem 1.1 only provides a Morita equivalence with no additional information on the bimodule affording the equivalence, we make no claim to the existence of a splendid equivalence in our result.
The structure of the paper is as follows. In Section 2 we briefly cover the necessary notation and general results. This includes: results on the covering of blocks of normal subgroups and related Morita equivalences; subpairs and inertial quotients; a review of the different (stronger) versions of Morita equivalence that we use, and their relationship with fusion; a summary of the results of [20] that we use; and Picard groups. We prove a result on extending Morita equivalences (in certain cases) from normal subgroups of index \(2\) in Section 4. This requires knowledge of some groups of perfect self-isometries, which is the content of Section 3. In Section 5 we give results on extending Morita equivalences from normal subgroups, which includes extensions using crossed products where the index is odd, combined with techniques for index \(2\) from the previous section and the result developed in [51]. In Section 6 we present the proof of the main result. Finally, in Section 7 we prove Corollary 1.3.
## 2 Notation and background
### Morita equivalence
See [37, 2.8] for an introduction to Morita theory. It is not known whether or not Morita equivalence of blocks of finite groups defined over \(\mathcal{O}\) preserves the isomorphism
type of the defect group (Morita equivalences defined over \(k\) are known not to necessarily preserve this by [26]). However, by [9] Morita equivalence does preserve the isomorphism class of abelian defect groups. It is also not known whether the inertial quotient (and its action) are preserved. We will have to take some care in the proof of Theorem 1.1 to check that this happens in our situation.
A Morita equivalence is _basic_ if it is induced by an endopermutation source bimodule, and a block is _inertial_ if there is a basic Morita equivalence with the Brauer correspondent block of the normalizer of a defect group (see [45] and [46]). A basic Morita equivalence preserves the isomorphism type of the defect group and the fusion.
We will also refer to source algebra equivalences (see [38, 6.4]). These occur in the section on Picard groups. Also certain Morita equivalences that we use are stated as source algebra equivalences (for example Lemma 2.2), but we do not use properties beyond that they are basic Morita equivalences.
Another point to mention here is the Bonnafe-Rouquier correspondence as in [11]. Whilst correspondent blocks are only known to be Morita equivalent (i.e., not basic Morita equivalent), they are splendid Rickard equivalent, which implies that they have the same fusion.
### Blocks and normal subgroups
We collect some background on blocks and normal subgroups that will be used frequently, and usually without further reference. A reference for this material is [38, Section 6.8].
Let \(G\) be a finite group and \(B\) be a block of \(\mathcal{O}G\) with defect group \(D\). We use \(\operatorname{prj}(B)\) to denote the set of characters of projective indecomposable \(B\)-modules. Let \(N\lhd G\) and let \(b\) be a block of \(\mathcal{O}N\) covered by \(B\). We may choose a \(G\)-conjugate of \(D\) such that \(D\cap N\) is a defect group for \(b\). Write \(\operatorname{Stab}_{G}(b)\) for the stabiliser of \(b\) under conjugation in \(G\). There is a block of \(\mathcal{O}G\) covering \(b\) with a defect group \(P\) such that \(NP/N\) is a Sylow \(p\)-subgroup of \(\operatorname{Stab}_{G}(b)/N\).
If \([G:N]\) is a power of \(p\), then \(B\) is the unique block of \(\mathcal{O}G\) covering \(b\), and it follows that \(b\) is \(G\)-stable if and only if \(G=ND\). In this case \(B\) and \(b\) share a block idempotent. We have the following crucial fact in the case that \(D\) is abelian:
**Proposition 2.1**.: _Let \(G\) be a finite group and \(B\) a block of \(\mathcal{O}G\) with abelian defect group \(D\). Suppose that \(N\lhd G\) with \(G=ND\) and that \(b\) is a \(G\)-stable block of \(\mathcal{O}N\) covered by \(B\). Then \(D\) acts as inner automorphisms on \(b\). Further,_
_(i) every irreducible character of \(b\) is \(G\)-stable and extends to \(p\) distinct irreducible characters of \(B\),_
_(ii) induction gives a bijection between the projective indecomposable modules for \(b\) and those for \(B\)._
Proof.: The assertion that \(D\) acts as inner automorphisms follows from [37, Corollary 6.16.3] or [24, Proposition 3.1]. The rest is [22, Proposition 2.6(i), (ii)].
Note that the version of Proposition 2.1 over \(k\) is also found within [30].
Recall that a block \(B\) is _quasiprimitive_ if every block of every normal subgroup covered by \(B\) is \(G\)-stable. In particular \(B\) covers a unique block for each normal subgroup.
We will make frequent use of the following, especially in the case where the quotient group is a subgroup of the outer automorphism group of a quasisimple group.
**Lemma 2.2** (Lemma 2.4 of [3]).: _Let \(G\) be a finite group and let \(N\lhd G\) with \(G/N\) solvable. Let \(b\) be a \(G\)-stable block of \(\mathcal{O}N\) and let \(B\) be a quasiprimitive block of \(\mathcal{O}G\) covering \(b\) with defect group \(D\). Then \(DN/N\) is a Sylow \(p\)-subgroup of \(G/N\)._
The normal subgroup \(G[b]\) of \(G\) is defined to be the group of elements of \(G\) acting as inner automorphisms on \(b\otimes_{\mathcal{O}}k\) (see [33]).
**Proposition 2.3**.: _Let \(G\) be a finite group and \(B\) a block of \(\mathcal{O}G\) with defect group \(D\). Let \(N\lhd G\) with \(D\leq N\) and suppose that \(B\) covers a \(G\)-stable block \(b\) of \(\mathcal{O}N\). Let \(B^{\prime}\) be a block of \(\mathcal{O}G[b]\) covered by \(B\). Then_
_(i) \(b\) is source algebra equivalent to \(B^{\prime}\), and in particular has isomorphic inertial quotient;_
_(ii) \(B\) is the unique block of \(\mathcal{O}G\) covering \(B^{\prime}\)._
Proof.: Part (i) is [28, 2.2], noting that a source algebra equivalence over \(k\) implies one over \(\mathcal{O}\) by [44, 7.8]. Part (ii) follows from [15, 3.5].
### Subpairs, fusion and inertial quotients
Let \(G\) be a finite group and \(B\) be a block of \(\mathcal{O}G\) with defect group \(D\). For convenience in stating definitions we assume that \(D\) is abelian. A \(B\)-subpair is a pair \((Q,B_{Q})\) where \(Q\) is a \(p\)-subgroup of \(G\) and \(B_{Q}\) is a block of \(C_{G}(Q)\) with Brauer correspondent \(B\). Such a block \(B_{Q}\) exists only when \(Q\) is \(G\)-conjugate to a subgroup of \(D\). For a fixed \(B\)-subpair \((D,B_{D})\), for each \(Q\leq D\) there is a unique \(B\)-subpair \((Q,B_{Q})\) such that \(B_{Q}\) is the Brauer correspondent of \(B_{D}\).
The \(B\)-subpairs define a fusion system \(\mathcal{F}=\mathcal{F}_{B}(D,B_{D})\), often called the _Frobenius category_ of \(B\) (see for example [7] or [37]). The _inertial quotient_ of \(B\) is \(E=N_{G}(D,B_{D})/DC_{G}(D)\), together with the action of \(E\) on \(D\). Since \(D\) is abelian, \(\mathcal{F}\) is determined by \(E\). Consequently, every block with abelian defect groups and trivial inertial quotient is nilpotent (we will use this fact throughout this paper without further reference).
The following is well-known.
**Proposition 2.4**.: _Let \(B\) be a block of \(\mathcal{O}G\) for a finite group \(G\) with abelian defect group \(D\). Let \((D,B_{D})\) be a \(B\)-subpair. Then \(D=[D,N_{G}(D,B_{D})]\times C_{D}(N_{G}(D,B_{D}))\)._
_Suppose there is \(N\lhd G\) such that \(G=DN\), then \([D,N_{G}(D,B_{D})]\leq N\)._
Proof.: The factorisation of \(D\) follows from [27, Theorem 5.2.3]. For the second part, note that \([D,N_{G}(D,B_{D})]=[D,DN_{N}(D,B_{D})]\leq N\).
Most of the following is Lemma 5.3.5 of [41].
**Proposition 2.5**.: _Let \(B\) be a block of \(\mathcal{O}G\) for a finite group \(G\) with abelian defect group \(D\) and let \((D,B_{D})\) be a \(B\)-subpair. Let \(N\lhd G\) and suppose that \(B\) covers a \(G\)-stable block \(b\) of \(\mathcal{O}N\). Write \(Q=N\cap D\), a defect group for \(b\). Let \(b_{Q}\) be a block of \(\mathcal{O}C_{N}(Q)\) covered by \(B_{Q}\). Then \(b_{Q}\) is \(C_{G}(Q)\)-stable and \((Q,b_{Q})\) is a \(b\)-subpair. The blocks \(B\) and \(b\) have isomorphic inertial quotients and \([N_{G}(D,B_{D}),D]=[N_{N}(Q,b_{Q}),Q]\)._
Proof.: That \((Q,b_{Q})\) is a \(b\)-subpair forms part of the proof of the main theorem of [30]. We note that due to the correspondence between \(\mathcal{O}\)-blocks and \(k\)-blocks, the result may be proved for \(k\)-blocks, which is the setting for [30]. Note also that this part of the proof of the main result of [30] does not require that \(G\) is a split extension of \(N\).
We have \(C_{G}(Q)=DC_{N}(Q)\) and \(N_{G}(Q)=DN_{N}(Q)\). Hence \(N_{G}(Q,b_{Q})=DN_{N}(Q,b_{Q})\). Also, since \(b_{Q}\) is \(C_{G}(Q)\)-stable, it is the unique block of \(C_{N}(Q)\) covered by \(B_{Q}\) (and \(B_{Q}\) is the unique block of \(C_{G}(Q)\) covering \(b_{Q}\) since \(C_{N}(Q)\) has index a power of \(p\) in \(C_{G}(Q)\)). Hence \(N_{G}(Q,B_{Q})=N_{G}(Q,b_{Q})\). By definition we have \(N_{G}(D,B_{D})\leq N_{G}(Q,B_{Q})\).
Since \(N_{G}(D,B_{D})\) controls fusion in \(D\), by [1, Proposition 4.24] we have \(N_{G}(Q,B_{Q})=N_{G}(D,B_{D})C_{G}(Q)\). It follows that
\[N_{N}(Q,b_{Q})/C_{N}(Q)\cong N_{G}(D,B_{D})/(C_{G}(Q)\cap N_{G}(D,B_{D})).\]
Noting that \(D=QC_{D}(N_{G}(D,B_{D}))\), we have \(C_{G}(D)=C_{N_{G}(D,B_{D})}(D)=C_{N_{G}(D,B_{D})}(Q)\), so \(B\) and \(b\) have isomorphic inertial quotients.
Further
\[[N_{G}(D,B_{D}),D]=[N_{G}(D,B_{D}),QC_{D}(N_{G}(D,B_{D}))]=[N_{G}(D,B_{D}),Q]\]
\[=[N_{G}(D,B_{D})C_{G}(Q),Q]=[N_{G}(Q,B_{Q}),Q]=[N_{G}(Q,b_{Q}),Q]=[N_{N}(Q,b_{ Q}),Q]\]
as required.
The next result is essentially extracted from the proof of [51, Lemma 6.3], and allows us to compare inertial quotients of blocks with those of blocks of normal subgroups of \(p^{\prime}\) index. We include the proof here for convenience.
**Lemma 2.6**.: _Let \(B\) be a block of \(\mathcal{O}G\) for a finite group \(G\), and let \(b\) be a \(G\)-stable block of \(\mathcal{O}N\) for \(N\lhd G\) with \([G:N]=r\) a prime different to \(p\). Let \(E_{B}\), \(E_{b}\) be the inertial quotient of \(B\), \(b\) respectively. Let \(D\) be an abelian defect group for \(B\) (and for \(b\)). Then either \(E_{b}\) is isomorphic to a subgroup of \(E_{B}\) or \(E_{B}\) is isomorphic to a subgroup of \(E_{b}\)._
Proof.: We may take a \(B\)-subpair \((D,B_{D})\) and a \(b\)-subpair \((D,b_{D})\) with \(B_{D}\) covering \(b_{D}\). If \(C_{G}(D)\leq N\), then \(B_{D}=b_{D}\) and \(N_{N}(D,b_{D})\leq N_{G}(D,B_{D})\), so the result is immediate in this case. Hence suppose \(C_{G}(D)\) is not contained in \(N\), i.e., \(G=NC_{G}(D)\).
If \(C_{G}(D)\leq N_{G}(D,b_{D})\), then \(N_{G}(D,B_{D})\leq N_{G}(D,b_{D})=N_{N}(D,b_{D})C_{G}(D)\). Hence \(N_{G}(D,B_{D})/C_{G}(D)\leq N_{G}(D,b_{D})/C_{G}(D)\cong N_{N}(D,b_{D})/C_{N}(D)\).
Suppose that \(C_{G}(D)\) does not stabilize \(b_{D}\). Since \([C_{G}(D):C_{N}(D)]=r\) is a prime and \(C_{N}(D)\leq C_{G}(D)\cap N_{G}(D,b_{D})<C_{G}(D)\), we have \(C_{N}(D)=C_{G}(D)\cap N_{G}(D,b_{D})\). Let \(T\) be a transversal of \(C_{N}(D)\) in \(C_{G}(D)\). Then \(\{b_{D}^{t}:t\in T\}\) is the set of \(r\) distinct conjugate blocks of \(C_{N}(D)\) covered by \(B_{D}\), and \(B_{D}\) is the unique block of \(C_{G}(D)\) covering \(b_{D}\). Hence \(N_{G}(D,B_{D})=N_{G}(D,b_{D})C_{G}(D)\). Then \(N_{G}(D,B_{D})/C_{G}(D)\cong N_{G}(D,b_{D})/C_{N}(D)\) and the result follows.
### Blocks of quasisimple groups
We extract the results of [20] necessary for this paper. Recall that a block \(B\) of a group \(G\) is nilpotent covered if there is \(H\) with \(G\lhd H\) and a nilpotent block of \(H\) covering \(B\). Properties of such blocks are considered in [46].
**Proposition 2.7** ([20]).: _Let \(p=2\), and let \(B\) be a block of \(\mathcal{O}G\) for a quasisimple group \(G\) with abelian defect group \(D\) of rank at most \(4\). Then one or more of the following occurs:_
_(i) \(G\cong A_{5}\), \(SL_{2}(8)\), \(SL_{2}(16)\), \(J_{1}\) or \({}^{2}G_{2}(q)\), where \(q=3^{2m+1}\) for some \(m\in\mathbb{N}\), and \(B\) is the principal block;_
_(ii) \(G\cong Co_{3}\) and \(B\) is the unique non-principal \(2\)-block of defect \(3\);_
_(iii) \(B\) has inertial quotient of type \((C_{3})_{1}\) and is Morita equivalent to a block \(C\) of \(\mathcal{O}L\) where \(L=L_{0}\times L_{1}\leq G\) such that \(L_{0}\) is abelian and the block of \(\mathcal{O}L_{1}\) covered by \(C\) has Klein four defect groups;_
_(iv) \(B\) is nilpotent covered._
Proof.: This follows from Theorem 6.1 of [20] and its proof. The only point to address is the inertial quotient in case (iii). Case (iii) arises in two ways. The first is where \(D\cong C_{2}\times C_{2}\), in which case either \(B\) is nilpotent (and so is covered by case (iv)), or has inertial quotient \(C_{3}\). The second way this arises is as in case (v) of [20, Proposition 5.3]. Here the given Morita equivalence is given by the Bonnafe-Rouquier correspondence as in [11]. Now the corresponding blocks in [11] are equivalent by a splendid Rickard equivalence and so have the same fusion. Hence \(B\) has the same inertial quotient as \(C\), which is of type \((C_{3})_{1}\).
**Corollary 2.8**.: _Every \(2\)-block of a quasisimple group that is Morita equivalent to \(G_{n}\), for \(n\in\mathbb{N}\), is inertial._
Proof.: For \(n\geq 2\) we see from Proposition 2.7 that blocks of quasisimple groups with defect groups \((C_{2^{n}})^{2}\) are nilpotent covered, and so inertial. For \(n=1\), by [13] every block that is Morita equivalent to \(\mathcal{O}A_{4}\) is source algebra equivalent to \(\mathcal{O}A_{4}\), and so inertial.
### Picard groups
Let \(G\) be a finite group and \(B\) be a block of \(\mathcal{O}G\) with defect group \(D\). The Picard group \(\mathrm{Pic}(B)\) of \(B\) has as elements the isomorphism classes of \(B\)-\(B\)-bimodules inducing \(\mathcal{O}\)-linear Morita auto-equivalences of \(B\). For \(B\)-\(B\)-bimodules \(M\) and \(N\), the group multiplication is given by \(M\otimes_{B}N\). For background and definitions we follow [10].
We will use knowledge of \(\mathrm{Pic}(B)\) to refine Kulshammer's analysis in [34] of the situation of a normal subgroup containing the defect groups of a block. This involves the study of crossed products of a basic algebra with a \(p^{\prime}\)-group, which in turn uses the outer automorphism group of the basic algebra, a group that embeds into the Picard group. This will be essential in reduction steps in our classification of Morita equivalence classes of blocks.
Write \(\mathcal{T}(B)\) for the subgroup of \(\mathrm{Pic}(B)\) consisting of bimodules with trivial source and \(\mathcal{L}(B)\) for the subgroup consisting of linear source modules.
\(\mathcal{T}(B)\) and \(\mathcal{L}(B)\) are described in [10], and we summarise the relevant notation and results here. Let \(\mathcal{F}=\mathcal{F}_{B}(D,B_{D})\) be the fusion system for \(B\) on \(D\), defined using a \(B\)-subpair \((D,B_{D})\), and let \(E=N_{G}(D,B_{D})/DC_{G}(D)\) be the inertial quotient. Write \(\mathrm{Aut}(D,\mathcal{F})\) for the subgroup of \(\mathrm{Aut}(D)\) of automorphisms stabilizing \(\mathcal{F}\) and \(\mathrm{Out}(D,\mathcal{F})=\mathrm{Aut}(D,\mathcal{F})/\,\mathrm{Aut}_{ \mathcal{F}}(D)\). By [25], if \(D\) is abelian, then \(\mathrm{Out}(D,\mathcal{F})\cong N_{\mathrm{Aut}(D)}(E)/E\cong\mathrm{Out}(D \rtimes E)\).
Let \(A\) be a source algebra for \(B\). Then \(A\) is a \(D\)-algebra, so we have an embedding of \(D\) into \(A\) and we may consider the fixed points \(A^{D}\) under the action of \(D\). Write \(\operatorname{Aut}_{D}(A)\) for the group of \(\mathcal{O}\)-algebra automorphisms of \(A\) fixing each element of the image of \(D\) in \(A\), and \(\operatorname{Out}_{D}(A)\) for the quotient of \(\operatorname{Aut}_{D}(A)\) by the subgroup of automorphisms given by conjugation by elements of \((A^{D})^{\times}\). By [44, 14.9]\(\operatorname{Out}_{D}(A)\) is isomorphic to a subgroup of \(\operatorname{Hom}(E,k^{\times})\).
By [10, Theorem 1.1] we have exact sequences
\[\begin{array}{ccccccc}1&\to&\operatorname{Out}_{D}(A)&\to&\mathcal{T}(B)& \to&\operatorname{Out}(D,\mathcal{F}),\\ 1&\to&\operatorname{Out}_{D}(A)&\to&\mathcal{L}(B)&\to&\operatorname{Hom}(D/ \mathfrak{foc}(D),\mathcal{O}^{\times})\rtimes\operatorname{Out}(D,\mathcal{F}),\end{array} \tag{1}\]
where \(\mathfrak{foc}(D)\) is the focal subgroup of \(D\) with respect to \(\mathcal{F}\), generated by the elements \(\varphi(x)x^{-1}\) for \(x\in D\) and \(\varphi\in\operatorname{Hom}_{\mathcal{F}}(\langle x\rangle,D)\). When \(D\) is abelian we have \(\mathfrak{foc}(D)=[N_{G}(D,B_{D}),D]\), so that \(\operatorname{Hom}(D/\mathfrak{foc}(D),\mathcal{O}^{\times})\cong C_{D}(N_{G}(D,B_{D}))\) (see Proposition 2.4).
We record for later use that by [47], if \(P\) is a \(p\)-group, then
\[\operatorname{Pic}(\mathcal{O}P)=\mathcal{L}(\mathcal{O}P)\cong\operatorname{ Aut}(\mathcal{O}P)\cong\operatorname{Hom}(P,\mathcal{O}^{\times})\rtimes \operatorname{Aut}(P).\]
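For instance (a worked example we add for orientation), taking \(P\) to be a Klein four group gives

\[\operatorname{Pic}(\mathcal{O}P)\cong\operatorname{Hom}(P,\mathcal{O}^{\times})\rtimes\operatorname{Aut}(P)\cong(C_{2}\times C_{2})\rtimes S_{3}\cong S_{4},\]

since \(\mathcal{O}\) is a domain of characteristic \(0\), so \(\mathcal{O}^{\times}\) contains exactly two square roots of unity and \(\operatorname{Hom}(P,\mathcal{O}^{\times})\cong P\), while \(\operatorname{Aut}(C_{2}\times C_{2})\cong S_{3}\).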
We gather together here the results regarding Picard groups that we will use later on.
**Proposition 2.9**.: _Let \(P\) and \(Q\) be abelian \(2\)-groups and \(n,n_{1},n_{2}\in\mathbb{N}\)._
1. \(\operatorname{Pic}(\mathcal{O}(G_{n}\times P))\cong S_{3}\times(P\rtimes \operatorname{Aut}(P))\)_. The subgroup of_ \(\operatorname{Pic}(\mathcal{O}(G_{n}\times P))\) _given by those self-equivalences fixing the projective indecomposable modules is isomorphic to_ \(P\rtimes\operatorname{Aut}(P)\)_._
2. \(\operatorname{Pic}(B_{0}(\mathcal{O}(A_{5}\times P)))\cong C_{2}\times(P \rtimes\operatorname{Aut}(P))\)_._
3. \(\operatorname{Pic}(B_{0}(\mathcal{O}(G_{n}\times A_{5})))\cong S_{3}\times C_ {2}\)_._
4. \(\operatorname{Pic}(\mathcal{O}(G_{n_{1}}\times G_{n_{2}}))=\mathcal{T}( \mathcal{O}(G_{n_{1}}\times G_{n_{2}}))\cong\)__ 1. \(S_{3}\wr S_{2}\) _if_ \(n_{1}=n_{2}\)_,_ 2. \(S_{3}\times S_{3}\) _if_ \(n_{1}\neq n_{2}\)_._
5. _If_ \(Q\cong(C_{2^{n}})^{3}\)_, then_ \(\operatorname{Pic}(\mathcal{O}((Q\rtimes C_{7})\times P))\cong(C_{7}\rtimes C _{3})\times\operatorname{Aut}(P)\)_._
Proof.: Throughout this proof, we denote by \(D\) a defect group of the block in question.
(i) The Picard group is described in [25, Theorem 1.1]. The \(S_{3}\) factor consists of (bimodules inducing) Morita equivalences corresponding to elements of \(C_{3}\rtimes\operatorname{Out}(G_{n})\), where the \(C_{3}\) is generated by multiplying by a non-trivial linear character of \(G_{n}\). The \((P\rtimes\operatorname{Aut}(P))\) factor is generated by equivalences given by multiplication by a (linear) character and by automorphisms of \(P\). Since the projective indecomposable modules correspond in this case to the irreducible characters with \(D\) in their kernel, the remainder follows.
(ii)-(iv) are from [25, Theorem 1.1].
(v) Suppose \(G=(Q\rtimes C_{7})\times P\) where \(Q\cong(C_{2^{n}})^{3}\). Then \(G=D\rtimes E\), where \(E\) is the inertial quotient (note that \(\mathcal{O}G\) has a unique block). By [39] \(\operatorname{Pic}(\mathcal{O}G)=\mathcal{L}(\mathcal{O}G)\).
By [25, Lemma 2.1] \({\rm Out}(D,\mathcal{F})\cong N_{{\rm Aut}(D)}(E)\big{/}E\cong{\rm Out}(G)\cong C_{3}\times(P\rtimes{\rm Aut}(P))\). Also it follows from [25, Lemma 2.2] that \({\rm Out}_{D}(A)\cong E\), where \(A\) is a source algebra for the unique block of \(\mathcal{O}G\). We have \({\rm Hom}(D/\mathfrak{foc}(D),\mathcal{O}^{\times})\cong C_{D}(N_{G}(D,B_{D}))=P\). The result follows from the description of \(\mathcal{L}(B)\) in (1) above, as it is clear that the elements of \({\rm Out}_{D}(A)\) cannot commute with the elements of \({\rm Out}(D,\mathcal{F})\) obtained as automorphisms of \(Q\rtimes C_{7}\).
**Remark 2.10**.: In case (v), whilst a Sylow \(3\)-subgroup of the Picard group for the block in question occurs as a conjugate of \({\rm Out}(D,\mathcal{F})\), this is not known to be the case for every block Morita equivalent to it. In other words, it is theoretically possible for the Picard group of a Morita equivalent block \(C\) to have a subgroup \(C_{7}\rtimes C_{3}\), but for \(\mathcal{T}(C)\not\cong C_{7}\rtimes C_{3}\). We will have to beware of this inconvenience in Section 5.
## 3 Preliminaries on perfect isometries
We require a method for comparing the principal blocks of \(\mathcal{O}(A_{4}\times P)\) and \(\mathcal{O}(A_{5}\times P)\) with those of \(\mathcal{O}(A_{4}\times Q)\) and \(\mathcal{O}(A_{5}\times Q)\) respectively when \(Q\) is a subgroup of an abelian \(2\)-group \(P\). We will do this in Section 4, but first require an analysis of their perfect self-isometries, which is the content of this section. For a block \(B\), write \({\rm Perf}(B)\) for the group of perfect self-isometries of \(B\), under composition. The results of this section are an extension of those of part of Section 2 of [22].
Note that every perfect isometry \(I\) between blocks \(B_{1}\) and \(B_{2}\) gives rise to a bijection of character idempotents and so to a \(K\)-algebra isomorphism between \(Z(KB_{1})\) and \(Z(KB_{2})\), and that by [12] this induces an \(\mathcal{O}\)-algebra isomorphism \(\phi_{I}:Z(B_{1})\to Z(B_{2})\).
By Proposition 2.1, for a block with abelian defect groups, every irreducible character in a block of a normal subgroup of index \(p\) covered by \(B\) is \(G\)-stable and extends to \(G\). The following Proposition tells us that these extensions behave well with respect to perfect isometries.
**Proposition 3.1**.: _For \(i=1,2\) let \(G_{i}\) be a finite group and \(N_{i}\lhd G_{i}\) with index \(p\). Let \(B_{i}\) be a block of \(\mathcal{O}G_{i}\) with abelian defect group \(D\) and let \(b_{i}\) be a \(G_{i}\)-stable block of \(\mathcal{O}N_{i}\) covered by \(B_{i}\). For each \(\chi\in{\rm Irr}(b_{1})\) write \({\rm Irr}(B_{1},\chi)=\{\chi_{1},\ldots,\chi_{p}\}\)._
_Suppose \(I:\mathbb{Z}\,{\rm Irr}(B_{1})\to\mathbb{Z}\,{\rm Irr}(B_{2})\) is a perfect isometry such that for each \(\chi\in{\rm Irr}(b_{1})\) there is \(\psi\in{\rm Irr}(b_{2})\) and \(\epsilon_{\chi}\in\{\pm 1\}\) such that \(I(\chi_{i})=\epsilon_{\chi}\psi_{i}\) for \(i=1,\ldots,p\) where \({\rm Irr}(B_{2},\psi)=\{\psi_{1},\ldots,\psi_{p}\}\). Then the isometry \(I_{N_{1},N_{2}}:\mathbb{Z}\,{\rm Irr}(b_{1})\to\mathbb{Z}\,{\rm Irr}(b_{2})\) defined by \(I_{N_{1},N_{2}}(\chi):=\epsilon_{\chi}\psi\) is perfect and further \(\phi_{I_{N_{1},N_{2}}}=\phi_{I}|_{Z(b_{1})}\)._
**Remark 3.2**.: We may restrict \(\phi_{I}\) to \(Z(b_{1})\) since, by Proposition 2.1, \(D\) acts as inner automorphisms on \(b_{i}\) and so \(Z(b_{i})\subseteq Z(B_{i})\).
Proof.: This is Proposition 2.6 and Lemma 2.7 of [22].
**Lemma 3.3**.: _Let \(P\) be an abelian finite \(p\)-group. Then \({\rm Perf}(\mathcal{O}P)\cong{\rm Aut}(\mathcal{O}P)\times C_{2}\)._
Proof.: Since there is only one indecomposable projective module for \(\mathcal{O}P\), every perfect self-isometry of \(\mathcal{O}P\) must have all positive or all negative signs. Now every perfect self-isometry induces a permutation of \({\rm Irr}(P)\), which in turn induces an automorphism of \(Z(\mathcal{O}P)\cong\mathcal{O}P\), and the result follows.
Now consider the character table of \(A_{4}\). Let \(\omega\) be a primitive 3rd root of unity. We set up the labelling of characters:
\begin{tabular}{|c||c|c|c|c|}
\hline
 & () & (12)(34) & (123) & (132) \\
\hline
\(\chi_{1}\) & 1 & 1 & 1 & 1 \\
\(\chi_{2}\) & 1 & 1 & \(\omega\) & \(\omega^{2}\) \\
\(\chi_{3}\) & 1 & 1 & \(\omega^{2}\) & \(\omega\) \\
\(\chi_{4}\) & 3 & \(-1\) & 0 & 0 \\
\hline
\end{tabular}
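A check that we will use later: since \(1+\omega+\omega^{2}=0\), the table gives

\[(\chi_{1}+\chi_{2}+\chi_{3}+3\chi_{4})(g)=\begin{cases}12,&g=1,\\ 0,&g\neq 1,\end{cases}\]

that is, \(\chi_{1}+\chi_{2}+\chi_{3}+3\chi_{4}\) is the regular character of \(A_{4}\). This is the computation underlying the class function \(\frac{1}{12}\theta\otimes(\chi_{1}+\chi_{2}+\chi_{3}+3\chi_{4})\) appearing in the proof of Theorem 3.5 below.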
For the rest of the section we assume \(p=2\).
**Proposition 3.4**.: _The perfect self-isometries of \(\mathcal{O}A_{4}\) are precisely the isometries of the form:_
\[I_{\sigma,\epsilon}:\mathbb{Z}\operatorname{Irr}(A_{4}) \to\mathbb{Z}\operatorname{Irr}(A_{4})\] \[\chi_{j} \mapsto\epsilon\delta_{j}\delta_{\sigma(j)}\chi_{\sigma(j)}\]
_for \(1\leq j\leq 4\), where \(\sigma\in S_{4}\), \(\epsilon\in\{\pm 1\}\) and \(\delta_{1}=\delta_{2}=\delta_{3}=-\delta_{4}=1\). Hence \(\operatorname{Perf}(B_{0}(\mathcal{O}A_{5}))\cong\operatorname{Perf}(\mathcal{O}A_{4})\cong C_{2}\times S_{4}\)._
Proof.: This is [22, Proposition 2.8] together with the observation that, by [12, A1.3], \(\mathcal{O}A_{4}\) and \(B_{0}(\mathcal{O}A_{5})\) are perfectly isometric.
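For example, taking \(\sigma=(1\,4)\) and \(\epsilon=1\), the signs \(\delta_{j}\) give

\[I_{(1\,4),1}(\chi_{1})=-\chi_{4},\qquad I_{(1\,4),1}(\chi_{4})=-\chi_{1},\qquad I_{(1\,4),1}(\chi_{j})=\chi_{j}\quad(j=2,3),\]

so any perfect self-isometry interchanging \(\chi_{1}\) and \(\chi_{4}\) necessarily does so with a change of sign.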
**Theorem 3.5**.: _Let \(P\) be a finite abelian \(2\)-group. Every perfect self-isometry of \(\mathcal{O}(P\times A_{4})\) is of the form \((J,I_{\sigma,\epsilon})\), where \(J\) is a perfect isometry of \(\mathcal{O}P\) induced by an \(\mathcal{O}\)-algebra automorphism, \(\sigma\in S_{4}\) and \(\epsilon\in\{\pm 1\}\)._
Proof.: We proceed as in the proof of [22, Theorem 2.11]. The set of projective indecomposable characters (characters of projective indecomposable modules) is
\[\operatorname{prj}(\mathcal{O}(P\times A_{4}))=\{\chi_{P_{1}},\chi_{P_{2}}, \chi_{P_{3}}\},\text{ where }\chi_{P_{j}}:=\left(\sum_{\theta\in \operatorname{Irr}(P)}\theta\right)\otimes\left(\chi_{j}+\chi_{4}\right).\]
Let \(I\) be a perfect self-isometry of \(\mathcal{O}(P\times A_{4})\). Each \(I(\chi_{P_{u}})\) is an integer linear combination of projective indecomposable characters. By counting constituents we see that
\[I(\chi_{P_{u}})=\pm\chi_{P_{1}},\pm\chi_{P_{2}},\pm\chi_{P_{3}},\pm(\chi_{P_ {1}}-\chi_{P_{2}}),\pm(\chi_{P_{1}}-\chi_{P_{3}})\text{ or }\pm(\chi_{P_{2}}-\chi_{P_{3}}), \tag{2}\]
for \(1\leq u\leq 3\). Consider the set
\[X_{m}:=\left\{j:\left\langle\zeta\otimes\chi_{j},I\left(\left(\sum_{\theta\in \operatorname{Irr}(P)}\theta\right)\otimes\chi_{m}\right)\right\rangle\neq 0, \text{ for some }\zeta\in\operatorname{Irr}(P)\right\},\]
for \(1\leq m\leq 4\). By (2) we have shown that \(|X_{m}|=1\) or \(2\) for every \(1\leq m\leq 4\). If \(|X_{1}|=2\), then by considering (2) for \(u=1\) we see that \(X_{4}=X_{1}\). Similarly by considering \(I(\chi_{P_{2}})\), we get that \(X_{2}=X_{4}\). This is now a contradiction as then
\[I\left(\left(\sum_{\theta\in\operatorname{Irr}(P)}\theta\right)\otimes\left( \chi_{1}+\chi_{2}+\chi_{4}\right)\right)\]
has at most \(2|P|\) constituents with non-zero multiplicity. Therefore \(|X_{1}|=1\) and so by considering \(I(\chi_{P_{1}})\) we get that \(|X_{4}|=1\) and then by considering \(I(\chi_{P_{2}})\) and \(I(\chi_{P_{3}})\) we get that \(|X_{2}|=|X_{3}|=1\). Moreover, \(X_{1},X_{2},X_{3},X_{4}\) must all be disjoint. By composing \(I\) with the perfect isometry \((\operatorname{Id}_{\mathbb{Z}\operatorname{Irr}(P)},I_{\sigma,1})\), for some appropriately chosen \(\sigma\in S_{4}\), we may assume \(X_{m}=\{m\}\) for all \(1\leq m\leq 4\). Therefore \(I(\chi_{P_{u}})=\pm\chi_{P_{u}}\) for \(1\leq u\leq 3\) and by considering
\[I\left(\left(\sum_{\theta\in\operatorname{Irr}(P)}\theta\right)\otimes\chi_{4 }\right),\]
we see that in fact all these signs are the same and we may assume, after possibly composing \(I\) with \((\operatorname{Id}_{\mathbb{Z}\operatorname{Irr}(P)},I_{1,-1})\), that
\[I\left(\left(\sum_{\theta\in\operatorname{Irr}(P)}\theta\right)\otimes\chi_{m }\right)=\left(\sum_{\theta\in\operatorname{Irr}(P)}\theta\right)\otimes \chi_{m},\]
for \(1\leq m\leq 4\). Next we note that
\[\frac{1}{12}\theta\otimes(\chi_{1}+\chi_{2}+\chi_{3}+3\chi_{4})\in\operatorname {CF}(P\times A_{4},\mathcal{O}(P\times A_{4}),\mathcal{O}),\]
for each \(\theta\in\operatorname{Irr}(P)\). As \(3\) is invertible in \(\mathcal{O}\), this implies
\[\theta\otimes\left(\sum_{m=1}^{4}\delta_{m}\chi_{m}\right)\in 4 \operatorname{CF}(P\times A_{4},\mathcal{O}(P\times A_{4}),\mathcal{O}),\]
where \(\delta_{m}\) is defined as in Proposition 3.4, and so
\[I\left(\theta\otimes\left(\sum_{m=1}^{4}\delta_{m}\chi_{m}\right)\right)\in 4 \operatorname{CF}(P\times A_{4},\mathcal{O}(P\times A_{4}),\mathcal{O}), \tag{3}\]
for each \(\theta\in\operatorname{Irr}(P)\). Fixing for now \(\theta\in\operatorname{Irr}(P)\), define \(\theta_{m}\otimes\chi_{m}:=I(\theta\otimes\chi_{m})\), for \(1\leq m\leq 4\). Let \(x\in P\). Evaluating (3) at \((x,1)\), \((x,(123))\) and \((x,(132))\), gives
\[\theta_{1}(x)+\theta_{2}(x)+\theta_{3}(x)+\theta_{4}(x) \in 4\mathcal{O}, \tag{4}\] \[\theta_{1}(x)+\omega\theta_{2}(x)+\omega^{2}\theta_{3}(x) \in 4\mathcal{O},\] (5) \[\theta_{1}(x)+\omega^{2}\theta_{2}(x)+\omega\theta_{3}(x) \in 4\mathcal{O}. \tag{6}\]
Proceeding as in the proof of [22, Theorem 2.11] we have \(\theta_{1}(x)=\theta_{2}(x)=\theta_{3}(x)=\theta_{4}(x)\) for all \(x\in P\). In other words \(\theta_{1}=\theta_{2}=\theta_{3}=\theta_{4}\).
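For the reader's convenience, here is one way the cited step can be carried out (a sketch only). The values \(\theta_{m}(x)\) are roots of unity of \(2\)-power order, and for distinct such roots \(\zeta\neq\zeta^{\prime}\) one has \(v(\zeta-\zeta^{\prime})\leq v(2)<v(4)\), where \(v\) denotes the valuation of \(\mathcal{O}\); hence a congruence modulo \(4\mathcal{O}\) between two such values forces equality. Subtracting (6) from (5) gives \((\omega-\omega^{2})(\theta_{2}(x)-\theta_{3}(x))\in 4\mathcal{O}\), and \((\omega-\omega^{2})^{2}=-3\in\mathcal{O}^{\times}\), so \(\theta_{2}(x)=\theta_{3}(x)\). Then (5) reads \(\theta_{1}(x)-\theta_{2}(x)\in 4\mathcal{O}\), forcing \(\theta_{1}(x)=\theta_{2}(x)\), and finally (4) gives \(\theta_{4}(x)-\theta_{1}(x)\in 4\mathcal{O}\), so \(\theta_{4}(x)=\theta_{1}(x)\).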
We have shown that we may assume \(I\) is of the form
\[I(\theta\otimes\chi_{m})=J(\theta)\otimes\chi_{m}\]
for all \(\theta\in\operatorname{Irr}(P)\), where \(J\) is a permutation of \(\operatorname{Irr}(P)\). In particular the \(\mathcal{O}\)-algebra automorphism of \(Z(\mathcal{O}(P\times A_{4}))\) induced by \(I\) leaves \(\mathcal{O}P\) invariant. Therefore the permutation \(J\) of \(\operatorname{Irr}(P)\) must induce an automorphism of \(\mathcal{O}P\) and the theorem is proved.
We need two further technical results before we continue. Set
\[A =k[X,Y,Z]/(X^{2},Y^{2},Z^{2},XY,XZ,YZ)\cong Z(kA_{4}),\] \[A_{(m_{1},\ldots,m_{s})} =k[X_{1},\ldots,X_{s}]/(X_{1}^{2^{m_{1}}},\ldots,X_{s}^{2^{m_{s}}} )\cong k(C_{2^{m_{1}}}\times\cdots\times C_{2^{m_{s}}}),\]
for \(s\in\mathbb{N}\), \((m_{1},\ldots,m_{s})\in\mathbb{N}^{s}\).
**Lemma 3.6**.: _Let \(s,t\in\mathbb{N}\), \((m_{1},\ldots,m_{s})\in\mathbb{N}^{s}\), with \(m_{1}\geq\cdots\geq m_{s}\) and \((n_{1},\ldots,n_{t})\in\mathbb{N}^{t}\), with \(n_{1}\geq\cdots\geq n_{t}\). If \(A_{(m_{1},\ldots,m_{s})}\otimes_{k}A\cong A_{(n_{1},\ldots,n_{t})}\otimes_{k}A\), then \(s=t\) and \(m_{1}=n_{1},\ldots,m_{s}=n_{s}\)._
Proof.: Throughout this proof we use \(X_{1},\ldots,X_{s},X,Y,Z\) to denote the images of the elements of the same name in \(A_{(m_{1},\ldots,m_{s})}\otimes_{k}A\).
We first note that
\[\mathcal{B}:=\left\{\left(\prod_{i=1}^{s}X_{i}^{u_{i}}\right)X^{\epsilon_{X}}Y^{\epsilon_{Y}}Z^{\epsilon_{Z}}\;\middle|\;0\leq u_{i}<2^{m_{i}}\text{ for all }1\leq i\leq s,\ \epsilon_{X},\epsilon_{Y},\epsilon_{Z}\in\{0,1\},\text{ with }\epsilon_{X}+\epsilon_{Y}+\epsilon_{Z}\leq 1\right\}\]
forms a basis for \(A_{(m_{1},\ldots,m_{s})}\otimes_{k}A\) and, setting \(J:=J(A_{(m_{1},\ldots,m_{s})}\otimes_{k}A)\) (the Jacobson radical of \(A_{(m_{1},\ldots,m_{s})}\otimes_{k}A\)), we have that
\[\{X_{1}+J^{2},\ldots,X_{s}+J^{2},X+J^{2},Y+J^{2},Z+J^{2}\}\]
forms a basis of \(J/J^{2}\).
Suppose some \(W\in J\setminus J^{2}\) has image
\[\left(\sum_{i=1}^{s}\lambda_{i}X_{i}\right)+(\lambda_{X}X+\lambda_{Y}Y+ \lambda_{Z}Z)+J^{2}\]
in \(J/J^{2}\), for some \(\lambda_{i},\lambda_{X},\lambda_{Y},\lambda_{Z}\in k\). Then \(W\) has order of nilpotency at least \(2\). If \(\lambda_{i}\neq 0\) for some \(1\leq i\leq s\), then, by looking at the coefficients of powers of \(X_{i}\) with respect to the basis \(\mathcal{B}\), we can see that \(W\) has order of nilpotency at least \(2^{m_{i}}\).
Now let \(W_{1},\ldots,W_{s+3}\in J\) be such that \(\{W_{1}+J^{2},\ldots,W_{s+3}+J^{2}\}\) forms a basis of \(J/J^{2}\). We set \(o_{i}\) to be the order of nilpotency of \(W_{i}\), for \(1\leq i\leq s+3\). By reordering, we may assume that \(o_{1}\geq\cdots\geq o_{s+3}\). As a consequence of the previous paragraph \(o_{i}\geq 2^{m_{i}}\), for each \(1\leq i\leq s\) and \(o_{s+1},o_{s+2},o_{s+3}\geq 2\). Moreover, setting \(W_{i}=X_{i}\), for each \(1\leq i\leq s\) and \(W_{s+1}=X,W_{s+2}=Y,W_{s+3}=Z\), these lower bounds on the \(o_{i}\)'s can all be achieved. We have now shown that the tuple \((m_{1},\ldots,m_{s})\) can be retrieved from the isomorphism type of \(A_{(m_{1},\ldots,m_{s})}\otimes_{k}A\) and the result is proved.
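A minimal worked instance of the nilpotency computation above (with \(s=1\), \(m_{1}=2\), recalling that \(k\) has characteristic \(2\)): in \(A_{(2)}\otimes_{k}A\) take \(W=X_{1}+X\). Then

\[W^{2}=X_{1}^{2}+2X_{1}X+X^{2}=X_{1}^{2},\qquad W^{3}=X_{1}^{3}+X_{1}^{2}X\neq 0,\qquad W^{4}=X_{1}^{4}=0,\]

so \(W\) has order of nilpotency exactly \(4=2^{m_{1}}\), as predicted for an element whose image in \(J/J^{2}\) has \(\lambda_{1}\neq 0\).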
**Corollary 3.7**.: _Let \(P\) and \(Q\) be finite abelian \(2\)-groups. If \(\mathcal{O}(P\times A_{4})\) is perfectly isometric to \(\mathcal{O}(Q\times A_{4})\), then \(P\cong Q\)._
Proof.: Suppose \(\mathcal{O}(P\times A_{4})\) is perfectly isometric to \(\mathcal{O}(Q\times A_{4})\). Then certainly \(Z(k(P\times A_{4}))\cong Z(k(Q\times A_{4}))\). We may assume that \(P,Q\neq 1\). Note that \(Z(kA_{4})\cong A\), \(kP\cong A_{(m_{1},\ldots,m_{s})}\) and \(kQ\cong A_{(n_{1},\ldots,n_{t})}\), where \(P\cong C_{2^{m_{1}}}\times\cdots\times C_{2^{m_{s}}}\) and \(Q\cong C_{2^{n_{1}}}\times\cdots\times C_{2^{n_{t}}}\). By Lemma 3.6, we have \(s=t\) and \(m_{i}=n_{i}\) for all \(i\), and so \(P\cong Q\).
## 4 Normal subgroups of index 2
**Proposition 4.1** (Theorem 3.15 of [23]).: _Let \(G\) be a finite group and \(N\) a normal subgroup of \(G\) of index \(p\). Now let \(B\) be a block of \(\mathcal{O}G\) with abelian defect group \(D\) such that \(G=ND\). Then there exists a block \(b\) of \(\mathcal{O}N\) with the same block idempotent as \(B\) and defect group \(D\cap N\). Moreover there exists a \(G/N\)-graded unit \(a\in Z(B)\), in particular \(B=\bigoplus_{j=0}^{p-1}a^{j}b\)._
**Theorem 4.2**.: _Let \(G\), \(N\), \(B\), \(b\) and \(D\) be as in Proposition 4.1. Suppose further that \(D\cong P\times(C_{2})^{2}\), for some finite abelian \(2\)-group \(P\), \(D\cap N\cong Q\times(C_{2})^{2}\), for some subgroup \(Q\leq P\) of index \(2\), and that \(b\) has inertial quotient \(C_{3}\) and is Morita equivalent to the principal block of \(\mathcal{O}(Q\times A_{4})\) (respectively \(\mathcal{O}(Q\times A_{5})\)). Then \(B\) is Morita equivalent to the principal block of \(\mathcal{O}(P\times A_{4})\) (respectively \(\mathcal{O}(P\times A_{5})\))._
Proof.: We follow the proof of [22, Theorem 2.15].
Suppose that \(b\) is Morita equivalent to the principal block of \(\mathcal{O}(Q\times A_{4})\) or \(\mathcal{O}(Q\times A_{5})\) and has inertial quotient \(C_{3}\). By Proposition 2.1, \(D\) acts as inner automorphisms on \(b\), so every irreducible Brauer character of \(b\) is fixed under conjugation in \(G\). Since \(G/N\) is a cyclic \(2\)-group, each irreducible Brauer character extends uniquely to \(G\) and lies in \(B\) (the unique block of \(G\) covering \(b\)), so \(l(B)=l(b)=3\). By Proposition 2.5, \(B\) and \(b\) have isomorphic inertial quotients \((C_{3})_{1}\). Since \(l(B)\) is equal to the order of the inertial quotient, by the main result of [50] and its proof we have a perfect isometry
\[\mathbb{Z}\operatorname{Irr}(B)\to\mathbb{Z}\operatorname{Irr}(\mathcal{O}(P \times A_{4})).\]
Similarly there is a perfect isometry \(\mathbb{Z}\operatorname{Irr}(B_{0}(\mathcal{O}(P\times A_{5})))\to\mathbb{Z }\operatorname{Irr}(\mathcal{O}(P\times A_{4}))\).
Write \(\mathcal{A}\) for \(A_{4}\) or \(A_{5}\). From the above we have a perfect isometry
\[I:\mathbb{Z}\operatorname{Irr}(B)\to\mathbb{Z}\operatorname{Irr}(B_{0}( \mathcal{O}(P\times\mathcal{A}))),\]
for \(\mathcal{A}\) as according to the Morita equivalence class of \(b\).
Now \(I\) induces an isomorphism of groups \(\operatorname{Perf}(B)\cong\operatorname{Perf}(B_{0}(\mathcal{O}(P\times \mathcal{A})))\) via \(\beta\mapsto I\circ\beta\circ I^{-1}\) for \(\beta\) any perfect self-isometry of \(B\), and we denote this isomorphism by \(I_{\operatorname{PI}}\). Consider the perfect self-isometry
\[L:\mathbb{Z}\operatorname{Irr}(B) \to\mathbb{Z}\operatorname{Irr}(B)\] \[\chi \mapsto\operatorname{sgn}_{N}^{G}\cdot\chi,\]
where \(\operatorname{sgn}_{N}^{G}\) is the linear character of \(G\) with kernel \(N\). So for each \(\theta\in\operatorname{Irr}(b)\), \(L\) swaps the two extensions of \(\theta\) to \(G\). We know that \(L\) is indeed a perfect isometry as it is induced by the \(\mathcal{O}\)-algebra automorphism of \(\mathcal{O}G\) given by \(g\mapsto\operatorname{sgn}_{N}^{G}(g)g\) for all \(g\in G\).
Note that \(L\) is a perfect self-isometry of order \(2\) and that it induces the trivial \(k\)-algebra automorphism on \(Z(kB)\). Furthermore, since by Proposition 2.5 induction gives a bijection between \(\operatorname{prj}(b)\) and \(\operatorname{prj}(B)\), each element of \(\operatorname{prj}(B)\) is fixed under multiplication by \(\operatorname{sgn}_{N}^{G}\) and so \(L\) is the identity on \(\mathbb{Z}\operatorname{prj}(B)\). Therefore \(I_{\operatorname{PI}}(L)\) must be of order \(2\), induce the identity \(k\)-algebra automorphism on \(Z(B_{0}(k(P\times\mathcal{A})))\) and be the identity on \(\mathbb{Z}\operatorname{prj}(B_{0}(\mathcal{O}(P\times\mathcal{A})))\). We claim that \(I_{\operatorname{PI}}(L)\) is induced by multiplication by the sign character of \(G^{\prime}:=P\times\mathcal{A}\) with respect to the subgroup \(N^{\prime}:=R\times\mathcal{A}\leq G^{\prime}\), for some index \(2\) subgroup \(R\leq P\), with \(R\cong Q\).
We first deal with the \({\cal A}=A_{4}\) case. Adopting the notation of Theorem 3.5, set \(I_{\rm PI}(L)=(J,I_{\sigma,\epsilon})\), where \(J\) is a perfect self-isometry of \({\cal O}P\) induced by an \({\cal O}\)-algebra automorphism, \(\sigma\in S_{4}\) and \(\epsilon\in\{\pm 1\}\). Then the fact that \(I_{\rm PI}(L)\) is the identity on \(\mathbb{Z}{\rm prj}({\cal O}(P\times A_{4}))\) forces \(\sigma\) to be the identity permutation and \(\epsilon=1\). So \(J\) is induced by an element \(\alpha\in{\rm Aut}({\cal O}P)\) that has order 2 and induces the identity on \(kP\). Recall that \({\rm Aut}({\cal O}P)\cong{\rm Hom}(P,{\cal O}^{\times})\rtimes{\rm Aut}(P)\). The fact that \(\alpha\) induces the identity on \(kP\) forces \(\alpha\) to be given by multiplication by \(\lambda_{\alpha}\in{\rm Hom}(P,{\cal O}^{\times})\) of order 2. In other words \(\alpha\) is induced by multiplication by the sign character of \(P\) with respect to some normal subgroup \(R\) of index 2. Hence \(I_{\rm PI}(L)\) is induced by multiplication by the sign character of \(P\times A_{4}\) with respect to the subgroup \(N^{\prime}:=R\times A_{4}\leq G^{\prime}\). (Note that we do not know yet that \(R\cong Q\).)
We have now shown that
\[I({\rm sgn}_{N}^{G}\!\cdot\!\chi)={\rm sgn}_{N^{\prime}}^{G^{\prime}}\!\cdot\! I(\chi),\]
for all \(\chi\in{\rm Irr}(B)\). By Proposition 3.1, \(b\) is then perfectly isometric to \({\cal O}N^{\prime}\). However, \(b\) is Morita equivalent to \(B_{0}({\cal O}(Q\times A_{4}))\) and so Corollary 3.7 implies that \(R\cong Q\) as desired.
For the \({\cal A}=A_{5}\) case we fix a perfect isometry \(I_{A}:\mathbb{Z}\,{\rm Irr}(B_{0}({\cal O}A_{5}))\to\mathbb{Z}\,{\rm Irr}({ \cal O}A_{4})\). As above, we can then show that
\[({\rm Id}_{\mathbb{Z}\,{\rm Irr}(P)},I_{A})\circ I_{\rm PI}(L)\circ({\rm Id}_ {\mathbb{Z}\,{\rm Irr}(P)},I_{A})^{-1}:\mathbb{Z}\,{\rm Irr}({\cal O}(P\times A _{4}))\to\mathbb{Z}\,{\rm Irr}({\cal O}(P\times A_{4}))\]
is induced by multiplication by the sign character of an appropriate subgroup. That \(I_{\rm PI}(L)\) is of the desired form now follows immediately.
Composing the perfect isometry induced by the Morita equivalence between \(b\) and \(B_{0}({\cal O}(Q\times{\cal A}))\) with that given by the isomorphism between \(Q\times{\cal A}\) and \(N^{\prime}\), we obtain a perfect isometry \(I_{\rm Mor}:\mathbb{Z}\,{\rm Irr}(b)\to\mathbb{Z}\,{\rm Irr}(B_{0}({\cal O}N ^{\prime}))\).
Denote by \(I_{N,N^{\prime}}\) the perfect isometry \(\mathbb{Z}\,{\rm Irr}(b)\to\mathbb{Z}\,{\rm Irr}(B_{0}({\cal O}N^{\prime}))\) induced by \(I\) as in Proposition 3.1.
Write \(I_{N,N^{\prime}}\circ I_{\rm Mor}^{-1}=(J^{\prime},I_{\tau,\delta})\) in the notation of Theorem 3.5 applied to \(B_{0}({\cal O}(R\times{\cal A}))\), where \(J^{\prime}\) is a perfect self-isometry of \({\cal O}R\) induced by an \({\cal O}\)-algebra automorphism \(\alpha^{\prime}\), \(\tau\in S_{4}\) and \(\delta\in\{\pm 1\}\). By post-composing \(I\) with the perfect self-isometry \(({\rm Id}_{\mathbb{Z}\,{\rm Irr}(P)},I_{\tau,\delta})^{-1}\) of \(B_{0}({\cal O}G^{\prime})\) and post-composing the Morita equivalence \(b\sim_{\rm Mor}B_{0}({\cal O}N^{\prime})\) with that induced by \(\alpha^{\prime}\otimes{\rm Id}_{B_{0}({\cal O}{\cal A})}\), we may assume that \(I_{N,N^{\prime}}=I_{\rm Mor}\).
Let \(\phi_{I}:Z(B)\to Z(B_{0}({\cal O}G^{\prime}))\) be the isomorphism of centres induced by \(I\) as in Section 3 and let \(M\) be the \(B_{0}({\cal O}N^{\prime})\)-\(b\)-bimodule inducing the above Morita equivalence \(b\sim_{\rm Mor}B_{0}({\cal O}N^{\prime})\). Since \(I_{N,N^{\prime}}=I_{\rm Mor}\), by Proposition 3.1 we have that \(\phi_{I}|_{Z(b)}=\phi_{I_{N,N^{\prime}}}:Z(b)\to Z(B_{0}({\cal O}N^{\prime}))\) is the isomorphism of centres induced by the Morita equivalence. In other words,
\[\phi_{I}({\sf b})m=m{\sf b},\mbox{ for all }{\sf b}\in b,m\in M. \tag{7}\]
Let \(a\in B\) be a graded unit as described in Proposition 4.1 and set \(a^{\prime}:=\phi_{I}(a)\). Since \(\phi_{I}\) respects the \(G/N\) and \(G^{\prime}/N^{\prime}\)-gradings, \(a^{\prime}\) is also a graded unit. We now give \(M\) the structure of a module for
\[(B_{0}({\cal O}N^{\prime})\otimes_{\cal O}b^{\rm op})\oplus(a^{\prime-1}B_{0}({ \cal O}N^{\prime})\otimes_{\cal O}(ab)^{\rm op})\]
by defining \(a^{\prime-1}.m.a=m\), for all \(m\in M\), where (7) ensures that this does indeed define a module. Now by [40, Theorem 3.4] we have proved that \(B\) is Morita equivalent to \(B_{0}(\mathcal{O}(P\times\mathcal{A}))\).
## 5 Extensions of blocks
In this section we give the possible Morita equivalence classes of blocks covering a block of a normal subgroup in some relevant Morita equivalence classes.
Let \(G\) be a finite group and \(N\lhd G\). Let \(b\) be a \(G\)-stable block of \(\mathcal{O}N\) covered by a block \(B\) of \(G\) with abelian defect group \(D\). Then \(b\) has defect group \(Q=D\cap N\). Let \((D,B_{D})\) be a maximal \(B\)-subpair.
We first extract and summarize two results of [51] and [52].
The following is a weaker version of Theorem 5.10 of [51] that is sufficient for our purposes. We use it in some cases where there is a normal subgroup of index \(p\).
**Theorem 5.1** ([51]).: _With the notation above, let \(E=N_{G}(D,B_{D})/C_{G}(D)\) and suppose that \(E\) is cyclic and acts freely on \([N_{G}(D,B_{D}),D]\setminus\{1\}\) (that is, all orbits have length \(|E|\)). Suppose that \(b\) is inertial, i.e., there is a basic Morita equivalence with \(\mathcal{O}(Q\rtimes E)\). Then \(B\) is Morita equivalent to \(\mathcal{O}(D\rtimes E)\)._
The following is the main result of [52], and is particularly relevant to the case that \(b\) is a nilpotent covered block.
**Theorem 5.2** ([52]).: _Suppose that \(N\) has \(p^{\prime}\)-index. If \(b\) is inertial, then \(B\) is inertial._
We now apply Kulshammer's analysis in [34] of the situation of a normal subgroup containing the defect groups of a block, which involves the study of crossed products of a basic algebra with a \(p^{\prime}\)-group. In the general setting he finds finiteness results for the possible crossed products, but in our situation we are able to precisely describe the possibilities.
Background on crossed products may be found in [34], but we summarize what we need here. Let \(X\) be a finite group and \(R\) an \(\mathcal{O}\)-algebra. A crossed product of \(R\) with \(X\) is an \(X\)-graded algebra \(\Lambda\) with identity component \(\Lambda_{1}=R\) such that each graded component \(\Lambda_{x}\), where \(x\in X\), contains a unit \(u_{x}\). Given a choice of unit \(u_{x}\) for each \(x\), we have maps \(\alpha:X\to\operatorname{Aut}(R)\) given by conjugation by \(u_{x}\) and \(\mu:X\times X\to U(R)\) given by \(\alpha_{x}\circ\alpha_{y}=\iota_{\mu(x,y)}\circ\alpha_{xy}\), where \(U(R)\) is the group of units of \(R\) and \(\iota_{\mu(x,y)}\) is conjugation by \(\mu(x,y)\). The pair \((\alpha,\mu)\) is called a parameter set of \(X\) in \(R\). In [34] an isomorphism of crossed products respecting the grading is called a weak equivalence. By the discussion following Proposition 2 of [34] weak isomorphism classes of crossed products of \(R\) with \(X\) are in bijection with pairs consisting of an \(\operatorname{Out}(R)\)-conjugacy class of homomorphisms \(X\to\operatorname{Out}(R)\) for which the induced element in \(H^{3}(X,U(Z(R)))\) vanishes, and an element of \(H^{2}(X,U(Z(R)))\).
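The basic example to keep in mind: for \(N\lhd G\) with \(X=G/N\), the group algebra \(\mathcal{O}G=\bigoplus_{x\in X}\mathcal{O}N\dot{x}\) (where \(\dot{x}\in G\) is any lift of \(x\)) is a crossed product of \(\mathcal{O}N\) with \(X\), the lifts \(\dot{x}\) serving as the graded units \(u_{x}\).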
We adapt the Proposition in Section 3 of [34], and include a proof for completeness (as given in [18]). Note that \(\alpha:X\to\operatorname{Aut}(R)\) restricts to a map \(X\to\operatorname{Aut}(Z(R))\). Hence we also have homomorphisms \(X\to\operatorname{Aut}(Z(R)/J(Z(R)))\) and \(X\to\operatorname{Aut}(U(Z(R)/J(Z(R))))\).
**Lemma 5.3**.: _Suppose that \(X=\langle x\rangle\) is a cyclic \(p^{\prime}\)-group. Then \(H^{i}(X,U(Z(R)))=0\) for each \(i\geq 1\)._
Proof.: Let \(i\in\mathbb{N}\). Following [34], \(U(Z(R))\cong U(Z(R)/J(Z(R)))\times(1+J(Z(R)))\) and \(H^{i}(X,U(Z(R)))\cong H^{i}(X,U(Z(R)/J(Z(R))))\times H^{i}(X,1+J(Z(R)))\). We have \(H^{i}(X,1+J(Z(R)))=0\) since \(X\) is a \(p^{\prime}\)-group. Now \(Z(R)/J(Z(R))\) is a commutative semisimple \(k\)-algebra, which we denote by \(A\), and note as above we have a homomorphism \(X\to\operatorname{Aut}(A)\). Write \(A=A_{1}\times\cdots\times A_{r}\), where each \(A_{j}\) is a product of simple algebras constituting an \(X\)-orbit. We have \(H^{i}(X,U(A))\cong H^{i}(X,U(A_{1}))\times\cdots\times H^{i}(X,U(A_{r}))\). Now each \(H^{i}(X,U(A_{j}))\) vanishes, for as a \(kX\)-module \(A_{j}\) is induced from the trivial module of \(kY\) for some \(Y\leq X\), and so by Shapiro's Lemma \(H^{i}(X,U(A_{j}))\cong H^{i}(Y,k^{\times})\) (see [8, 2.8.4]), which vanishes since \(X\) is cyclic. Hence \(H^{i}(X,U(Z(R)))=0\) for each \(i\).
We apply the above with \(X=G/N=\langle x\rangle\), where \(G/N\) is a \(p^{\prime}\)-group. Let \(f\) be an idempotent of \(b\) such that \(fbf\) is a basic algebra for \(b\). By [3, Lemma 4.2], \(fBf\) is a crossed product of \(fbf\) with \(X\) and \(fBf\) is Morita equivalent to \(B\). Hence we may take \(R=fbf\) in the above. By Lemma 5.3 weak isomorphism classes of crossed products of \(fbf\) with \(X\) are in bijection with \(\operatorname{Out}(fbf)\)-conjugacy classes of homomorphisms \(X\to\operatorname{Out}(fbf)\). Note however, that such crossed products may be isomorphic as algebras but not weakly isomorphic as crossed products. Indeed, given \(\alpha:X\to\operatorname{Out}(fbf)\), the same algebra gives rise to parameter sets associated to \(\alpha\circ\varphi\) for each \(\varphi\in\operatorname{Aut}(X)\).
Now \(\operatorname{Out}(fbf)\) embeds in \(\operatorname{Pic}(fbf)\cong\operatorname{Pic}(b)\) and since \(fbf\) is a basic algebra \(\operatorname{Out}(fbf)\cong\operatorname{Pic}(fbf)\), so we may apply the descriptions of Picard groups in Proposition 2.9. The strategy will be to limit the number of possible Morita equivalence classes for \(B\) (given \(b\)), and to identify examples where all such Morita equivalence classes are realised.
A special case that will arise frequently, and that demonstrates the phenomenon of crossed products isomorphic as algebras but not weakly isomorphic as crossed products, is the following:
**Lemma 5.4**.: _With the notation above, suppose \(G/N\) has prime order \(r\) different to \(p\) and that \(\operatorname{Out}(fbf)\) has cyclic Sylow \(r\)-subgroups of order \(r\). Then there are precisely two possibilities for the Morita equivalence class of \(B\), one of which is that \(B\) is source algebra equivalent to \(b\)._
Proof.: The trivial homomorphism \(G/N\to\operatorname{Out}(fbf)\) corresponds to the case that \(B\) is source algebra equivalent to \(b\) by Proposition 2.3, since \(G=G[b]\). Consider nontrivial \(\alpha:G/N\to\operatorname{Out}(fbf)\), and consider a block \(B\) such that \(fBf\) is a crossed product of \(fbf\) with \(G/N\) corresponding to \(\alpha\). Then for each \(\varphi\in\operatorname{Aut}(G/N)\) we have that \(\alpha\circ\varphi\) also realises \(fBf\) as a crossed product. This accounts for all possible homomorphisms \(\alpha:G/N\to\operatorname{Out}(fbf)\).
**Example 5.5**.: As observed in [3], there are examples of nilpotent blocks covering nonnilpotent blocks constructed in [46, Remark 4.4] that will be useful in the arguments that follow. Let \(P\) be an abelian \(2\)-group on which \(C_{r}\) acts regularly for a prime \(r\). Define \(N=(P\rtimes C_{r})\times C_{r}\) with Sylow \(r\)-subgroup \(H\). Note that \(N\) has \(r\) \(2\)-blocks, corresponding to \(\operatorname{Irr}(Z(N))\). There is a group \(T\cong C_{r}\) acting on \(N\) fixing the elements of \(Z(N)\) and \(H/Z(N)\). Define \(G=N\rtimes T\). Then for each nontrivial element of \(\operatorname{Irr}(Z(N))\), the corresponding block of \(N\) is covered by a nilpotent block of \(G\) (with defect group \(P\)), while these blocks of \(N\) are Morita equivalent to \(\mathcal{O}(P\rtimes C_{r})\). We in particular require the cases \(N=G_{n}=(C_{2^{n}})^{2}\rtimes C_{3}\) and \(N=(C_{2^{n}})^{3}\rtimes C_{7}\).
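To make this concrete in the smallest case (an illustration only; the explicit choice of parameters is ours): take \(r=3\) and \(P=(C_{2})^{2}\), on which \(C_{3}\) acts regularly. Then \(N=A_{4}\times C_{3}\), whose three \(2\)-blocks are each isomorphic to \(\mathcal{O}A_{4}\) as \(\mathcal{O}\)-algebras, and \(G=N\rtimes T\) has order \(108\). The two blocks of \(G\) lying over the non-trivial characters of \(Z(N)\cong C_{3}\) are nilpotent with defect group \((C_{2})^{2}\), while the blocks of \(N\) they cover are Morita equivalent to \(\mathcal{O}A_{4}=\mathcal{O}((C_{2})^{2}\rtimes C_{3})\).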
**Remark 5.6**.: In the crossed product construction, we are considering homomorphisms \(G/N\to\operatorname{Pic}(b)\) induced by conjugation by elements of \(G\). Conjugation on \(\mathcal{O}N\) affords a permutation module, and so conjugation on an invariant summand (in our case \(b\)) affords a trivial source module. It follows that actually we have \(G/N\to\mathcal{T}(b)\). In general this is a less useful observation than we might hope, as \(\mathcal{T}(b)\) is not (known to be) preserved under Morita equivalence. However, it does allow us to keep some control over inertial quotients as we move from \(b\) to \(B\). We will use this observation in the following proofs.
We now apply the above to every extension of a Morita equivalence from a normal subgroup of odd prime index that we will need.
**Proposition 5.7**.: _Let \(G\) be a finite group and \(N\lhd G\) with \([G:N]=r\), where \(r\) is an odd prime. Let \(B\) be a \(2\)-block of \(\mathcal{O}G\) covering a \(G\)-stable block of \(b\) of \(\mathcal{O}N\)._
1. _Suppose that_ \(b\) _is Morita equivalent to_ \(\mathcal{O}(((C_{2^{n}})^{3}\rtimes C_{7})\times P)\) _where_ \(n\in\mathbb{N}\) _and_ \(P\) _is a cyclic_ \(2\)_-group. Suppose_ \(r=3\)_. Then_ \(B\) _is either source algebra equivalent to_ \(b\) _or Morita equivalent to_ \(\mathcal{O}(((C_{2^{n}})^{3}\rtimes(C_{7}\rtimes C_{3}))\times P)\)_. Further, if_ \(b\) _is known to have inertial quotient_ \(C_{7}\)_, then in the latter case_ \(B\) _has inertial quotient_ \(C_{7}\rtimes C_{3}\)_. Suppose_ \(r=7\)_. Then_ \(B\) _is either source algebra equivalent to_ \(b\) _or is nilpotent. If_ \(r\) _is an odd prime other than_ \(3\) _and_ \(7\)_, then_ \(B\) _is source algebra equivalent to_ \(b\)_._
2. _Suppose that_ \(b\) _is Morita equivalent to_ \(\mathcal{O}(G_{m}\times P)\)_, where_ \(m\in\mathbb{N}\) _and_ \(P\) _is an abelian_ \(2\)_-group with rank at most_ \(2\)_. Let_ \(D\) _be the defect group of_ \(b\)_. If_ \(r\neq 3\)_, then_ \(B\) _is source algebra equivalent to_ \(b\)_. If_ \(r=3\) _and_ \(P\cong(C_{2^{n}})^{2}\) _for some_ \(n\in\mathbb{N}\)_, then_ \(B\) _is either nilpotent, source algebra equivalent to_ \(b\)_, or Morita equivalent to_ \(\mathcal{O}(G_{m}\times G_{n})\) _or a nonprincipal block of_ \(\mathcal{O}(D\rtimes 3_{+}^{1+2})\)_. Further, if_ \(b\) _is known to have inertial quotient_ \((C_{3})_{1}\)_, then in the latter two cases the inertial quotient of_ \(B\) _is_ \(C_{3}\times C_{3}\)_. If_ \(r=3\) _and_ \(P\) _is cyclic or a product of two cyclic groups of different orders, then_ \(B\) _is either nilpotent or source algebra equivalent to_ \(b\)_._
3. _Suppose that_ \(b\) _is Morita equivalent to_ \(B_{0}(\mathcal{O}(A_{5}\times P))\) _where_ \(P\) _is an abelian_ \(2\)_-group with rank at most_ \(2\)_. If_ \(P\) _is cyclic or a product of two cyclic groups of different orders, then_ \(B\) _is source algebra equivalent to_ \(b\)_. If_ \(P\cong(C_{2^{n}})^{2}\) _for some_ \(n\in\mathbb{N}\)_, then_ \(B\) _is either source algebra equivalent to_ \(b\) _or Morita equivalent to_ \(B_{0}(\mathcal{O}(A_{5}\times G_{n}))\)_, with the latter case only occurring when_ \(r=3\)_._
4. _Suppose that_ \(b\) _is Morita equivalent to_ \(B_{0}(\mathcal{O}(A_{5}\times G_{n}))\)_. Then_ \(B\) _is Morita equivalent to_ \(B_{0}(\mathcal{O}(A_{5}\times(C_{2^{n}})^{2}))\) _or source algebra equivalent to_ \(b\)_, with the former case only occurring when_ \(r=3\)
Proof.: (i) By Proposition 2.9, \(\operatorname{Pic}(b)\cong(C_{7}\rtimes C_{3})\times(C_{2^{n}}\rtimes\operatorname{Aut}(C_{2^{n}}))\). By Lemma 5.4, for both \(r=3\) and \(r=7\) there are two possibilities for the Morita equivalence class of \(B\), and one is that \(B\) is source algebra equivalent to \(b\). For \(r=7\) the second is that \(B\) is nilpotent as realised in Example 5.5. For \(r=3\), the second case is realised by the group given in the statement. For all other odd primes, since they do not divide the order of the Picard group, \(B\) is source algebra equivalent to \(b\). It remains to prove the statement regarding inertial quotients. Suppose that \(b\) is known to have inertial quotient \(C_{7}\) and that \(r=3\). It follows from Lemma 2.6 that \(B\) has inertial quotient \(C_{7}\) or \(C_{7}\rtimes C_{3}\). By Remark 5.6 we observe that we may consider elements of \(\mathcal{T}(b)\). By the description of \(\mathcal{T}(b)\) in Section 2.5 we know that \(\mathcal{T}(b)\) maps to a subgroup of \(\operatorname{Out}(D,\mathcal{F})\cong C_{3}\). If this subgroup is trivial, then we may not construct any nontrivial homomorphism \(G/N\to\mathcal{T}(b)\) and we are done. Hence suppose the subgroup has order three. Then the action of \(G\) induces an element of order three in \(\operatorname{Aut}(D,\mathcal{F})\), so that \(B\) must have inertial quotient \(C_{7}\rtimes C_{3}\).
(ii) By Proposition 2.9, \(\operatorname{Pic}(b)\cong S_{3}\times(P\rtimes\operatorname{Aut}(P))\). If \(P\) is cyclic or a product of two cyclic groups of different orders, then \(\operatorname{Aut}(P)\) is a 2-group, and so by Lemma 5.4 \(B\) is either Morita equivalent to \(b\) or is nilpotent, with this case realised as in Example 5.5. Suppose that \(P\cong(C_{2^{n}})^{2}\) for some \(n\in\mathbb{N}\). Then \(\operatorname{Pic}(b)\cong S_{3}\times(P\rtimes S_{3})\). There are four conjugacy classes of homomorphisms \(G/N\to\operatorname{Pic}(b)\), and so at most four possibilities for the Morita equivalence class of \(B\). These are accounted for by the four cases listed in the statement, with the cases that \(B\) is Morita equivalent to \(b\) or \(\mathcal{O}(G_{m}\times G_{n})\) requiring no further explanation. The case that \(B\) is nilpotent is again realised as in Example 5.5. Note that \(D\rtimes 3_{+}^{1+2}\) has a normal subgroup \(M\) of index 3 isomorphic to \(N\times C_{3}\). Now \(B\) covers a nonprincipal block of \(\mathcal{O}M\), which is Morita equivalent to \(\mathcal{O}N\). Hence the final case is realised. It remains to prove the statement regarding inertial quotients. Suppose that \(b\) is known to have inertial quotient \((C_{3})_{1}\). It follows from Lemma 2.6 that \(B\) has inertial quotient \((C_{3})_{1}\) or \(C_{3}\times C_{3}\). As above, by Remark 5.6 we observe that we may consider elements of \(\mathcal{T}(b)\). By the description of \(\mathcal{T}(b)\) in Section 2.5 we know that \(\mathcal{T}(b)\) maps to a subgroup of \(\operatorname{Out}(D,\mathcal{F})\cong C_{3}\). If this subgroup is trivial, then we may not construct any nontrivial homomorphism \(G/N\to\mathcal{T}(b)\) and we are done. Hence suppose the subgroup has order three. Then the action of \(G\) induces an element of order three in \(\operatorname{Aut}(D,\mathcal{F})\), so that \(B\) must have inertial quotient \(C_{3}\times C_{3}\).
(iii) By Proposition 2.9, \(\operatorname{Pic}(b)\cong C_{2}\times(P\rtimes\operatorname{Aut}(P))\). If \(P\) is cyclic or a product of two cyclic groups of different orders, then \(\operatorname{Pic}(b)\) is a 2-group, and so \(B\) is Morita equivalent to \(b\). Suppose that \(P\cong(C_{2^{n}})^{2}\) for some \(n\in\mathbb{N}\). Then \(\operatorname{Pic}(b)\cong C_{2}\times(P\rtimes S_{3})\). Hence by Lemma 5.4 the result follows.
(iv) By Proposition 2.9, \(\operatorname{Pic}(b)\cong C_{2}\times S_{3}\). By Lemma 5.4 there are two possibilities for the Morita equivalence class of \(B\), one of which is the class containing \(b\). The case \(B_{0}(\mathcal{O}(A_{5}\times(C_{2^{n}})^{2}))\) is realised by taking a product of \(A_{5}\) with a group as in Example 5.5.
**Remark 5.8**.: Since (a) source algebra equivalence preserves the inertial quotient, and (b) for abelian defect groups a block is nilpotent if and only if the inertial quotient is trivial, we have shown that if the inertial quotient of \(b\) is isomorphic to that of the given Morita equivalence class representative, then \(B\) also has inertial quotient isomorphic to that of the given Morita equivalence class representative.
Putting the reduction techniques of this section together, we obtain the following, stated in a form directly applicable to the proof of Theorem 1.1. Note that we do not need to consider all forms for \(b\) here: only those that will arise later.
**Proposition 5.9**.: _Let \(G\) be a finite group and \(N\lhd G\) with \(G/N\) solvable. Let \(B\) be a quasiprimitive \(2\)-block of \(\mathcal{O}G\) with abelian defect group \(D\) of rank at most \(4\). Suppose that \(B\) covers a block \(b\) of \(\mathcal{O}N\)._
1. _If_ \(b\) _is inertial with inertial quotient_ \(C_{7}\)_, then_ \(B\) _is Morita equivalent to_ \(\mathcal{O}D\)_,_ \(\mathcal{O}(D\rtimes C_{7})\)_,_ \(\mathcal{O}(D\rtimes(C_{7}\rtimes C_{3}))\) _or_ \(\mathcal{O}(G_{n}\times P)\) _for some_ \(n\) _and some abelian_ \(2\)_-group_ \(P\)_, and_ \(B\) _has inertial quotient_ \(1\)_,_ \(C_{7}\)_,_ \(C_{7}\rtimes C_{3}\) _or_ \((C_{3})_{1}\) _respectively._
2. _If_ \(b\) _is inertial with inertial quotient_ \(C_{3}\times C_{3}\)_, then_ \(B\) _is inertial with inertial quotient_ \(1\)_,_ \((C_{3})_{1}\) _or_ \(C_{3}\times C_{3}\)_._
3. _If_ \(b\) _is inertial with inertial quotient_ \((C_{3})_{2}\)_,_ \(C_{5}\) _or_ \(C_{15}\)_, then_ \(B\) _is inertial with inertial quotient_ \(1\)_,_ \((C_{3})_{2}\)_,_ \(C_{5}\) _or_ \(C_{15}\)_._
4. _If_ \(b\) _is Morita equivalent to a block with normal defect group and has inertial quotient_ \((C_{3})_{1}\)_, then_ \(B\) _is Morita equivalent to one of:_
   1. \(\mathcal{O}D\)_;_
   2. \(\mathcal{O}(G_{n}\times P)\)_, where_ \(D\cong(C_{2^{n}})^{2}\times P\)_, and_ \(B\) _has inertial quotient_ \((C_{3})_{1}\)_;_
   3. \(\mathcal{O}(G_{n_{1}}\times G_{n_{2}})\)_, where_ \(D\cong(C_{2^{n_{1}}})^{2}\times(C_{2^{n_{2}}})^{2}\)_, and_ \(B\) _has inertial quotient_ \(C_{3}\times C_{3}\)_;_
   4. _a non-principal block of_ \(\mathcal{O}(D\rtimes 3_{+}^{1+2})\)_, and_ \(B\) _has inertial quotient_ \(C_{3}\times C_{3}\)_._
5. _If_ \(b\) _is Morita equivalent to_ \(B_{0}(\mathcal{O}(A_{5}\times Q))\) _for some_ \(Q\leq D\) _and has inertial quotient of type_ \((C_{3})_{1}\)_, then_ \(B\) _is Morita equivalent to one of:_
   1. \(B_{0}(\mathcal{O}(A_{5}\times P))\)_, where_ \(D\cong(C_{2})^{2}\times P\)_, and_ \(B\) _has inertial quotient_ \((C_{3})_{1}\)_;_
   2. \(B_{0}(\mathcal{O}(A_{5}\times G_{n}))\)_, where_ \(D\cong(C_{2})^{2}\times(C_{2^{n}})^{2}\)_, and_ \(B\) _has inertial quotient_ \(C_{3}\times C_{3}\)_._
6. _If_ \(b\) _is Morita equivalent to_ \(B_{0}(\mathcal{O}(A_{5}\times G_{n}))\) _for some_ \(n\geq 1\)_, then_ \(B\) _is Morita equivalent to_ \(b\) _or_ \(B_{0}(\mathcal{O}(A_{5}\times(C_{2^{n}})^{2}))\)_, where_ \(D\cong(C_{2})^{2}\times(C_{2^{n}})^{2}\)_, and_ \(B\) _has inertial quotient_ \(C_{3}\times C_{3}\) _or_ \((C_{3})_{1}\) _respectively._
Proof.: Since \(b\) is \(G\)-stable and \(G/N\) is solvable, it follows from Lemma 2.2 that \(DN/N\) is an abelian Sylow \(2\)-subgroup of \(G/N\), which must then have \(2\)-length at most one. By Proposition 2.1, \(D\leq G[b]\), which must also have \(2\)-length at most one. Hence there are normal subgroups \(N_{i}\) of \(G\) such that \(N=N_{0}\lhd N_{1}\lhd N_{2}\lhd N_{3}=G[b]\) with \(N_{1}/N_{0}\), \(N_{3}/N_{2}\) of odd order and \(N_{2}/N_{1}\) a \(2\)-group, so in particular \(N_{2}=N_{1}D\). Write \(b_{i}\) for the unique block of \(\mathcal{O}N_{i}\) covered by \(B\). Let \((D,(b_{2})_{D})\) be a \(b_{2}\)-subpair. By Proposition 2.4, \(D\cong[D,N_{N_{2}}(D,(b_{2})_{D})]\times C_{D}(N_{N_{2}}(D,(b_{2})_{D}))\) with \([D,N_{N_{2}}(D,(b_{2})_{D})]\leq N_{1}\) (and so \([D,N_{N_{2}}(D,(b_{2})_{D})]\leq N\)).
By Proposition 2.3, \(b\) is source algebra equivalent to \(b_{1}\), with the same inertial quotient as \(b\). If furthermore \(b\) is inertial, then \(b_{1}\) is also inertial.
By Proposition 2.5, \(b_{2}\) has inertial quotient isomorphic to that of \(b\), with the same action (this last statement uses the fact that in this case there is a unique action given the isomorphism type of the inertial quotient and the order of the commutator part \([D,N_{N_{2}}(D,(b_{2})_{D})]\) of \(D\)).
(i) Suppose that \(b\), and so \(b_{2}\), has inertial quotient \(C_{7}\) and that \(b\) is inertial. By Theorem 5.1, \(b_{2}\) is Morita equivalent (but not necessarily basic Morita equivalent) to \(\mathcal{O}(D\rtimes C_{7})\). Note that \(C_{D}(N_{N_{2}}(D,(b_{2})_{D}))\) is cyclic. By Proposition 2.9, \(\operatorname{Pic}(b_{2})\cong(C_{7}\rtimes C_{3})\times\big(C_{D}(N_{N_{2}}(D,(b_{2})_{D}))\rtimes\operatorname{Aut}(C_{D}(N_{N_{2}}(D,(b_{2})_{D})))\big)\). Now consider \(G[b_{2}]\). Again applying Proposition 2.3, \(b_{2}\) is source algebra equivalent to the unique block \(c\) of \(G[b_{2}]\) covered by \(B\). Since \(G/G[b_{2}]\) is an odd order subgroup of \(\operatorname{Pic}(b_{2})\), it is isomorphic to a subgroup of \(C_{7}\rtimes C_{3}\). Suppose that \(G/G[b_{2}]\cong C_{7}\). By Proposition 5.7 either \(B\) is nilpotent or it is Morita equivalent to \(c\) with inertial quotient \(C_{7}\). If \(G/G[b_{2}]\cong C_{3}\), then again by Proposition 5.7 \(B\) is Morita equivalent to \(c\) or \(\mathcal{O}(D\rtimes(C_{7}\rtimes C_{3}))\), with the appropriate inertial quotient. Suppose \(G/G[b_{2}]\cong C_{7}\rtimes C_{3}\). Write \(G_{1}\) for the preimage of \(O_{7}(G/G[b_{2}])\) in \(G\) and \(B_{1}\) for the unique block of \(G_{1}\) covered by \(B\). Then by Proposition 5.7 \(B_{1}\) is either nilpotent or Morita equivalent to \(c\). Applying Proposition 5.7 again, in the first case \(B\) is either nilpotent or Morita equivalent to a block with normal defect group and inertial quotient \((C_{3})_{1}\). In the second, \(B\) is either Morita equivalent to \(c\) or to \(\mathcal{O}(D\rtimes(C_{7}\rtimes C_{3}))\). In each of these cases by Proposition 5.7 the inertial quotient is as stated.
(ii) and (iii) Suppose that \(b\), and so \(b_{2}\), has inertial quotient \((C_{3})_{2}\), \(C_{5}\), \(C_{15}\) or \(C_{3}\times C_{3}\), and that \(b\) is inertial. We have \(C_{D}(N_{N_{2}}(D,(b_{2})_{D}))=1\), so \(D=[D,N_{N_{2}}(D,(b_{2})_{D})]\leq N_{1}\), i.e., \(N_{1}=N_{2}\) and \(G/N\) has odd order. The result then follows from Theorem 5.2.
(iv) Suppose that \(b\), and so \(b_{1}\), \(b_{2}\), has inertial quotient \((C_{3})_{1}\). Hence \(b_{1}\) is Morita equivalent to \(\mathcal{O}(G_{n}\times Q)\) for some \(n\) and \(Q\leq D\). Recalling that \([D,N_{N_{2}}(D,(b_{2})_{D})]\leq N_{1}\), by Theorem 4.2 \(b_{2}\) is Morita equivalent to \(\mathcal{O}(G_{n}\times P)\), where \(D\cong(C_{2^{n}})^{2}\times P\). Applying Proposition 2.3, \(b_{2}\) is source algebra equivalent to the unique block \(c\) of \(G[b_{2}]\) covered by \(B\). By Proposition 2.9, \(\operatorname{Pic}(b_{2})\cong S_{3}\times(P\rtimes\operatorname{Aut}(P))\). Suppose that \(P\) is not homocyclic. Since \(G/G[b_{2}]\) is an odd order subgroup of \(\operatorname{Pic}(b_{2})\) and \(|\operatorname{Pic}(b_{2})|_{2^{\prime}}=3\), it is isomorphic to a subgroup of \(C_{3}\). By Proposition 5.7, \(B\) is either Morita equivalent to \(c\) or is nilpotent. Suppose that \(P\) is homocyclic. Then \(\operatorname{Pic}(b_{2})\cong S_{3}\times(P\rtimes S_{3})\) and \(G/G[b_{2}]\) is isomorphic to a subgroup of \(C_{3}\times C_{3}\). Let \(H\) be the kernel of the action of \(G\) on the irreducible Brauer characters of \(c\) and let \(B_{H}\) be the unique block of \(\mathcal{O}H\) covered by \(B\). By Proposition 2.9, \([G:H]\) divides \(3\). Suppose that \([G:G[b_{2}]]=3\). By Proposition 5.7 the possible Morita equivalence classes for \(B\) are represented by the four blocks listed. Suppose that \([G:G[b_{2}]]=9\). If \(G=G[c]\), then by Proposition 2.3 \(B\) is Morita equivalent to \(c\) and we are done. If \([G:G[c]]=3\), then the unique block \(B_{1}\) of \(\mathcal{O}G[c]\) covered by \(B\) is Morita equivalent to \(c\), and again there are four possibilities for the Morita equivalence class of \(B\) and these are as listed. Hence suppose that \(G[c]=G[b_{2}]\), so that \(G[b_{2}]<H<G\). The subgroup of \(\operatorname{Pic}(c)\) of self-equivalences that preserve all irreducible Brauer characters is isomorphic to \(S_{3}\). Hence there is one possibility for the Morita equivalence class for \(B_{H}\) (as we are excluding the case that \(H\) acts as inner automorphisms), namely that \(B_{H}\) is Morita equivalent to \(\mathcal{O}(G_{n_{1}}\times G_{n_{2}})\). One may check as in (i) that the inertial quotients are as stated.
(v) Suppose that \(b\), and so \(b_{1}\), \(b_{2}\), is Morita equivalent to \(B_{0}(\mathcal{O}(A_{5}\times Q))\) for some \(Q\leq D\). By Theorem 4.2, \(b_{2}\) is Morita equivalent to \(B_{0}(\mathcal{O}(A_{5}\times P))\), where \(D\cong(C_{2})^{2}\times P\). By Proposition 2.3, \(b_{2}\) is source algebra equivalent to the unique block of \(G[b_{2}]\) covered by \(B\). By Proposition 2.9, \(\operatorname{Pic}(b_{2})\cong C_{2}\times(P\rtimes\operatorname{Aut}(P))\). Since \(G/G[b_{2}]\) is an odd order subgroup of \(\operatorname{Pic}(b_{2})\) and \(P\) has rank at most \(2\), it is isomorphic to a subgroup of \(C_{3}\). If \(P\) is not homocyclic, then \(G=G[b_{2}]\) and we are done. Suppose \(P\) is homocyclic, i.e., \(P\cong(C_{2^{n}})^{2}\) for some \(n\). We may suppose \([G:G[b_{2}]]=3\). The result then follows by Proposition 5.7.
(vi) Suppose that \(b\) is Morita equivalent to \(B_{0}(\mathcal{O}(A_{5}\times G_{n}))\) for some \(n\geq 1\). As in (ii), \(G/N\) has odd order, so \(N_{1}=N_{2}\) and \(b\) is source algebra equivalent to \(b_{3}\). By Proposition 2.9, \(\operatorname{Pic}(b)\cong C_{2}\times S_{3}\). Since \(G/G[b]\) is an odd order subgroup of \(\operatorname{Pic}(b)\), it is isomorphic to a subgroup of \(C_{3}\). If \(G[b]=G\), then \(b\) is source algebra equivalent to \(B\) and we are done. If \([G:G[b]]=3\), then we are done by Proposition 5.7.
**Remark 5.10**.: We have not treated the case that \(b\) is Morita equivalent to \(B_{0}(\mathcal{O}(A_{5}\times A_{5}))\) here since we do not at present know the Picard group of this block. This case will be treated in the main part of the reduction, where we will make use of additional hypotheses.
Finally, the methods above may also be used to prove the following.
**Proposition 5.11**.: _Let \(D\cong(C_{2^{n_{1}}})^{2}\times(C_{2^{n_{2}}})^{2}\) for some \(n_{1},n_{2}\in\mathbb{N}\) and consider \(G=D\rtimes 3_{+}^{1+2}\), where the centre of \(3_{+}^{1+2}\) acts trivially. The \(2\)-blocks of \(\mathcal{O}G\) correspond to the simple modules of \(Z(3_{+}^{1+2})\), and the two non-principal blocks are Morita equivalent. Further, these blocks are Morita equivalent to the two non-principal blocks of \(\mathcal{O}(D\rtimes 3_{-}^{1+2})\)._
Proof.: Let \(B\) be any faithful \(2\)-block of \(G=D\rtimes 3_{+}^{1+2}\) or \(D\rtimes 3_{-}^{1+2}\). Then \(l(B)=1\). Take a maximal subgroup \(N\) of \(G\) and a block \(b\) of \(N\) covered by \(B\). Then \(N\cong(D\rtimes C_{3})\times C_{3}\) or \(D\rtimes C_{9}\) and without loss of generality \(b\) is Morita equivalent to \(\mathcal{O}(G_{n_{1}}\times(C_{2^{n_{2}}})^{2})\) for some \(n_{1},n_{2}\). By Proposition 5.7(ii) there is only one possibility for the Morita equivalence class of \(B\) under the restriction that there is just one simple module.
## 6 Proof of the main theorem
We first recall the case where the defect group is normal.
**Lemma 6.1**.: _Let \(B\) be a block of \(\mathcal{O}G\) for a finite group \(G\) with abelian normal defect group \(D\) with rank at most \(4\) and inertial quotient \(E\). Then \(B\) is source algebra equivalent to a block of \(D\rtimes\hat{E}\), where \(\hat{E}\) is a central extension of \(E\). If \(E\cong C_{3}\times C_{3}\), then \(\hat{E}\cong E\) or \(3^{1+2}\) with \(Z(3^{1+2})\) acting trivially on \(D\). Otherwise \(\hat{E}=E\)._
Proof.: See [38, Theorem 6.14.1]. In all cases except \(E\cong C_{3}\times C_{3}\), all Sylow subgroups of \(E\) are cyclic, so the Schur multiplier is trivial. For \(C_{3}\times C_{3}\) the Schur multiplier is \(C_{3}\) and the result follows.
We now state a result used in previous reductions for results concerning Morita equivalence classes of blocks, that encapsulates the use of Fong-Reynolds reductions and [35]. It appears in [2] in full, but was extracted from the first part of the proof of [19, Proposition 4.3].
**Lemma 6.2**.: _Let \(G\) be a finite group and \(B\) a block of \(\mathcal{O}G\) with defect group \(D\). Then there is a finite group \(H\) and a block \(C\) of \(\mathcal{O}H\) with a defect group \(D_{H}\) isomorphic to \(D\) such that \(B\) is basic Morita equivalent to \(C\) and:_
(R1) \(C\) _is quasiprimitive, that is, if_ \(N\lhd H\)_, then_ \(C\) _covers a unique block of_ \(\mathcal{O}N\)_;_

(R2) _If_ \(N\lhd H\) _and_ \(C\) _covers a nilpotent block of_ \(\mathcal{O}N\)_, then_ \(N\leq O_{p}(H)Z(H)\) _with_ \(O_{p^{\prime}}(N)\leq[H,H]\) _cyclic. In particular_ \(O_{p^{\prime}}(H)\leq Z(H)\)_;_

(R3) \([H:O_{p^{\prime}}(Z(H))]\leq[G:O_{p^{\prime}}(Z(G))]\)_._

_Note that_ \(B\) _and_ \(C\) _have isomorphic Frobenius categories._
Proof.: This is [2, Proposition 6.1], in which Fong-Reynolds and Kulshammer-Puig reductions are applied repeatedly. It is noted in the proof of [2, Proposition 6.1] that application of these reductions reduces \([G:O_{p^{\prime}}(Z(G))]\), hence (R3).
We call the pair \((H,C)\), where \(C\) is a block of \(\mathcal{O}H\), _reduced_ if it satisfies conditions (R1) and (R2) of Lemma 6.2. If the group is clear, then we just say \(C\) is reduced.
Before proceeding we recall the definition and some properties of the generalized Fitting subgroup \(F^{*}(G)\) of a finite group \(G\). Details may be found in [6]. A _component_ of \(G\) is a subnormal quasisimple subgroup of \(G\). The components of \(G\) commute, and we define the _layer_\(E(G)\) of \(G\) to be the normal subgroup of \(G\) generated by the components. It is a central product of the components. The _Fitting subgroup_\(F(G)\) is the largest nilpotent normal subgroup of \(G\), and this is the direct product of \(O_{r}(G)\) for all primes \(r\) dividing \(|G|\). The _generalized Fitting subgroup_\(F^{*}(G)\) is \(E(G)F(G)\). A crucial property of \(F^{*}(G)\) is that \(C_{G}(F^{*}(G))\leq F^{*}(G)\), so in particular \(G/F^{*}(G)\) may be viewed as a subgroup of \(\operatorname{Out}(F^{*}(G))\).
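For instance, \(F^{*}(S_{4})=F(S_{4})=V_{4}\): the group \(S_{4}\) has no components, \(O_{2}(S_{4})=V_{4}\) is self-centralizing, and \(S_{4}/V_{4}\cong S_{3}\) embeds in \(\operatorname{Out}(V_{4})\cong GL_{2}(2)\cong S_{3}\).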
**Proof of Theorem 1.1**.: Let \(B\) be a \(2\)-block of \(\mathcal{O}G\) for a finite group \(G\) with defect group \(D\) of rank \(4\) with \([G:O_{2^{\prime}}(Z(G))]\) minimised such that \(B\) is not Morita equivalent to any of the blocks listed in the statement of the theorem, or that the inertial quotient of \(B\) is not as stated. In the remainder of the proof, the reader may check that the inertial quotients are respected at each step. By Lemma 6.2 we may assume that \((G,B)\) is reduced.
If \(D\lhd G\) and \(B\) has inertial quotient \(E_{B}\), then by the main result of [32] \(B\) is source algebra equivalent to either: (i) a block of \(D\rtimes E_{B}\) when \(E_{B}\) has cyclic Sylow \(r\)-subgroups for every \(r\), that is, for every inertial quotient other than \(C_{3}\times C_{3}\); or (ii) a block of \(D\rtimes 3_{+}^{1+2}\) or \(D\rtimes 3_{-}^{1+2}\) when \(E_{B}\cong C_{3}\times C_{3}\), where \(Z(3^{1+2})\) acts trivially on \(D\). By Proposition 5.11 the non-principal blocks of \(D\rtimes 3_{+}^{1+2}\) and \(D\rtimes 3_{-}^{1+2}\) form a single Morita equivalence class. Hence \(B\) is Morita equivalent to a block listed in the statement of the theorem, a contradiction. Hence \(D\) is not normal in \(G\).
Let \(b^{*}\) be the unique block of \(\mathcal{O}F^{*}(G)\) covered by \(B\).
Write \(L_{1},\ldots,L_{t}\) for the components of \(G\), so \(E(G)=L_{1}\cdots L_{t}\lhd G\). Note that \(G\) permutes the \(L_{i}\). There must be at least one component, since otherwise \(b^{*}\) is nilpotent and so \(F^{*}(G)=Z(G)O_{2}(G)\). But \(O_{2}(G)\leq D\) and \(D\) is abelian, so we would have \(D\leq C_{G}(F^{*}(G))\leq F^{*}(G)=Z(G)O_{2}(G)\), so that \(D\lhd G\), a contradiction.
Write \(b_{E}\) for the unique block of \(E(G)\) covered by \(B\) and \(b_{i}\) for the unique block of \(L_{i}\) covered by \(b_{E}\). We claim that no \(b_{i}\) can be nilpotent (in our minimal counterexample).
Let \(Z=O_{2}(Z(E(G)))\). Write \(\bar{b}_{E}\) for the unique block of \(\overline{E(G)}:=E(G)/Z\) corresponding to \(b_{E}\) and \(\bar{b}_{i}\) for the unique block of \(\bar{L}_{i}\) corresponding to \(b_{i}\). Writing \(M:=\overline{L}_{1}\times\cdots\times\overline{L}_{t}\), where \(\overline{L}_{i}:=L_{i}Z/Z\), there is a \(2^{\prime}\)-group \(W\leq Z(M)\) and a block \(b_{M}\) of \(M\) with \(W\) in its kernel such that \(\overline{E(G)}=M/W\) and \(b_{M}\) is isomorphic to \(\bar{b}_{E}\). Then \(D\cap E(G)\) is a defect group for \(b_{E}\), \((D\cap E(G))/Z\) is a defect group for \(\bar{b}_{E}\) and \(b_{M}\) has defect groups isomorphic to \((D\cap E(G))/Z\). Then \(\bar{b}_{i}\) has defect group \(D_{i}=(((D\cap E(G))Z)/Z)\cap\bar{L}_{i}\). We have that \(b_{M}=\bar{b}_{1}\otimes\cdots\otimes\bar{b}_{t}\) and \(b_{M}\) has defect group \(D_{1}\times\cdots\times D_{t}\). Suppose \(b_{j}\) is nilpotent for some \(j\). Let \(J\subseteq\{1,\ldots,t\}\) correspond to the orbit of \(L_{j}\) under the permutation action of \(G\) on the components. Define \(L_{J}\lhd G\) to be the product of the \(L_{i}\) for \(i\in J\), and write \(b_{J}\) for the unique block of \(L_{J}\) covered by \(b_{E}\). For \(i\in J\), since \(b_{j}\) is nilpotent, so is \(b_{i}\), hence so is \(\bar{b}_{i}\). Hence the unique block \(\bar{b}_{J}\) of \(L_{J}/Z\) corresponding to \(b_{J}\) is also nilpotent (to see this, observe that as above \(L_{J}/Z\) is the quotient of \(\mathsf{X}_{i\in J}\,\bar{L}_{i}\) by a central \(2^{\prime}\)-group and that products of nilpotent blocks are nilpotent). It follows that \(b_{J}\) is nilpotent by [49] (where the result is stated over \(k\), but follows over \(\mathcal{O}\) immediately), a contradiction. Hence no \(b_{i}\) is nilpotent. In particular, it follows that no \(D_{i}\) can have rank one, and so \(t\leq 2\).
Now it follows from Schreier's conjecture that \(G/E(G)\) is solvable. By Lemma 2.2, \(DE(G)/E(G)\) is a Sylow \(2\)-subgroup of \(G/E(G)\). Since \(DE(G)/E(G)\) is abelian, it follows that \(G/E(G)\) has \(2\)-length at most one, so there are normal subgroups \(N_{i}\) of \(G\) such that \(E(G)=N_{0}\lhd N_{1}\lhd N_{2}\lhd N_{3}=G[b_{E}]\) with \(N_{1}/N_{0}\), \(N_{3}/N_{2}\) of odd order and \(N_{2}/N_{1}\) a \(2\)-group. Further \(D\leq N_{2}\) by Proposition 2.1. Write \(b_{N_{i}}\) for the unique block of \(\mathcal{O}N_{i}\) covered by \(B\).
Suppose that \(t=2\). Write \(\bar{D}_{i}:=D_{i}/O_{2}(Z(L_{i}))\), which must have rank \(2\) for each \(i\). Note that \(O_{2}(G)\leq Z(E(G))\), otherwise a quotient of \(D\) would have rank greater than \(4\). Further \(\bar{D}_{i}\cong(C_{2^{n_{i}}})^{2}\) for some \(n_{i}\), otherwise \(\bar{b}_{i}\), and so \(b_{i}\), would be nilpotent. Now \(D_{i}\cong(C_{2^{n_{i}}})^{2}\times O_{2}(Z(L_{i}))\), again since otherwise \(b_{i}\) would be nilpotent. It follows that \(O_{2}(Z(L_{i}))=1\) for each \(i\), hence \(O_{2}(G)=1\). Further note that each \(L_{i}\) is normal in \(G\), since \(N_{G}(L_{i})\) has index at most \(2\) and \(G=DN_{G}(L_{i})\). If \(G\neq N_{G}(L_{i})\), then some element of \(D\) must permute \(L_{1}\) and \(L_{2}\), contradicting our hypothesis that \(D\) is abelian. It follows from [20, Theorem 1.1] and [36] that \(b_{i}\) is Morita equivalent to \(\mathcal{O}G_{n}\) for some \(n\geq 1\) or to \(B_{0}(\mathcal{O}A_{5})\). Further, \(b_{E}\) is isomorphic (in fact basic Morita equivalent) to a block \(b_{1}\otimes b_{2}\) of \(L_{1}\times L_{2}\).
Suppose \(b_{i}\) is Morita equivalent to \(B_{0}(\mathcal{O}A_{5})\) for each \(i\). Then by [10, Theorem 1.5], \(\operatorname{Pic}(b_{i})\cong C_{2}\), and so \([G:G[b_{i}]]\) divides \(2\). But then \(G=DG[b_{i}]=G[b_{i}]\). It follows that \(G=G[b_{E}]\). Since the inertial quotient of \(b_{E}\) is \(C_{3}\times C_{3}\), it follows that \(D=[D,N_{N_{2}}(D,(b_{N_{2}})_{D})]\), which is contained in \(N_{1}\) by Proposition 2.4. We have shown that \([G:E(G)]\) is odd, hence by Proposition 2.3 \(B\) is Morita equivalent to \(b_{E}\), and so to \(B_{0}(\mathcal{O}(A_{5}\times A_{5}))\), a contradiction.
Suppose that \(b_{1}\) is Morita equivalent to \(B_{0}(\mathcal{O}A_{5})\) and \(b_{2}\) to \(\mathcal{O}G_{n}\) for some \(n\). Then by Proposition 5.9(vi) \(B\) is Morita equivalent to a block in our list, a contradiction.
Suppose that \(b_{1}\) and \(b_{2}\) are Morita equivalent to \(\mathcal{O}G_{n_{1}}\) and \(\mathcal{O}G_{n_{2}}\) for some \(n_{1},n_{2}\in\mathbb{N}\). Then by Corollary 2.8, \(b_{1}\) and \(b_{2}\) are both inertial. Hence \(b_{E}\) is inertial, and so by Proposition 5.9 \(B\) is inertial, a contradiction.
We have ruled out \(t=2\), so now suppose that \(t=1\), so that \(E(G)=L_{1}\) is
quasisimple. We refer to Proposition 2.7 for the possibilities for \(L_{1}\) and \(b_{1}\). For ease of notation, write \(N=E(G)\) and \(b=b_{E}\). Note that \(F^{*}(G)\cong O_{2}(G)O_{2^{\prime}}(G)N\). Here we have already established that \(O_{2^{\prime}}(G)\leq Z(G)\), however it is not immediately the case that \(O_{2^{\prime}}(G)\leq N\), since in principle \(G/O_{2^{\prime}}(G)\) may have Schur multiplier larger than that of \(N/Z(N)\). Since \(F^{*}(G)\) is self-centralizing, \(G/F^{*}(G)\) is isomorphic to a subgroup of \(\operatorname{Out}(F^{*}(G))\).
We first consider the cases \(N/Z(N)\cong SL_{2}(8)\), \(SL_{2}(16)\), \(J_{1}\), \(Co_{3}\) and \({}^{2}G_{2}(3^{2m+1})\), where \(m\in\mathbb{N}\). In each case \(D\cap N\) has rank \(3\) or \(4\), so \(F^{*}(G)\cong N\times C_{2^{n}}\) for some \(n\). Also \(N/Z(N)\) has cyclic outer automorphism group and trivial Schur multiplier. It follows from [16, Lemma 3.4] that \(\operatorname{Aut}(N/Z(N))\) also has trivial Schur multiplier in each of these cases. Hence \(G\cong C_{2^{n}}\times H\) for some \(H\) with \(N\leq H\leq\operatorname{Aut}(N)\).
Suppose that \(N\cong SL_{2}(8)\) and \(b\) is the principal block, which has elementary abelian defect group of order \(8\) and inertial quotient \(C_{7}\). Note that \(\operatorname{Out}(SL_{2}(8))\cong C_{3}\). Then by the above \(G\cong H\times C_{2^{n}}\), where \(SL_{2}(8)\leq H\leq\operatorname{Aut}(SL_{2}(8))\). Hence \(B\) is Morita equivalent to the principal block of \(SL_{2}(8)\times C_{2^{n}}\) or \(\operatorname{Aut}(SL_{2}(8))\times C_{2^{n}}\).
Suppose that \(N\cong SL_{2}(16)\) and \(b\) is the principal block, which has elementary abelian defect group of order \(16\) and inertial quotient \(C_{15}\). Note that \(\operatorname{Out}(SL_{2}(16))=1\). Hence \(G=N\) and we are done in this case.
Suppose that \(N\cong J_{1}\) and \(b\) is the principal block, which has elementary abelian defect group of order \(8\) and inertial quotient \(C_{7}\rtimes C_{3}\). Note that \(\operatorname{Out}(J_{1})=1\), so \(G\cong J_{1}\times C_{2^{n}}\) and we are done in this case.
Suppose that \(N\cong\ ^{2}G_{2}(3^{2m+1})\) for some \(m\in\mathbb{N}\) and \(b\) is the principal block, which has elementary abelian defect group of order \(8\) and inertial quotient \(C_{7}\rtimes C_{3}\). Then \(G\cong H\times C_{2^{n}}\), where \({}^{2}G_{2}(3^{2m+1})\leq H\leq\operatorname{Aut}({}^{2}G_{2}(3^{2m+1}))\). It follows from [17, Proposition 3.1] that \(B\) is Morita equivalent to \(b\), and these blocks have the same inertial quotient. In turn \(b\) is Morita equivalent to \(B_{0}(\mathcal{O}(\operatorname{Aut}(SL_{2}(8))\times C_{2^{n}}))\) by [43, Example 3.3].
Suppose that \(N\cong Co_{3}\) and \(b\) is the non-principal block with elementary abelian defect group of order \(8\). Note that \(\operatorname{Out}(Co_{3})=1\), so \(G\cong Co_{3}\times C_{2^{n}}\). By [31]\(b\) is Morita equivalent to \(B_{0}(\mathcal{O}\operatorname{Aut}(SL_{2}(8)))\), so \(B\) is Morita equivalent to \(B_{0}(\mathcal{O}(\operatorname{Aut}(SL_{2}(8))\times C_{2^{n}}))\).
We now move on to case (iii) of Proposition 2.7. Then \(b\) is Morita equivalent to the principal block of \(A_{5}\times P\) or \(A_{4}\times P\) for some abelian \(2\)-group \(P\) of rank at most \(2\). Since \(G/N\) is solvable, Proposition 5.9(iv,v) apply, and \(B\) is Morita equivalent to a block on our list, a contradiction.
It remains to consider the case that \(b\) is nilpotent covered. By [46, 4.3]\(b\) is inertial. Since \(F^{*}(G)=NZ(G)O_{2}(G)\) and \(O_{2}(G)\) has rank at most \(2\), it follows that \(\operatorname{Out}(O_{2}(G))\) and \(\operatorname{Out}(N)\), and so \(\operatorname{Out}(F^{*}(G))\) are solvable. It follows from Proposition 5.9 that \(B\) is Morita equivalent to a block in our list, a contradiction. We have covered each possibility for \(b\) as given in Proposition 2.7, so we have established the Morita equivalences in all cases.
Finally we observe that parts (a) and (b)(iv) are proved in [41] and [5]. \(\Box\)
## Derived equivalences and Broué's abelian defect group conjecture
In this section we prove Corollary 1.3.
We first recall Külshammer-Puig classes of blocks. Let \(D\) be a defect group for a block \(B\) with Frobenius category \(\mathcal{F}\). Following the presentation in [38, Section 8.14], a Külshammer-Puig class is an element of \(H^{2}(\operatorname{Aut}_{\mathcal{F}}(D),k^{\times})\cong H^{2}(E,k^{\times})\). As we have used in Lemma 6.1, by [38, Theorem 6.14.1] the Morita equivalence class of a block with normal defect group is determined by the inertial quotient and the Külshammer-Puig class. If \(E\) has cyclic Sylow subgroups for all primes, then \(H^{2}(E,k^{\times})\) is trivial, so in this case the Morita equivalence class is determined just by \(E\). This is the case for all blocks considered in this paper except for those with inertial quotient \(C_{3}\times C_{3}\). Hence, since inertial quotients are preserved by the Morita equivalences in Theorem 1.1, in order to prove Corollary 1.3 for inertial quotients other than \(C_{3}\times C_{3}\) it suffices to show that in each of (b)(i)-(iv) all blocks are derived equivalent when the defect groups are isomorphic (the result is trivial for blocks in Theorem 1.1(a)). This follows from [14, Theorem 4.36] since for the groups in each of (b)(i)-(iv), the principal block of the normalizer of a Sylow \(2\)-subgroup \(P\) is Morita equivalent to \(P\rtimes E\), where \(E\) is the inertial quotient. This proves Corollary 1.3 when \(E\not\cong C_{3}\times C_{3}\).
Now suppose that \(B\) has inertial quotient \(E\cong C_{3}\times C_{3}\). We must distinguish whether the Brauer correspondent \(b\) of \(B\) in \(N_{G}(D)\) is Morita equivalent to \(\mathcal{O}(D\rtimes E)\) or to \(\mathcal{O}(D\rtimes 3_{+}^{1+2})\). A nonprincipal block of \(D\rtimes 3_{+}^{1+2}\) has just one simple module whilst all other blocks in Theorem 1.1(b)(v) have nine simple modules. However by [48] and [29, Proposition 5.5]\(l(B)=l(b)\), so we may distinguish the Morita equivalence class of \(b\) by \(l(B)\). By [14, Theorem 4.36] the principal blocks occurring in (b)(v) are derived equivalent, so Corollary 1.3 follows.
**Acknowledgment**
We thank Benjamin Sambale for a useful observation.
|
2302.12059 | A Statistical Learning Take on the Concordance Index for Survival
Analysis | The introduction of machine learning (ML) techniques to the field of survival
analysis has increased the flexibility of modeling approaches, and ML based
models have become state-of-the-art. These models optimize their own cost
functions, and their performance is often evaluated using the concordance index
(C-index). From a statistical learning perspective, it is therefore an
important problem to analyze the relationship between the optimizers of the
C-index and those of the ML cost functions. We address this issue by providing
C-index Fisher-consistency results and excess risk bounds for several of the
commonly used cost functions in survival analysis. We identify conditions under
which they are consistent, under the form of three nested families of survival
models. We also study the general case where no model assumption is made and
present a new, off-the-shelf method that is shown to be consistent with the
C-index, although computationally expensive at inference. Finally, we perform
limited numerical experiments with simulated data to illustrate our theoretical
findings. | Alex Nowak-Vila, Kevin Elgui, Genevieve Robin | 2023-02-23T14:33:54Z | http://arxiv.org/abs/2302.12059v1 | # A Statistical Learning Take on the Concordance Index for Survival Analysis
###### Abstract
The introduction of machine learning (ML) techniques to the field of survival analysis has increased the flexibility of modeling approaches, and ML based models have become state-of-the-art. These models optimize their own cost functions, and their performance is often evaluated using the concordance index (C-index). From a statistical learning perspective, it is therefore an important problem to analyze the relationship between the optimizers of the C-index and those of the ML cost functions. We address this issue by providing C-index Fisher-consistency results and excess risk bounds for several of the commonly used cost functions in survival analysis. We identify conditions under which they are consistent, under the form of three nested families of survival models. We also study the general case where no model assumption is made and present a new, off-the-shelf method that is shown to be consistent with the C-index, although computationally expensive at inference. Finally, we perform limited numerical experiments with simulated data to illustrate our theoretical findings.
## 1 Introduction
Survival analysis (Gross et al., 1981; Kalbfleisch and Prentice, 2002), the field of statistics concerned with modeling time-to-event data, is central to healthcare applications to predict time from diagnosis to death or risk of disease recurrence. Rather than directly modeling time-to-event, many survival models predict risk of event occurrence (Haider et al., 2020). Many definitions of risk can be found in the literature; the most classic are the expected time-to-event, the probability of an event occurring after a given time, or the multiplicative factor in the hazard rate under the proportional hazards (PH) assumption. Importantly, survival data are often _right-censored_, and only a lower bound on the time-to-event is observed; it usually corresponds to the time at which patients leave the study. Most classical survival models have therefore been extended to the censored case (see, e.g. Klein and Moeschberger (2011) Chapter 3 for a review of the different types of censoring and Chapter 4 for survival estimation in the censored case).
Machine learning models are increasingly used in survival analysis and have shown state-of-the-art results in various application areas (Zhu et al., 2016, 2017; Yousefi et al., 2017; Katzman et al., 2018; Ching et al., 2018; Kvamme et al., 2019; Barnwal et al., 2020; Steingrimsson and Morrison, 2020; Cottin et al., 2022; Schutte et al., 2022). Evaluating this jungle of risk models is therefore an important issue to make it comprehensive for practitioners (Park et al., 2021). Among existing metrics in survival analysis, the concordance index (C-index) is probably the most commonly used (Harrell et al., 1996). It can be viewed as an extension of the Area Under the ROC Curve (AUC) for continuous outcomes and assesses the ability of a risk prediction method to correctly rank individuals according to their risk scores. More specifically, it is defined as the probability that pairs of predicted risks are ranked in the same order as the corresponding observed time-to-events.
From a statistical learning perspective, the question arises whether the C-index can be directly optimized, i.e., used as an objective function to be maximized in an ML approach. Unfortunately, the C-index is a non-concave, discontinuous loss with respect to the parameters of the risk model; consequently, gradient-based methods cannot be used directly to maximise it. In practice, models are often learned by minimizing a smooth surrogate loss on the training data and then evaluated using the C-index. Examples of training losses include the negative log-likelihood of survival models such as Cox (1972) Proportional Hazards (PH) or Accelerated Failure Time (AFT) models (Wei, 1992), loss functions defined as the expectation of an error measure between the time to event and the risk predictor (Steingrimsson and Morrison, 2020), or smooth approximations of the negated C-index (Chen et al., 2013).
Despite the widespread use of the C-index as an evaluation
measure, the relationship between optimizers of these training losses and those of the C-index is not well understood. In particular, it is not known under what conditions _Fisher consistency_(Fisher, 1922), also known as classification-calibration (Bartlett et al., 2006), holds, i.e., minimizers of the training loss correspond to optimizers of the C-index. If this property holds, we can safely say that the ML model converges to the optimal C-index as sample size grows to infinity--if the model is expressive enough.
The aim of this paper is to answer this very question. We study the consistency properties of classical cost functions in survival analysis with respect to the C-index, and provide associated excess risk bounds. We analyze in particular the properties of Maximum Likelihood Estimation (MLE), conditional average risk estimation, and smooth C-index maximization. We identify conditions under which these methods are consistent, under the form of three nested families of survival models. In addition, we study the more general case where no model assumption is made. In this case, we present a new, off-the-shelf convex method that is shown to be consistent with the C-index, although computationally expensive at inference. Finally, we perform limited numerical experiments with simulated data to illustrate our theoretical findings. In all cases, we discuss how censoring can be incorporated in our results. Note that, most of the theoretical results can be applied beyond survival analysis to any continuous ranking task in the sense of Clemencon and Achab (2018). Specifically, the following contributions are provided:
* The properties of commonly used risk estimation procedures in survival analysis are analyzed in terms of Fisher-consistency and C-index excess risk bounds.
* Conditions under which these procedures are Fisher-consistent with respect to the C-index are derived in the form of three nested families of survival models corresponding to increasingly stringent model assumptions. For each family, we characterize the maximizers of the C-index, and provide important examples.
* We discuss a novel, off-the-shelf convex estimation method which, although computationally expensive at inference, proves consistent without any modeling assumption.
* Limited experiments are conducted with simulated data to illustrate our theoretical findings.
#### Related Work
This work is in line with an extensive literature on the statistical efficiency of minimizing surrogate losses of non-convex and discontinuous evaluation metrics such as the 0-1 loss. Bartlett et al. (2006) derives upper bounds on the excess risk of convex surrogates for binary classification, while Agarwal (2014) provides similar results for surrogates of bipartite ranking losses. Cortes and Mohri (2003) provides a statistical analysis of the relationship between AUC and error rate minimization, and Gao and Zhou (2015) identifies sufficient conditions for consistency of pairwise surrogate losses with the AUC. From an optimization point of view, Calders and Jaroszewicz (2007) suggests maximizing the AUC directly using polynomial approximations.
While the C-index is widely used in survival analysis and several papers have investigated its properties from a practical point of view (Longato et al., 2020; Park et al., 2021), there are comparatively few statistical learning results evaluating its relationship to commonly used cost functions used in survival analysis. Steck et al. (2007) provide lower bounds on the C-index that can be directly optimized and examine their relationship to Cox's proportional hazards, showing that this popular model approximately maximizes the C-index. The authors do not examine consistency. Chen et al. (2013) develop a gradient-boosting procedure to optimize smooth surrogates of the C-index; however, the statistical consistency of such surrogates remains to be analyzed.
Since the C-index is fundamentally a ranking measure, our work has similarities with the extensive literature on ranking algorithms and their statistical properties (Clemencon et al., 2008; Duchi et al., 2010; Chapelle et al., 2011; Rajkumar and Agarwal, 2014; Yuan et al., 2016; He et al., 2018; Ai et al., 2019; Wu et al., 2021; Werner, 2021). As far as we are aware, none of these papers analyze ranking algorithms in terms of consistency with the C-index. Clemencon and Achab (2018) examines the optimizers of the C-index, but they focus only on the case where the conditional cumulative distribution functions do not cross one another; we consider much less restrictive assumptions.
## 2 Setting and Background
### Survival Analysis
Consider the classical survival analysis framework to model time-to-events and their relationship to individual covariates. The time-to-event, denoted \(T\), is assumed continuous and takes values in \(\mathbb{R}_{+}\); individual covariates are denoted by \(X\) and take values in \(\mathcal{X}\subset\mathbb{R}^{d}\). Let \(\mathrm{P}=\mathrm{Prob}(\mathbb{R}_{+}\times\mathcal{X})\) be the space of joint probability densities \(\mu\) of non-negative time-to-events and covariates; \(\mu(t,x)\) is also referred to as the _survival model_. The density of events conditional on covariates \(x\) is denoted \(\mu(t|x)\), and the conditional survival function
\[S(t|x)=\mathbb{P}\left\{T>t|X=x\right\}=\int_{t}^{+\infty}\mu(\tau|x)d\tau. \tag{1}\]
We also consider the right-censored setting where the time-to-event \(T\) is not directly observed but rather a lower bound \(U=C\wedge T\) where \(C\) is a continuous, nonnegative random variable corresponding to the censoring time. The binary random variable \(\Delta=1(C\geq T)\) specifying whether the lower bounds corresponds to the time-to-event or to the censoring time is also observed. Throughout this paper, the censoring is assumed independent of the covariates, i.e. \(C\perp X\), and the censoring curve is defined by \(G(t)=\mathbb{P}\left\{C>t\right\}\).
### Concordance Index
Consider a scalar valued function
\[f:x\in\mathbb{R}^{d}\mapsto f(x)\in\mathbb{R};\]
\(f\) may for instance come from an inference procedure assessing the risk of occurrence of events, depending on covariates \(x\). In such settings, the higher \(f(x)\), the smaller the time-to-event. Many quantities may be used to define the risk; for instance, the conditional expectation of the time-to-event \(\mathbb{E}\left\{T|X=x\right\}\), the probability of an event occurring beyond a certain time \(t_{0}\), \(\mathbb{P}\left\{T>t_{0}|X=x\right\}\), or a statistic specific to a survival model such as the multiplicative factor of the baseline hazard under the PH assumption.
The C-index is defined as the probability of having a pairwise concordant order between the predicted risks and the observed time-to-events (Harrell et al., 1982). It is usually presented as the following conditional probability:
\[\mathrm{C}(f)=\mathbb{P}\left\{f(X)<f(X^{\prime})\mid T>T^{\prime}\right\}. \tag{2}\]
The C-index depends on the joint distribution of \((T,X)\), so that two survival models \(\mu\) and \(\mu^{\prime}\) yield two different definitions of the C-index \(C_{\mu}\) and \(C_{\mu^{\prime}}\). Whenever the model \(\mu\) is clear from context, we drop the subscript for ease of notation. Since the C-index measures the quality of the ranking induced by \(f\) rather than the risk values themselves, it is defined up to monotone transformation of the risk.
For any pair of real random variables \(Y\) and \(Z\), the _statistical preference_ order (Taplin, 1997) denoted by \(\succeq\), is defined as
\[Y\succeq Z\iff\mathbb{P}\left\{Y>Z\right\}\geq\frac{1}{2}. \tag{3}\]
A risk function \(f\) defining a global ordering maximally preserving the statistical preference (3) in expectation for all pairs of conditional random variables \(T|X=x\) and \(T|X=x^{\prime}\) is optimal with respect to the C-index. This follows directly from the fact that:
\[\mathrm{C}(f) \propto\mathbb{P}\left\{f(X)<f(X^{\prime}),T>T^{\prime}\right\}\] \[=\mathbb{E}\,1(f(X)<f(X^{\prime}))1(T>T^{\prime})\] \[=\mathbb{E}_{X,X^{\prime}}\,\mathbb{P}\left\{T>T^{\prime}|X,X^{ \prime}\right\}1(f(X)<f(X^{\prime})). \tag{4}\]
Definition 2.1 presents the case where there exists a global ordering respecting _all_ pairwise comparisons under (3).
**Definition 2.1** (Optimal risk ordering).: _An optimal risk ordering is a function \(f^{\star}\) satisfying_
\[f^{\star}(x)\leq f^{\star}(x^{\prime})\Rightarrow\mathbb{P}\left\{T>T^{\prime} |x,x^{\prime}\right\}\geq\frac{1}{2}, \tag{5}\]
_for all pairs \(x,x^{\prime}\). Note that if condition (5) is satisfied then it follows directly that \(f^{\star}\) is an optimizer of the C-index and it only depends on the conditional density of events \(\mu(t|x)\)._
We show in Sec. 3 that an optimal risk ordering does not exists in general.
**C-index estimation with right-censored data**
In the survival analysis setting introduced in Sec. 2.1, time-to-events \(T\) are not observed but rather a lower bound \(U=C\wedge T\)1 and an event indicator \(\Delta=1(C\geq T)\). In this case the question arises of how to consistently estimate the C-index of a risk model \(f\). Using the Inverse Probability of Censoring Weighting (IPCW) strategy of Robins and Finkelstein (2000), one can see that
Footnote 1: \(a\wedge b=\min(a,b)\).
\[\mathbb{E}\left\{1(T>T^{\prime})\right\} =\mathbb{E}\left\{\frac{1(T>T^{\prime})1(C\wedge C^{\prime}\geq T ^{\prime})}{G(T^{\prime})^{2}}\right\}\] \[=\mathbb{E}\,\left\{\frac{\Delta^{\prime}1(U>U^{\prime})}{G(U^{ \prime})^{2}}\right\},\]
where \(G\) is the censoring curve defined at the end of Sec. 2.1. In particular, this leads to the following expression for the C-index as an expectation over \((X,U,\Delta)\):
\[C(f)=\mathbb{E}\,\left\{\frac{\Delta^{\prime}1(U>U^{\prime})1(f(X)<f(X^{\prime }))}{G(U^{\prime})^{2}}\right\}\]
The C-index can be consistently estimated from data using the empirical average instead of the expectation; this estimator is known as Uno's C-index (Uno et al., 2011).
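For concreteness, here is a minimal NumPy sketch of this IPCW estimator (not code from the paper): the censoring curve \(G\) is estimated by a Kaplan-Meier fit applied to the censoring indicator, while ties and the instability of \(1/G\) near the largest observed times are ignored for simplicity.

```python
import numpy as np

def km_censoring_curve(u, delta):
    """Kaplan-Meier estimate of G(t) = P(C > t), treating the censored
    observations (delta == 0) as the events of interest."""
    order = np.argsort(u)
    u_s, d_s = u[order], delta[order]
    at_risk = len(u) - np.arange(len(u))             # n, n-1, ..., 1
    surv = np.cumprod(1.0 - (d_s == 0) / at_risk)    # drop at each censoring
    def G(t):
        idx = np.searchsorted(u_s, t, side="right") - 1
        return 1.0 if idx < 0 else surv[idx]
    return G

def ipcw_c_index(risk, u, delta):
    """IPCW (Uno-style) estimate of C(f) = P{f(X) < f(X') | T > T'},
    where a larger `risk` value means a shorter predicted survival."""
    G = km_censoring_curve(u, delta)
    num = den = 0.0
    for j in np.where(delta == 1)[0]:   # j plays the role of the primed sample
        w = 1.0 / G(u[j]) ** 2          # unstable when G(u[j]) is close to 0
        longer = u > u[j]               # comparable pairs with U > U'
        num += w * np.sum(longer & (risk < risk[j]))
        den += w * np.sum(longer)
    return num / den
```

Normalizing by the weighted number of comparable pairs mirrors the conditional-probability form of (2).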
### Fisher Consistency
As previously discussed, the C-index (2) cannot be maximized using gradient descent. Instead, the function \(f\) is learned by minimizing a _smooth_ risk \(\mathcal{R}(f)\). Fisher consistency is a property guaranteeing that the minimizers of the smooth risk are also maximizers of the C-index. Definition 2.2 formalizes the notion of Fisher consistency over a family of distributions.
**Definition 2.2** (Fisher Consistency).: _The risk \(\mathcal{R}\) is said to be Fisher consistent to the C-index under a distribution family \(\mathrm{Q}\subseteq\mathrm{P}\) if_
\[\mathcal{R}_{\mu}(f^{\star})=\min_{f}\ \mathcal{R}_{\mu}(f)\implies C_{\mu}(f^{ \star})=\max_{f}C_{\mu}(f),\]
_for all distribution \(\mu\in\mathrm{Q}\), where \(\mathcal{R}_{\mu},C_{\mu}\) are the smooth risk and C-index computed over the joint distribution \(\mu\)._
In this paper, pairs of smooth risks and families of distributions are examined for which Fisher consistency holds for the C-index.
**Consistency for AUC with binary labels** The C-index is a continuous outcome version of the well-known AUC used in binary output ranking problems, also known as bipartite ranking. AUC is defined as the pairwise probability of concordant order between the risk and the binary labels:
\[\mathrm{AUC}(f)=\mathbb{P}\left\{f(X)>f(X^{\prime})|Y=1,Y^{\prime}=0\right\}, \tag{6}\]
where \(Y,Y^{\prime}\in\{0,1\}\) are binary. The AUC is non-continuous and cannot be optimized directly by gradient descent. In this case, however, the set of optimizers of (6) can be easily characterized as monotone transformations of the conditional distribution (Agarwal, 2014). This can be seen by noting that the function \(f(x)=\mathbb{P}\left\{Y=1|X=x\right\}\) satisfies
\[\mathbb{P}\left\{Y>Y^{\prime}|x,x^{\prime}\right\}\geq\frac{1}{2}\iff f(x)\geq f (x^{\prime}),\]
which directly follows from the identity \(f(x)(1-f(x^{\prime}))=\mathbb{P}\left\{Y>Y^{\prime}|x,x^{\prime}\right\}\). Hence, any smooth risk \(\mathcal{R}\) whose minimizer is a monotone transformation of the conditional distribution is consistent to the AUC. This includes least squares, logistic regression, and more generally any proper loss function (Agarwal, 2014).
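As a quick numerical check (a sketch, not from the paper), the invariance of the AUC under monotone transformations of the score can be verified on simulated data with scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
p = 1 / (1 + np.exp(-2 * x))       # conditional probability P(Y = 1 | x)
y = rng.binomial(1, p)

# Any monotone transformation of p (here, its logit) yields the same AUC:
print(roc_auc_score(y, p), roc_auc_score(y, np.log(p / (1 - p))))
```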
## 3 Maximizers of C-index
The problem of maximizing the C-index associated to a survival model \(\mu\) writes
\[C^{\star}_{\mu}=\max_{f}C_{\mu}(f). \tag{7}\]
The aim of this section is to classify the set of possible survival models \(\mu\)--the joint density of \((T,X)\)--in terms of properties of the associated maximizers of (7). We introduce four families of survival models, denoted by \(\mathrm{A}\subsetneq\mathrm{B}\subsetneq\mathrm{C}\) and \(\mathrm{D}\coloneqq\mathrm{C}^{c}=\mathrm{P}\setminus\mathrm{C}\), defined informally as follows:
* A: Survival curves \(S(t|x)\) do not cross. Existing work studying consistency with respect to C-index, such as Clemencon et al. (2013); Clemencon and Achab (2018) are limited to this family of models.
* B: \(-\mathbb{E}\{T|X=x\}\) is an optimal risk ordering (5). We prove in Sec. 4 that a large family of smooth risk functions is consistent in this family.
* C: There exists an optimal risk ordering satisfying (5). We prove in Sec. 4 that MLE is Fisher-consistent for several examples of models in family \(\mathrm{C}\).
* D: There is no optimal risk ordering satisfying (5). We introduce in Sec. 4 an off-the-shelf estimation method which proves consistent in this setting, although being computationally expensive at inference.
In the following, the four families of survival models are described alongside examples and theoretical results characterizing the associated oracle C-index maximizers.
### A. Conditional Survival Curves do not Cross
Family \(\mathrm{A}\) is defined as the set of survival models whose conditional survival curves uniformly bound one another. In other words, models \(\mu\in\mathrm{A}\) satisfy assumption A.
**Assumption A**.: _For all \((x,x^{\prime})\in\mathcal{X}^{2}\), \(t\mapsto S(t|x)-S(t|x^{\prime})\) has constant sign._
Under Assumption A, Thm. 3.1 shows that the negative conditional expectation \(-\mathrm{CE}(x)=-\mathbb{E}\left\{T\mid X=x\right\}\) satisfies Condition (5); the proof can be found in the Appendix.
**Theorem 3.1**.: _If \(\mu\in\mathrm{A}\), the negative conditional expectation is an optimal risk ordering for the C-index satisfying Condition (5), thus \(C_{\mu}(-\mathrm{CE})=C^{\star}_{\mu}\)._
The two most commonly used survival models, namely Cox PH and AFT, satisfy Assumption A and therefore Thm. 3.1 applies, as discussed below.
**Proportional Hazards model.** The hazard function is defined as \(h(t)=-S^{\prime}(t)S(t)^{-1}\), where \(S^{\prime}\) denotes the time derivative of the survival curve. In the PH model, the conditional hazard \(h(t|x)\) factorizes as \(h(t|x)=h_{0}(t)e^{f(x)}\), where \(h_{0}\) is the (non-negative) baseline hazard and \(f\) is a function of the covariates (Cox, 1972). This yields
\[S(t|x)=S_{0}(t)^{e^{f(x)}},\ \ \ \ S_{0}(t)=e^{-\int_{0}^{t}h_{0}(\tau)d\tau},\]
where \(S_{0}\) is the baseline survival curve. It directly follows that survival curves do not cross at any point in time, therefore Assumption A is satisfied and Thm. 3.1 applies. In this example \(f(x)\) also defines an optimal ranking; derivations are provided in the appendix to support this claim.
**Accelerated Failure Times model.** The AFT model assumes the following form for time-to-events:
\[\log T=f(X)+\varepsilon, \tag{8}\]
where \(\varepsilon\) is an independent random variable. The survival curve is parametrized as \(S(t|x)=S_{0}(te^{-f(x)})\), where \(S_{0}\) is the survival curve of \(e^{\varepsilon}\). Note that the survival curves do not cross each other as they are defined as scaling by \(e^{f(x)}\). In this example also, Assumption A is satisfied and Thm. 3.1 applies. As for the PH model, note that \(f(x)\) also defines an optimal ranking; derivations are provided in the appendix to support this claim.
### B. Conditional Expectation is an Optimal Ordering
Family \(\mathrm{B}\) is defined as the set of survival models for which the negative conditional expectation is an optimal risk ordering. In other words, models \(\mu\in\mathrm{B}\) satisfy the following condition.
**Assumption B**.: _The negative conditional expectation \(-\operatorname{CE}(x)=-\operatorname{\mathbb{E}}\left\{T|X=x\right\}\) is an optimal risk ordering satisfying (5)._
Note that Thm. 3.1 proved the inclusion \(\operatorname{A}\subset\operatorname{B}\). The strict inclusion \(\operatorname{A}\subsetneq\operatorname{B}\) is now proved by providing an example of a survival model such that \(\mu\in\operatorname{B}\setminus\operatorname{A}\). Consider the AFT model presented in the previous section, extended with _symmetric heteroscedastic noise_.
**Definition 3.2** (AFT-H).: _In the AFT-H model, the time-to-event has the form_
\[\log T=f(x)+\sigma(x)\varepsilon, \tag{9}\]
where \(\sigma:\mathcal{X}\to\mathbb{R}_{+}\) is a positive-valued function satisfying \(f(x)\leq f(x^{\prime})\implies\sigma(x)\leq\sigma(x^{\prime})\) and \(\varepsilon\) is a centered Gaussian random variable.
Prop. 3.3 shows that AFT-H satisfies assumption B.
**Proposition 3.3** (AFT-H satisfies \(\operatorname{B}\)).: _Assume that \(\mu\) is in AFT-H. Then, the negative conditional expectation is an optimal risk ordering, thus \(C_{\mu}(-\operatorname{CE})=C_{\mu}^{\star}\)._
This result is a reformulation of Corollary 2 by Lebedev (2019); we provide the original statement in the appendix. Note that under AFT-H the conditional survival curves take the following form
\[S(t|x)=S_{\varepsilon}\Big{(}\frac{\log t-f(x)}{\sigma(x)}\Big{)},\]
where \(S_{\varepsilon}(u)=\operatorname{\mathbb{P}}\left\{\varepsilon>u\right\}\) is the survival function of \(\varepsilon\). Fixing \(f(x)\) and varying \(\sigma(x)\) we can clearly see how the survival curves cross, so that AFT-H _does not_ always satisfy Assumption A.
### C. There exists an Optimal Risk Ordering
Family \(\operatorname{C}\) is defined as the set of survival models admitting an optimal risk ordering satisfying (5).
**Assumption C**.: _There exists an optimal ordering \(f_{\mu}^{\star}\) for survival model \(\mu\), satisfying (5)._
Under assumption C, a closed form for the maximum C-index attained at \(f^{\star}\) can be derived by combining Prop. 3.4 below to the expression of pairwise conditional probabilities for specific models.
**Proposition 3.4**.: _Assume that \(\mu\) satisfies Assumption C. Then, the optimal C-index takes the following form:_
\[C_{\mu}^{\star}=C_{\mu}(f_{\mu}^{\star})=\operatorname{\mathbb{E}}_{X,X^{ \prime}}\varphi(\operatorname{\mathbb{P}}\left\{T>T^{\prime}|X,X^{\prime} \right\}),\]
_where \(\varphi(a)=\max(a,1-a)\) and \(f^{\star}\) satisfies (5)._
Analogously to the previous cases, a family of distributions satisfying assumption C is introduced, using exponential family models. Then, an example of model \(\mu\in\operatorname{C}\setminus\operatorname{B}\) is provided.
**Definition 3.5** (Exponential family survival model).: _Let \(\theta:\mathcal{X}\to\mathbb{R}\), \(\beta:\mathbb{R}_{+}\to\mathbb{R}_{+}\), \(\tau:\mathbb{R}_{+}\to\mathbb{R}\), \(\eta:\mathbb{R}\to\mathbb{R}\), and \(A:\mathbb{R}\to\mathbb{R}\) be such that, for all \(x\in\mathcal{X}\), the conditional density of the curved exponential family model is given by_
\[\mu(t|x)=\beta(t)\exp\left[\eta\circ\theta(x)\tau(t)-A\circ\theta(x)\right], \tag{10}\]
_with associated parameter \(\theta(x)\). For instance, \(\theta(x)=\theta^{\top}x\) in a generalized linear model._
The scalar exponential family from Definition 3.5 covers many of the survival curves classically used in survival analysis, e.g., the exponential, chi-squared, Laplace and normal distributions. Under the exponential family model, the scalar parameterization \(\theta(x)\) satisfies the optimal risk ordering condition, as shown in the following proposition proved in the Appendix. Thus, maximization of the C-index can be achieved by estimating the parameter \(\theta(x)\) of the model.
**Proposition 3.6**.: _Under the exponential family model (10), with \(\theta\) continuous, \(\beta\) positive, \(\tau\) non-decreasing and \(\eta\) continuously differentiable and non-decreasing, \(\theta(x)\) is an optimal risk ordering for the C-index, thus \(C_{\mu}(\theta)=C_{\mu}^{\star}\)._
**Weibull with varying shape parameter.** Consider the model defined by Weibull conditional survival curves with varying shape parameter
\[S(t|x)=e^{-t^{f(x)}}. \tag{11}\]
The following Prop. 3.7 proves the strict inclusion \(\operatorname{B}\subsetneq\operatorname{C}\).
**Proposition 3.7**.: _The above Weibull model (11) satisfies Assumption C but not Assumption B._
The proof is based on the result by Lebedev (2019) showing that \(f\) gives an optimal risk ordering. Assumption B is not satisfied because the expectation of a Weibull random variable is not monotone in the shape parameter. Indeed, the expectation is given by \(\Gamma(1+\frac{1}{f(x)})\), where \(\Gamma\) denotes the Gamma function, which attains its minimum between \(1.46\) and \(1.47\); as a function of \(f(x)\) the expectation thus first decreases and then increases, so it induces a different ranking than the optimal risk ordering \(f\).
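This non-monotonicity is easy to verify numerically (a small sketch using SciPy, not from the paper):

```python
import numpy as np
from scipy.special import gamma

shape = np.linspace(0.5, 10, 200)        # values of f(x) in model (11)
expectation = gamma(1 + 1 / shape)       # E[T | x] for S(t|x) = exp(-t**f(x))
d = np.diff(expectation)
print((d < 0).any() and (d > 0).any())   # True: decreasing, then increasing
```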
### D. There is no Optimal Risk Ordering
Family \(\operatorname{D}\) contains survival models for which there does not exist an optimal risk ordering satisfying condition (5). The following example illustrates this phenomenon. Let \(\{T_{i}\}_{1\leq i\leq n}\) be random variables corresponding to time-to-event of individuals \(1\leq i\leq n\), and consider the following assumption.
**Assumption D**.: _There exists \(m\geq 3\), a subset of indices \(\mathcal{I}\subset\{1,\ldots,n\}\), \(|\mathcal{I}|=m\), and an ordering \(i_{1}<i_{2}<\ldots<i_{m}\) such that, denoting \(i_{m+1}=i_{1}\),_
\[\min_{k\in\{1,\ldots,m\}}\left(\operatorname{\mathbb{P}}\left\{T_{i_{k}}<T_{i_ {k+1}}\right\}\right)>\frac{1}{2}.\]
Assumption D implies the existence of a cyclic sequence with respect to the statistical preference order (3), which implies there is no ranking function satisfying the optimal risk ordering (5). In Fig. 1, we illustrate this phenomenon with a cyclic sequence made of a uni-modal and two multi-modal time-to-event distributions.
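A discrete analogue of such a cycle (not the distributions of Fig. 1) is given by the classical non-transitive dice, which can be checked by exact enumeration:

```python
import itertools

# Efron-style non-transitive dice: each T_i is uniform over six faces.
T1 = [2, 2, 4, 4, 9, 9]
T2 = [1, 1, 6, 6, 8, 8]
T3 = [3, 3, 5, 5, 7, 7]

def pref(a, b):
    """P(A > B) for independent uniform draws from the two face lists."""
    wins = sum(x > y for x, y in itertools.product(a, b))
    return wins / (len(a) * len(b))

# Each probability equals 5/9 > 1/2, so T1 beats T2 beats T3 beats T1:
print(pref(T1, T2), pref(T2, T3), pref(T3, T1))
```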
In the previous sections, we showed that for \(\mu\in\mathrm{A},\mathrm{B},\mathrm{C}\), the maximizer of the C-index in fact only depends on the _conditional density_\(\mu(t|x)\); however, if \(\mu\in\mathrm{D}\), the maximizer may also depend on the marginal covariate distribution \(\mu(x)\). This is an important characteristic of cyclic sequences, since the optimizer of the C-index may change under distributional shifts of the marginal population of patients. This phenomenon is shown in the following result, proved in the Appendix.
**Proposition 3.8**.: _Under Assumption D, the maximizer of the C-index depends on the marginal distribution of the patients' covariates \(\mu(x)\)._
In particular, this means that the optimal relative order between patients may change if new patients are added to the original patient cohort. This phenomenon does not happen in the binary setting, where the ranking measure is the AUC, as its optimizer is the conditional expectation, as discussed in Sec. 2.3.
## 4 Consistency and excess risk bounds
We now study several estimation procedures classically used in survival analysis, all based on Fisher-consistent, smooth cost functions. In particular, we discuss under which families of survival models introduced in the previous section Fisher-consistency holds. Although existing work was limited to family A (e.g. Clemencon et al. (2013); Clemencon and Achab (2018)), we prove that all the considered methods are consistent in family \(\mathrm{B}\supsetneq\mathrm{A}\). We start by introducing the considered methods and their extensions to the censored case. Then, we provide excess risk bounds on the associated C-index suboptimality.
### Estimation Procedures
**Estimating the conditional expectation (A, B).** Without a specific survival model but under assumption B, one can use any cost function whose minimizer is a monotone transformation of the conditional expectation. The following Thm. 4.1 provides a family of risks based on Fenchel-Young losses (Blondel et al., 2020) satisfying this property.
**Theorem 4.1**.: _Let \(\Omega:\mathcal{C}\to\mathbb{R}\) be a twice-differentiable strongly convex function 2 defined in a closed domain \(\mathcal{C}\supseteq\mathbb{R}_{+}\) such that \(\lim_{u\to\infty}\nabla\Omega(u)=+\infty\). Define the cost function \(S(v,t)=\Omega^{*}(v)-vt\) where \(\Omega^{*}\) is the Fenchel conjugate of \(\Omega\)(Rockafellar, 1997) 3. Then, the following risk_
Footnote 2: A one-dimensional strongly convex function is one for which the Hessian is uniformly lower bounded \(\nabla^{2}_{u}\Omega\geq C>0\) for all \(u\) in the domain.
Footnote 3: The Fenchel conjugate \(\Omega^{*}\) of \(\Omega\) is defined for all \(v\in\mathbb{R}\) as \(\Omega^{*}(v)=\sup_{u\in\mathcal{C}}\,vu-\Omega(u)\)
\[\mathcal{R}(f)=\mathbb{E}_{(X,U,\Delta)}\,\,\,\frac{\Delta S(f(X),U)}{G(U)}, \tag{12}\]
_is convex, smooth, and its minimizer is a monotone transformation of the conditional expectation. Thus, under assumption B it is Fisher consistent to the C-index._
When \(\mathcal{C}=\mathbb{R}\) and \(\Omega(u)=u^{2}/2\) this corresponds precisely to IPCW (Robins and Finkelstein, 2000) applied to least squares. However, Thm. 4.1 provides a larger family of consistent smooth risks by choosing \(\Omega\) and its domain \(\mathcal{C}\) using the construction of Fenchel-Young losses.
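Indeed, with \(\Omega(u)=u^{2}/2\) one has \(\Omega^{*}(v)=v^{2}/2\) and \(S(v,t)=v^{2}/2-vt\), so (12) becomes an IPCW-weighted least-squares objective. A minimal gradient-descent sketch for a linear model (assuming the censoring curve \(G\) is given and positive on the sample; not code from the paper):

```python
import numpy as np

def fit_ipcw_least_squares(X, u, delta, G, lr=0.05, n_iter=2000):
    """Minimize the empirical version of (12) with S(v, t) = v**2/2 - v*t,
    i.e. weighted least squares with IPCW weights delta / G(u).
    For simplicity we assume G(u) > 0 on the sample."""
    w = delta / np.array([G(t) for t in u])   # weight 0 for censored rows
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        v = X @ theta                         # predicted conditional expectation
        grad = X.T @ (w * (v - u)) / len(u)   # gradient of mean of w*(v**2/2 - v*u)
        theta -= lr * grad
    return theta                              # rank patients by risk -X @ theta
```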
Note that the estimator minimizing (12) is not efficient in the presence of censoring as only samples corresponding to events \(\Delta=1\) contribute to the cost function. To alleviate this issue, Steingrimsson and Morrison (2020) uses semi-parametric efficiency theory for missing data (Tsiatis, 2006; Robins et al., 1994), and develops an augmented estimator with smallest asymptotic variance among all unbiased estimators of \(\mathbb{E}\,\,S(f(X),T)\).
**Maximum Likelihood Estimators (A, B, C).** Assume the conditional survival model \(\mu_{f}(t|x)\) lies in a family of distributions so that one of the parameters \(f(x)\) gives the optimal risk ranking (5), thus belonging to class \(\mathrm{C}\). We can learn the optimal parameter under censoring by minimizing the following MLE loss (Kalbfleisch and Prentice, 2002):
\[\mathcal{R}(f)=-\,\mathbb{E}_{(X,U,\Delta)}\left[\Delta\log\mu_{f}(U|X)+(1-\Delta)\log S_{f}(U|X)\right].\]
Figure 1: Survival curves and time-to-event distributions of a cycle of length three. In this case \(P(T_{1}<T_{2})=P(T_{2}<T_{3})\approx 0.52\) and \(P(T_{3}<T_{1})\approx 0.55\). Hence, it is a cycle as \(\min(P(T_{1}<T_{2}),P(T_{2}<T_{3}),P(T_{3}<T_{1}))>\frac{1}{2}\).

The derivation of this loss can be found in the Appendix. This loss can be used for all models presented in the previous section, namely PH, AFT, AFT-H, Weibull and the exponential family. Whenever the model is identifiable, the MLE is consistent for the true parameter and thus Fisher consistent to the C-index. An important advantage of this loss is that the censored samples have an explicit role and provide signal during the learning procedure. Note that penalized versions of (4.1) can also be used to obtain explicit finite sample risk bounds (see Prop. 4.4).
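As a concrete instance (a sketch, not code from the paper), assume an exponential PH model \(\mu_{f}(t|x)=e^{f(x)}\exp(-e^{f(x)}t)\) with linear \(f(x)=x^{\top}\theta\); the censored negative log-likelihood per sample then reduces to \(-\Delta f(x)+e^{f(x)}u\), which is convex in \(\theta\):

```python
import numpy as np

def fit_exponential_mle(X, u, delta, lr=0.01, n_iter=3000):
    """Censored MLE for an exponential PH model mu_f(t|x) = h * exp(-h * t)
    with h = exp(x @ theta): log mu = f - exp(f)*u and log S = -exp(f)*u,
    so the per-sample negative log-likelihood is -delta*f + exp(f)*u."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        f = X @ theta
        grad = X.T @ (np.exp(f) * u - delta) / len(u)  # gradient of mean NLL
        theta -= lr * grad
    return theta   # f(x) = x @ theta is the learned risk ranking
```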
**Smooth C-index (A, B, C, D).** In family D, one cannot assume the existence of an optimal risk ordering. In this case, an alternative method is smooth C-index maximization, which is based on smoothing the indicator function defining the C-index under censoring as presented in Sec. 2.2, leading to a non-convex smooth loss (Mayr et al., 2016). The non-convexity of this approach may cause convergence problems and a scaling parameter may be tuned to guarantee proper convergence.
**Estimating pairwise probabilities (A, B, C, D).** The smooth C-index has two main problems: its non-concavity and the fact that its optimizer is not robust to marginal distribution shifts due to Prop. 3.8. We propose a novel methodology whereby, instead of learning a risk function \(f\) specifying the ranking on the training cohort, we (1) learn the pairwise conditional probabilities \(\mathbb{P}\left\{T>T^{\prime}|x,x^{\prime}\right\}\) on the training data with an estimator \(h(x,x^{\prime})\) and then (2) construct the ranking that best satisfies the relative order constraints in expectation over the finite validation cohort \(x_{1},\ldots,x_{n}\) by solving:
\[\min_{\sigma\in\mathcal{S}}\sum_{i,j=1}^{n}\gamma_{ji}1(\sigma(i)<\sigma(j)), \tag{13}\]
where \(\mathcal{S}\) is the set of permutations of size \(n\) and \(\gamma_{ij}=|2h(x_{i},x_{j})-1|1(h(x_{i},x_{j})>1/2)\). The derivation of this methodology follows easily from the C-index expression (4) and it is consistent by construction. The combinatorial problem (13) is known as the Minimum Weight Feedback Arc Set (MWFAS) problem (Duchi et al., 2010; Karp, 1972); it is known to be NP-Hard. However, multiple approximations of this problem exist (Even et al., 1998; Demetrescu and Finocchi, 2003).
This method addresses the two problems of the smooth C-index. First, the pairwise conditional probabilities can be learned using convex estimation methods, such as logistic regression on the binary problem \(\hat{Y}=\operatorname{sign}(T-T^{\prime})\) and \(\hat{X}=(X,X^{\prime})\). Second, the ranking estimator is robust to marginal distributional shifts as the inference algorithm (13) is computed on the validation cohort. It is interesting to note that the computational bottleneck of the smooth C-index, coming from its non-concavity, has now been transposed into the computational bottleneck of the combinatorial inference problem.
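A sketch of the two-stage procedure (not code from the paper): pairwise probabilities are learned here with scikit-learn's logistic regression on concatenated covariate pairs, ignoring censoring for simplicity, and, in place of the MWFAS approximation algorithms cited above, the final ranking uses a simple Copeland-style greedy heuristic.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def fit_pairwise(X, t):
    """Stage 1: learn h(x, x') ~ P(T > T' | x, x'), here from fully
    observed (uncensored) training times for simplicity."""
    pairs, labels = [], []
    for i, j in combinations(range(len(t)), 2):
        pairs.append(np.concatenate([X[i], X[j]]))
        labels.append(int(t[i] > t[j]))
    return LogisticRegression(max_iter=1000).fit(np.array(pairs), np.array(labels))

def greedy_ranking(model, X_val):
    """Stage 2: rank the validation cohort. Instead of solving the NP-hard
    problem (13) exactly, order patients by the total weight gamma_ij of
    the pairwise comparisons they win (a Copeland-style heuristic)."""
    n = len(X_val)
    h = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                z = np.concatenate([X_val[i], X_val[j]])[None, :]
                h[i, j] = model.predict_proba(z)[0, 1]  # estimate of P(T_i > T_j)
    gamma = np.maximum(2 * h - 1, 0.0)       # |2h - 1| * 1(h > 1/2)
    return np.argsort(-gamma.sum(axis=1))    # longest predicted survival first
```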
### Excess Risk Bounds
The question is now to obtain excess risk bounds on the C-index when there exists an optimal risk ranking \(f^{\star}\) satisfying (5). The following Thm. 4.2 bounds the excess risk of the C-index.
**Theorem 4.2** (Excess risk bounds).: _Let \(f^{\star}\) be an optimal risk ranking satisfying (5). Let \(L>0\) a positive constant satisfying_
\[|2\mathbb{P}\left\{T>T^{\prime}|x,x^{\prime}\right\}-1|\leq L|f^{\star}(x)-f^ {\star}(x^{\prime})|, \tag{14}\]
_for all pairs \(x,x^{\prime}\). Then, the excess risk of the C-index can be bounded as_
\[C(f^{\star})-C(f)\leq 2L\,\mathbb{E}_{X}\,|f(X)-f^{\star}(X)|.\]
The proof of this result can be found in the Appendix, and is based on a reduction of the problem to a binary classification problem with input \((X,X^{\prime})\) and output \(Z=\operatorname{sign}(T-T^{\prime})\), similar to Agarwal (2014) for the bipartite ranking setting. The following Prop. 4.3 shows that condition (14) is satisfied for most of the models presented in Sec. 3. The proof is in the Appendix.
Figure 2: C-index empirical excess risks of the different methods of ranking for various training sizes \(n\). The left, center and right figures correspond to data generated from models satisfying assumptions A, B and C, respectively.
**Proposition 4.3**.: _Condition (14) is satisfied by:_
1. _The PH model with_ \(L=1\)_._
2. _The AFT model (_8_) with_ \(L\) _the Lipschitz constant of the cumulative distribution function_ \(F_{\varepsilon-\varepsilon^{\prime}}\) _of the symmetric random variable_ \(\varepsilon-\varepsilon^{\prime}\)_._
3. _The AFT-H model (_9_) with_ \(L\) _the same as AFT scaled by a factor_ \(a\) _where_ \(\sigma(x)\geq 1/a\) _for all_ \(x\)_._
4. _The exponential family (_10_)._
We now provide two examples of application of Prop. 4.3.
**Example of Lasso estimator on Cox PH model** Prop. 4.3 shows that Thm. 4.2 applies in particular to the PH model. For this specific case, we illustrate our theoretical findings with an application to the Lasso estimator for the Cox PH model. This estimator is analyzed in Huang et al. (2013), where finite sample bounds on the \(\ell_{1}\)-penalized negative log-likelihood estimation and prediction errors are proven. More specifically, assuming a generalized linear model \(f^{\star}(x)={\theta^{\star}}^{\top}x\), the Lasso estimator \(\hat{\theta}_{\text{Lasso}}\) is shown to satisfy \(\|\hat{\theta}_{\text{Lasso}}-{\theta^{\star}}\|_{1}\lesssim\frac{\|{\theta^ {\star}}\|_{0}\log(d)}{n}\) in high probability for sufficiently large training datasets. Operator \(\lesssim\) denotes inequality up to log and constant terms depending on the model, and \(\|{\theta^{\star}}\|_{0}\) denotes the number of non-zero entries of \({\theta^{\star}}\). Combining this result with Thm. 4.2, and noticing that
\[\mathbb{E}_{X}\left|(\hat{\theta}_{\text{Lasso}}-{\theta^{\star}})^{\top}X \right|\leq\|\hat{\theta}_{\text{Lasso}}-{\theta^{\star}}\|_{1}\,\mathbb{E}_{ X}\,\|X\|_{\infty},\]
we obtain the following informal result.
**Proposition 4.4** (**informal**).: _With probability at least \(1-\varepsilon(n)\), \(\varepsilon(n)\to 0\) as \(n\to+\infty\),_
\[C({\theta^{\star}})-C(\hat{\theta}_{\text{Lasso}})\lesssim\mathbb{E}_{X}\,\|X \|_{\infty}\cdot\frac{\|{\theta^{\star}}\|_{0}\log(d)}{n}.\]
The quantity \(\mathbb{E}_{X}\,\|X\|_{\infty}\) depends on the covariates model and corresponds to "the size" of the input space.
**Finite sample bounds for family B.** Under assumption B and using the smooth cost function (12) we can obtain excess risk bounds on the C-index in terms of the excess of the smooth risk \(\mathcal{R}(f)\).
**Theorem 4.5**.: _Let \(\mathcal{R}\) be the risk defined in (12). Under assumption B, the following inequality holds:_
\[C^{\star}-C(f)\leq 4L\gamma\sqrt{\mathcal{R}(f)-\mathcal{R}^{\star}},\]
_where \(\Omega^{\prime\prime}(u)\geq 1/\gamma^{2}\) for all \(u\) in the domain \(\mathcal{C}\)4._
Footnote 4: Recall that \(\Omega\) is convex thus \(\Omega^{\prime\prime}(u)\geq 0\)
Combining the above Thm. 4.5 with finite sample bounds on the excess of the smooth risk, such as the ones obtained by Ausset et al. (2019), one can translate these bounds to the C-index.
## 5 Experiments
In this section we perform experiments to validate our theoretical findings 5. More specifically, we assess empirically the consistency of different estimation methods with respect to the C-index, under simulation regimes corresponding to families \(\mathrm{A}\), \(\mathrm{B}\) and \(\mathrm{C}\).
**Data Generation and Evaluation Procedure** We simulate survival data using the three different regimes A, B and C presented in Sec. 3. We first simulate the training covariates \(x_{1},\ldots,x_{n}\in\mathbb{R}^{d}\) as well as a unit vector \(\beta\in\mathbb{R}^{d}\) with \(d=10\) and parameterize the optimal ranking linearly \(f(x)=\beta^{\top}x\). We then simulate the corresponding time-to-events \(t_{1},\ldots,t_{n}\) as realizations of the distribution of \(T|X\), parameterized by \(\beta\) and depending on the selected regime of simulation. Four different models are fit using the cost functions studied in Sec. 4. First, a linear model (**L-MSE**) optimizing a mean square error loss function to regress the conditional expectation of \(T\) given \(x\); second, a linear Cox model (**Cox**) optimizing a log-likelihood; third, a linear model (**L-smooth\({}_{\sigma}\)**) optimizing a smooth C-index (with smoothing parameter \(\sigma\in\{.01,10\}\) that accounts for the smoothness of the approximation of the indicator function), and finally a pairwise model (**MWFAS**) that predicts pairwise probabilities using XGBoost, and from which the ranking is obtained by solving the MWFAS combinatorial problem presented in Sec. 4.1 with a fast approximation algorithm. We compute the obtained C-index from a test dataset of fixed size (\(n_{\text{test}}=3000\)) using the same distribution as the training dataset.
Footnote 5: Code can be found in [https://github.com/owkin/owkin-metric](https://github.com/owkin/owkin-metric)
**Results** The results are provided in Fig. 2. For the three generation regimes, we observe that the proposed method **MWFAS** yields the best performance and converges to the optimal C-index when \(n\) is sufficiently large. It has similar performance to **Cox** in the regime \(\mathrm{A}\) where **Cox** is well specified. When **Cox** is not well-specified it does not converge to the optimal ranking. Lastly, note that the value of the smoothing parameter used by **L-smooth\({}_{\sigma}\)** can harm the performance of the resulting model and it is important to choose it properly to improve convergence guarantees.
### Acknowledgements
The authors would like to thank Paul Trichelair for his valuable guidance throughout the project.
|
2308.10615 | Beyond 2-D Mass-Radius Relationships: A Nonparametric and Probabilistic
Framework for Characterizing Planetary Samples in Higher Dimensions | Fundamental to our understanding of planetary bulk compositions is the
relationship between their masses and radii, two properties that are often not
simultaneously known for most exoplanets. However, while many previous studies
have modeled the two-dimensional relationship between planetary mass and radii,
this approach largely ignores the dependencies on other properties that may
have influenced the formation and evolution of the planets. In this work, we
extend the existing nonparametric and probabilistic framework of \texttt{MRExo}
to jointly model distributions beyond two dimensions. Our updated framework can
now simultaneously model up to four observables, while also incorporating
asymmetric measurement uncertainties and upper limits in the data. We showcase
the potential of this multi-dimensional approach to three science cases: (i) a
4-dimensional joint fit to planetary mass, radius, insolation, and stellar
mass, hinting of changes in planetary bulk density across insolation and
stellar mass; (ii) a 3-dimensional fit to the California Kepler Survey sample
showing how the planet radius valley evolves across different stellar masses;
and (iii) a 2-dimensional fit to a sample of Class-II protoplanetary disks in
Lupus while incorporating the upper-limits in dust mass measurements. In
addition, we employ bootstrap and Monte-Carlo sampling to quantify the impact
of the finite sample size as well as measurement uncertainties on the predicted
quantities. We update our existing open-source user-friendly \texttt{MRExo}
\texttt{Python} package with these changes, which allows users to apply this
highly flexible framework to a variety of datasets beyond what we have shown
here. | Shubham Kanodia, Matthias Y. He, Eric B. Ford, Sujit K. Ghosh, Angie Wolfgang | 2023-08-21T10:28:45Z | http://arxiv.org/abs/2308.10615v1 | Beyond 2-D Mass-Radius Relationships: A Nonparametric and Probabilistic Framework for Characterizing Planetary Samples in Higher Dimensions
###### Abstract
Fundamental to our understanding of planetary bulk compositions is the relationship between their masses and radii, two properties that are often not simultaneously known for most exoplanets. However, while many previous studies have modeled the two-dimensional relationship between planetary mass and radii, this approach largely ignores the dependencies on other properties that may have influenced the formation and evolution of the planets. In this work, we extend the existing nonparametric and probabilistic framework of MRExo to jointly model distributions beyond two dimensions. Our updated framework can now simultaneously model up to four observables, while also incorporating asymmetric measurement uncertainties and upper limits in the data. We showcase the potential of this multi-dimensional approach to three science cases: (i) a 4-dimensional joint fit to planetary mass, radius, insolation, and stellar mass, hinting of changes in planetary bulk density across insolation and stellar mass; (ii) a 3-dimensional fit to the California Kepler Survey sample showing how the planet radius valley evolves across different stellar masses; and (iii) a 2-dimensional fit to a sample of Class-II protoplanetary disks in Lupus while incorporating the upper-limits in dust mass measurements. In addition, we employ bootstrap and Monte-Carlo sampling to quantify the impact of the finite sample size as well as measurement uncertainties on the predicted quantities. We update our existing open-source user-friendly MRExo Python package with these changes, which allows users to apply this highly flexible framework to a variety of datasets beyond what we have shown here.
## 1 Introduction
In the \(\sim 30\) years since the discovery of the first extrasolar planets (Wolszczan and Frail, 1992; Mayor and Queloz, 1995), astronomers have discovered over 5000 exoplanets (NASA Exoplanet Archive; Akeson et al., 2013). The growth in sample size lends itself to the use of increasingly sophisticated statistical tools and increasing the dimensionality of the models used for interpreting the exoplanet sample (and population). For example, based on the first handful of known exoplanets from radial velocities (RVs), Gonzalez (1997) noted a preference for giant planets to be found around metal-rich host stars. This trend has held up with more sophisticated analysis of larger samples of short-period giant exoplanets from both RV and transiting surveys (Santos et al., 2001; Fischer and Valenti, 2005; Ghezzi et al., 2010; Sousa et al., 2011; Buchhave et al., 2014; Wang and Fischer, 2015; Petigura et al., 2018; Narang et al., 2018).
Another observed feature in the exoplanet population is the "radius gap" for small exoplanets (\(R_{p}<4\)\(R_{\oplus}\)) that was first predicted by Owen and Wu (2013) and Lopez and Fortney (2013) and identified observationally by Fulton et al. (2017) using a sample of transiting planets from the _Kepler_ mission (Borucki et al., 2010) combined with precise stellar parameters from the California-_Kepler_ Survey (CKS; Petigura et al., 2017). Initially, the radius gap referred to a deficit in planets with radii \(\sim 1.7\)\(R_{\oplus}\) in a histogram (1-D space). Now,
it refers to a valley in 2-D planet radius-orbital period space (Fulton et al., 2017; Van Eylen et al., 2018; Berger et al., 2018; Martinez et al., 2019; Hsu et al., 2019). In a quest to disambiguate between the various physical mechanisms that can produce this deficit of planets, astronomers have considered how the radius gap varies with additional planetary and stellar parameters, e.g., using slices of the 1-D radius histogram or 2-D radius-period plane for different stellar properties (e.g., Fulton and Petigura, 2018; Berger et al., 2020; Van Eylen et al., 2021; Otegi et al., 2020), stellar metallicities (Owen and Murray-Clay, 2018; Petigura et al., 2022; Otegi et al., 2020), and ages (Berger et al., 2020; Petigura et al., 2022).
Similar 2-D models have been used in a wide variety of exoplanet studies, such as planet mass-metallicity relations (Welbanks et al., 2019, and references therein) and the haziness of planet atmospheres (e.g., Yu et al., 2021; Edwards et al., 2022). Likewise, ALMA measurements of Type II protoplanetary disks have helped estimate the mass in dust (with continuum measurements at \(\sim 870\)\(\mu\)m) and gas (typically using CO and its isotopologues). These studies found the relationships depend on the host stellar mass (e.g., Andrews et al., 2013; Ansdell et al., 2016; Pascucci et al., 2016) and other stellar properties.
Of course, planet structure models rely on more than two parameters, typically including 4-5 dimensions - planet radius, mass, equilibrium temperature (or insolation flux), and age - (Fortney et al., 2007; Baraffe et al., 2008; Muller and Helled, 2021). A similar increase in the dimensionality and the complexity of modeling tools has also taken place for characterizing planetary mass-radius (MR) relations. Initially, studies assumed deterministic 2-D power laws (Seager et al., 2007; Wu and Lithwick, 2013; Weiss and Marcy, 2014; Thorngren et al., 2019). More recently, studies have used Hierarchical Bayesian Modeling (HBM) to develop probabilistic models based on a 2-D power-law (Wolfgang et al., 2016) or piecewise power-law over the mass-radius plane (Bashi et al., 2017; Chen and Kipping, 2017; Otegi et al., 2020). These parametric models assume a relatively simple mathematical model over some region of the MR plane. However, it appears that a more flexible model is required to capture the MR relation over a broader range of planetary radii and masses. Further, most of these models cannot reproduce all of the observed features in the 2-D period-radius (\(P\)-\(R_{p}\)) plane, such as the radius-valley or the Neptune desert (Mazeh et al., 2016). Additionally, it is not clear what the functional form of these relations should be, particularly when additional dimensions beyond just mass and radius are considered.
In parallel, a variety of nonparametric methods have been employed (e.g., beta density functions, Ning et al., 2018; Kanodia et al., 2019; the Maximum Entropy approach, Ma and Ghosh, 2019; random forests, Ulmer-Moll et al., 2019; neural networks, Tasker et al., 2020) to characterize the exoplanet MR relation. MR relations can be useful to infer the composition of planets based on their bulk density (Lopez and Fortney, 2014; Rogers, 2015; Zeng et al., 2019) and for predicting other planetary properties (Chen and Kipping, 2017; Kanodia et al., 2019). Some studies have expanded the MR relationship to three dimensions (MR+) either using a product of power laws (Weiss et al., 2013) or Bayesian models (Neil and Rogers, 2018, 2020; Ma and Fuller, 2021).
In this work we expand on previous work by Ning et al. (2018) and Kanodia et al. (2019) offering a nonparametric method for inferring the probability density describing a 2-D sample using beta density functions. Here we allow for the simultaneous modeling of up to four dimensions1 and provide an implementation in the updated MRExo2 Python package (Kanodia et al., 2019). While primarily developed as an expansion to the MR relation, it can be used as a general purpose modeling tool between any (up to four) measured quantities. Additionally, it has been generalized to work with symmetric or asymmetric measurement uncertainties and can incorporate observations resulting in upper limits. Some examples of 3-D spaces that can be modeled using such a framework are: studying Type II disk dust mass (including upper limits) as a function of stellar mass and age, estimating log(_C/O_) as a function of planet radius and insolation flux. Similarly in 4-D, one can jointly model the M-R-insolation space as a function of stellar mass, or conversely M-R and orbital separation as a function of stellar metallicity. MRExo can also be used to infer the dependence of water scale height in transmission spectroscopy (or conversely haze amplitude) as a function of equilibrium temperature, surface gravity and stellar insolation (bolometric or high-energy).
Footnote 1: While the current algorithm can fit four dimensions, it can be trivially expanded to higher dimensions if required.
Footnote 2: [https://github.com/shbhuk/mrexo](https://github.com/shbhuk/mrexo)
In Section 2 we describe the generalized nonparametric model. In Section 3 we present a few scientific applications to demonstrate the utility and advantages of the multi-dimensional nonparametric approach. Finally, we conclude in Section 4. A detailed appendix discusses the updates to the model from previous work, as well as some salient features of the model.
## 2 Model

We expand the framework from Ning et al. (2018) and Kanodia et al. (2019) to use Bernstein polynomials3 for multivariate density estimation, i.e. to model an \(n\)-dimensional joint distribution of the variables \(f(x_{1},x_{2},..,x_{n})\), where \(x_{t}\) represents the variable within different dimensions such as mass, radius, insolation, etc. This is essentially the probability density of having a particular set of variables \(x_{1},x_{2},..,x_{n}\), given weights (or coefficients) \(\mathbf{w}\), and polynomial degrees \(d^{(1)},...,d^{(n)}\).
Footnote 3: When normalized, each Bernstein polynomial has the same functional form as beta density functions. See the Appendix from Ning et al. (2018) for more details on the choice of basis function.
For example, if we assume a 3-D joint distribution, \(f(x_{1},x_{2},x_{3}|\mathbf{w},d^{(1)},d^{(2)},d^{(3)})\), for the probability density of a planet to have three properties \(x_{1}\), \(x_{2}\), and \(x_{3}\), then the value of \(f\) at each point in this continuous space should be interpreted as the probability density for a planet to exist with the given \(x_{1}\), \(x_{2}\), \(x_{3}\). The density is uniquely specified by a set of non-negative weights \(\mathbf{w}\) and degrees \(d^{(1)},d^{(2)},d^{(3)}\). Next, to use this model for predictive purposes, the joint distribution can be conditioned to obtain a conditional probability density function (PDF). Following the laws of conditional probability (see Equation 10 from Ning et al., 2018),
\[f(x_{1}|x_{2},x_{3},\mathbf{w},d^{(1)},d^{(2)},d^{(3)})=\frac{f(x_{1},x_{2},x_{3}|\mathbf{w},d^{(1)},d^{(2)},d^{(3)})}{\int f(x_{1},x_{2},x_{3}|\mathbf{w},d^{(1)},d^{(2)},d^{(3)})\ \mathrm{d}x_{1}} \tag{1}\]
\[=\frac{f(x_{1},x_{2},x_{3}|\mathbf{w},d^{(1)},d^{(2)},d^{(3)})}{f(x_{2},x_{3}|\mathbf{w},d^{(1)},d^{(2)},d^{(3)})}, \tag{2}\]
Then, the expected value for \(x_{1}\) can be computed from the PDF,
\[E(x_{1})=\frac{\int x_{1}f(x_{1}|...)\ \mathrm{d}x_{1}}{\int f(x_{1}|...)\ \mathrm{d}x_{1}}=\int x_{1}f(x_{1}|...)\ \mathrm{d}x_{1}, \tag{3}\]
where the second equality holds because the conditional PDF in the denominator integrates to unity.
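To make Equations 1-3 concrete, the short sketch below evaluates a conditional PDF and its expectation value from a joint density tabulated on a grid. This is our illustration rather than MRExo's internal code; the grid, the conditioning indices, and the random density are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical grid for x1 and a tabulated joint density f(x1, x2, x3 | w, d);
# random numbers stand in for the actual Bernstein-polynomial density.
x1 = np.linspace(0.0, 1.0, 200)
f_joint = np.random.rand(200, 50, 50)

# Condition on the grid cells nearest the desired x2 and x3 values.
f_slice = f_joint[:, 10, 25]

# Equation 2: normalize by the marginal to obtain f(x1 | x2, x3).
f_cond = f_slice / np.trapz(f_slice, x1)

# Equation 3: the expectation value of x1 under the unit-normalized conditional PDF.
E_x1 = np.trapz(x1 * f_cond, x1)
print(E_x1)
```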
One advantage of the Bernstein polynomial formulation is that conditional probabilities and expectation values can be computed efficiently. The detailed mathematical formalism for the general \(n\)-dimensional case is included in Appendix A, including the joint distribution (Appendix A.1) and the likelihood for the model (Appendix A.2). The likelihood is maximized to estimate the two unknown sets of parameters in the model: the matrix of weights \(\mathbf{w}\) and the choice of degrees for each dimension (Appendix A.3). Similar to Kanodia et al. (2019), we set the edge weights in each dimension (e.g., the first and last row and column in 2-D) to zero to reduce edge effects. Furthermore, we account for asymmetric measurement uncertainties, which lets us include upper (or lower) limits in our framework. We incorporate this into MRExo by modifying the framework as explained in Appendix B. Lastly, we include the possibility to estimate the optimum number of degrees for a given dataset by either using the Akaike Information Criterion (AIC; Akaike, 1973) or \(k\)-fold cross-validation (CV; James et al., 2013), and discuss this further in Appendix C.
For illustrative purposes, we show an example of a MR joint distribution -- \(f(m,r)\) -- in 2-D for a sample of 182 planets from the NASA Exoplanet Archive (Akeson et al., 2013; NASA Exoplanet Archive, 2022), queried on 2023 March 6 for planets with mass and radius measurements of \(>3\sigma\) precision, planetary radii \(<4\ R_{\oplus}\), and stellar masses \(<1.5\ M_{\odot}\). The data and fitted joint distribution in planet mass-radius are shown in Figure 1. We used MRExo with this sample of \(\sim 180\) planets and the cross-validation method to select 40 degrees for this model by maximizing the log-likelihood, as discussed in Ning et al. (2018). The mean predictions for the distribution of planet masses conditioned on several values of planet radii are shown in Figure 2. In these figures (and throughout this paper), these distributions quantify the _observed_ samples rather than the intrinsic populations, due to detection biases, for which a detailed treatment is outside the scope of this work. Future work can combine
Figure 1: 2-D sample set with \(\sim 180\) planets. Joint distribution \(f(m,r)\) showing the masses and radii for the input dataset, where the colours in the background represent the PDF, with blue indicating higher probability and red lower.
the Bernstein polynomial formalism for nonparametric density estimation with other techniques to account for detection bias for characterizing an underlying population. Planet formation and evolution models predict that planet interiors and bulk structure are likely to depend on the size of the host star, the nature (and quantity) of primordial material in protoplanetary disks, stellar metallicity, stellar luminosity, etc. (Ida & Lin, 2004, 2005; Fortney et al., 2007; Burn et al., 2021), which motivates a higher-dimensional extension to the mass-radius relationship. This is also seen empirically in Figure 3, by colour-coding the planet mass-radius plane by stellar mass and insolation flux.
## 3 Science Applications
In this section we give a few examples of the updated MRExo framework, allowing for higher dimensional datasets and also asymmetric errorbars to include upper (or lower) limits in the models.
### Mass-Radius+ with MRExo
We apply a 4-D model to the dataset described in Section 2 with \(\sim 180\) planets, across planetary masses, radii, insolation fluxes, and stellar masses. By extending the cross-validation framework described in Ning et al. (2018) to higher dimensions, we select the optimum number of degrees, after maximizing the log-likelihood, to be 40 in each dimension. Here we note that while the framework allows for unequal degrees in each dimension, for speed and simplicity we assume an equal number of degrees. This results in \((40-2)^{4}\simeq 2.1\times 10^{6}\) weights for the Bernstein polynomials, 95% of which are less than \(10^{-8}\) (the absolute tolerance adopted during numerical integration).
We calculate the joint distribution \(f(M_{p},R_{p},M_{\star},S)\), which is then conditioned on different quantities to predict the expected planetary mass as a function of planetary radius, stellar mass and insolation flux -- \(f(M_{p}|R_{p},M_{\star},S)\). We characterize one source of uncertainty on this conditional distribution using a Monte-Carlo approach where the input sample is perturbed within the measurement uncertainties, and then re-fit to obtain a new set of weights, joint distribution and predictions. Furthermore, since this analysis has been performed in sample space (without accounting for detection
Figure 3: 2-dimensional representations of the input sample in the planetary mass-radius plane, coloured by stellar insolation and stellar mass, respectively. It is evident that insolation- and/or stellar-mass-dependent bulk-density trends are harder to tease out with a conventional 2-dimensional analysis.
Figure 2: Conditional mass distribution \(f(m|r)\) obtained by conditioning the joint distribution shown in Figure 1 on three different radii. The dashed lines and quoted masses are the expected values (Equation 3) for each radius, whereas the histograms show the predictions from each Monte-Carlo realization for illustrative purposes.
Figure 4: We perform a 4-dimensional fit to the sample of confirmed small planets (\(R_{p}<4R_{\oplus}\)), and then condition it to show the variation in planetary bulk-density with insolation and stellar mass. For each panel, the bottom \(x\)-axis denotes the insolation flux received by the planet (same for all panels) while the top \(x\)-axis denotes the orbital period (which varies across the panels due to the different stellar masses). To convert insolation to orbital period, we adopt stellar luminosities from Table 6 in Cifuentes et al. (2020). The solid lines depict the expectation value (mean) of the prediction, whereas the shaded region shows the 16th – 84th percentile region from 100 bootstrapped samples. For reference, we also plot the values for the mean bulk densities of Earth and Neptune as the dashed and dashed-dotted lines, respectively.
Figure 5: Similar to the previous figure, we condition the 4-dimensional joint distribution to obtain the expectation value for planetary mass as a function of different planetary radii, insolation and plotted across stellar mass. The shaded region depicts the 16th – 84th percentile region from 100 bootstrapped samples.
biases), we also bootstrap resample (with replacement) the data 100 times to estimate the impact of the small sample size in 4-D when making predictions with the model (the model uncertainty due to the finite sample size is further described in Appendix D). For this application, we find that the variance from bootstrapping the sample exceeds the Monte-Carlo uncertainties.
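The perturbation and resampling loops themselves are simple to set up, as in the numpy sketch below (our own illustration; the sample, error bars, and array names are hypothetical, and each draw would in practice be re-fit with MRExo):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 180                          # hypothetical sample size
X = rng.uniform(size=(N, 4))     # stand-in for (mass, radius, insolation, Mstar)
sig_u = 0.05 * np.ones_like(X)   # hypothetical upper error bars
sig_l = 0.04 * np.ones_like(X)   # hypothetical lower error bars

def perturb(X, sig_u, sig_l, rng):
    """One Monte-Carlo draw: perturb each point within its asymmetric errors."""
    z = rng.standard_normal(X.shape)
    return X + np.where(z >= 0.0, z * sig_u, z * sig_l)

def bootstrap(X, rng):
    """One bootstrap draw: resample the rows of X with replacement."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx]

X_mc = perturb(X, sig_u, sig_l, rng)   # probes the measurement uncertainties
X_bs = bootstrap(X, rng)               # probes the finite sample size
```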
Then we convert the predicted mass into bulk density4 to consider the change in bulk density for planets of different radii, insolation fluxes and stellar masses (Figure 4 and Figure 5). This example demonstrates an application of this technique. While we note a few preliminary trends here, we caution against over-interpretation given the heterogeneous nature of the dataset and the complication of inhomogeneous detection completeness5. More detailed analyses and the scientific interpretations of the predictions are left to future work. Figure 4 shows that (i) the detected sub-Neptunes (\(R_{p}>2.5~{}R_{\oplus}\)) have fairly constant bulk densities across insolation; and (ii) the detected Earth-radius objects tend to have bulk densities higher than Earth's. We caution that the trends seen at lower insolation for the 0.7 and 1.0 \(M_{\odot}\) cases have a large variance estimated from the bootstraps, which suggests that the predictive power in this region is low due to a small number of data points.
Footnote 4: To avoid confusion between bulk planetary density and the statistical usage of density, i.e., probability density, we use bulk density for the physical quantity (i.e., \(\mathrm{g~{}cm^{-3}}\)) and density for the statistical probabilities.
Footnote 5: Such a fit can be performed on a simulated dataset based on the well-characterized _Kepler_ data (Hsu et al., 2019; He and Ford, 2022) to reduce the impact of survey incompleteness.
Similarly, Figure 5 shows preliminary trends with stellar mass, where we see an increase in the bulk densities of the detected super-Earths (\(R_{p}\sim 1.5~{}R_{\oplus}\)) with stellar mass, by more than a factor of two (from 4 g cm\({}^{-3}\) to 9 g cm\({}^{-3}\)) between 0.3 \(M_{\odot}\) and 1.0 \(M_{\odot}\), whereas this effect is not seen for the gaseous sub-Neptunes. Since the RV semi-amplitude precision has been limited to 1 m s\({}^{-1}\) until recently, this trend could be due to the enhanced RV signatures of these small planets around lower-mass stars. While the high bulk density for rocky planets around solar-type stars could potentially be at least partially due to a detection bias, this cannot explain the lack of comparably high bulk density super-Earths around the lower-mass M-dwarfs (\(<0.6~{}M_{\odot}\)). This trend is seen across the samples for insolation fluxes 50 \(S_{\oplus}\) and above. In contrast, super-Earths around M-dwarfs tend to be much lower in bulk density, potentially indicative of water-worlds (50% water mass + 50% silicate mass fraction; Zeng et al., 2019; Luque and Palle, 2022), though Rogers et al. (2023) suggest that this bulk density could also be explained by volatile-rich H/He dominated atmospheres.
Finally, as a follow-up to the predictive function included with MRExo to predict planetary masses
Figure 6: Comparing Mass-Radius distributions after incorporating additional dimensions for a 1.5 \(R_{\oplus}\)planet. **Top:** The various coloured dashed lines represent the expectation value of \(f(m|r,stm)\), whereas the black dashed line represents the prediction from a 2D \(f(m|r)\) distribution. The grey histogram shows the distribution of predictions from a Monte-Carlo simulation on the input 2D dataset. **Bottom:** Predicted planetary mass as a function of stellar mass and insolation flux where the colour bar represents the expectation value for the planetary mass compared to the 2D prediction of \(\sim\) 4.5 \(M_{\oplus}\).
for samples of _Kepler_ (FGK hosts) and M-dwarf planets (Kanodia et al., 2019), we include a predictive function based on planetary mass, radius, insolation and stellar mass called calculate_conditional_distribution() with the new version of MRExo released with this manuscript. The results for the fit are included on Zenodo along with sample scripts on GitHub6.
Footnote 6: DOI: 10.5281/zenodo.8222163
As TESS contributes to the sample of planets with measured masses, the range of stellar masses and insolation fluxes covered by the planets is no longer restricted to predominantly short-period objects around Solar-type stars. This is evident in the sample shown in Figure 3. The impact of considering these additional dimensions is shown in Figure 6 with the same dataset and fit presented above, where the predicted planetary mass for a 1.5 \(R_{\oplus}\) planet can change by more than a factor of five across this parameter space.
### CKS-X data: the Exoplanet Radius Valley
Here, we use MRExo to apply our model framework to the sample of exoplanets from the California-_Kepler_ Survey (CKS-X; Petigura et al., 2022). The CKS-X sample is a subset of the _Kepler_ DR25 planet catalog with precise stellar properties as measured from optical spectra obtained on Keck/HIRES (Vogt et al., 1994). This sample builds upon the catalog presented in CKS-I (Petigura et al., 2017) by incorporating additional stellar spectra of planet-hosting stars extending down to \(\sim 0.4\)\(M_{\odot}\), expanding the original sample of spectra for stars in the \(\sim 0.8-1.4\)\(M_{\odot}\) range, for a total of 1246 KOIs orbiting 888 host stars (Petigura et al., 2022). The CKS-X sample thus provides an excellent dataset for modeling the planet period-radius distribution as a function of stellar host properties, as already demonstrated in Petigura et al. (2022).
While Petigura et al. (2022) characterized the joint \(P\)-\(R_{p}\) distribution using a series of Gaussian kernel density estimates (KDEs) by dividing the sample into several stellar mass bins, our approach enables the simultaneous fitting of the full 3-D (or even higher-dimensional) distribution using a joint \(P\)-\(R_{p}\)-\(M_{\star}\) distribution, which we can condition on any stellar mass in the range constrained by the data. In Figure 7, we plot the CKS-X sample from Petigura et al. (2022) in \(P\)-\(R_{p}\)-\(M_{\star}\) space. We apply a few minor filters and modifications to the CKS-X data before fitting the non-parametric model, by: (1) keeping only planets with orbital periods in the range of \(P=[1,100]\) days and radii in the range \(R_{p}=[0.6,6]R_{\oplus}\) (which also filters out some objects with spuriously large values), (2) keeping only planets around stars with stellar masses between \(M_{\star}=[0.4,1.6]M_{\odot}\) and removing those with no stellar mass uncertainties ('E_Mstar-iso'= 0), and (3) assuming no uncertainties in the orbital periods. This results in 1073 remaining planets, for which we fit a model to their joint \(P\)-\(R_{p}\)-\(M_{\star}\) distribution using 30 degrees for each dimension, chosen to be close to the optimal number of degrees from the cross-validation method (we note that the AIC method chooses far fewer degrees, \(d\sim 10\)). In Figure 8, we show the resulting joint \(P\)-\(R_{p}\) distributions conditioned on various values of stellar mass, \(f(P,R_{p}|M_{\star})\). In each panel, the radius valley is clearly visible as the relative dip between two modes of peak probability density. The detection efficiency decreases for smaller and longer-period planets, and this clearly contributes to the observed decrease in density at the smallest sizes and longest periods. However, the detection efficiency varies smoothly and does not have a local maximum that would produce a local minimum in the \(P\)-\(R_{p}\) plane; the radius valley therefore cannot be attributed to selection effects. This is confirmed by other non-parametric population analyses that do model the complex detection efficiency of the _Kepler_ mission (Hsu et al., 2019; Kunimoto and Matthews, 2020; Bryson et al., 2021). We also note that the weights near the boundaries of each dimension can be less reliable. To guard against this, the chosen bounds should be away from regions of scientific interest if feasible. Another possibility is to try joint fits with and without the edge polynomials, and quantify the impact on inferred
Figure 7: The CKS-X sample in plotted in period–planet radius–stellar mass (\(P\)-\(R_{p}\)-\(M_{\star}\)), consisting of 1073 planets given our cuts as described in Section 3.2. Each point denotes a planet, where the color denotes its host stellar mass.
Figure 8: **Modeling the distribution of planets in 3-D (period–planet radius–stellar mass) using the CKS-X planet sample.** Joint planet radius–period distributions conditioned on various stellar masses (i.e. \(f(P,R_{p}|M_{*})\), where \(M_{*}=0.6\), 0.8, 1.0, and 1.2 \(M_{\odot}\), as labeled above each panel). The model was fit to the CKS-X sample (1073 planets, as filtered in Section 3.2) with a fixed number of degrees in each dimension (\(d=30\), chosen from the cross-validation method). These distributions represent the modeled-observed distributions, as no detection biases were corrected for in any manner. The color-scale in each panel represents the conditional probability density such that each conditional distribution integrates to unity, as computed in \(\log R_{p}\)–\(\log P\) space.
conditional PDFs using standard distribution comparison metrics.
While some methods have been recently devised to fit the radius valley using a linear relation (see e.g., Berger et al., 2023), their results are sensitive to the exact procedure and we do not attempt to fit a functional form to the exact location of the radius valley in this work. Yet, our non-parametric model provides an avenue for future studies to characterize the radius valley as a function of stellar properties that has at least two advantages: (1) it does not rely on discretizing the data into various bins (e.g. of stellar mass), and (2) the radius valley can be fit to the full (e.g., 3-D) joint distribution that is characterized by a flexible, probabilistic model, instead of fitting to slices of kernel density estimates (KDEs, as in Berger et al., 2023). From Figure 8, we make the qualitative observation that the location of the radius valley (in terms of planet radius) appears to increase with stellar mass, consistent with previous findings with the CKS data (Berger et al., 2020) and predictions from theoretical models for photoevaporation (e.g., Owen and Wu, 2013; Wu, 2019) and core-powered mass loss (Gupta and Schlichting, 2020).
### Class II Protoplanetary Disk Dust masses
We also use MRExo on a sample of 69 Class II protoplanetary disks in the 1 - 3 Myr Lupus sample based on ALMA observations (Ansdell et al., 2016). Specifically, we perform a joint fit on the stellar mass and disk dust mass while including the 3-\(\sigma\) (99.7%) upper limits for the latter as a combination of two half-normal distributions (described in Appendix B). Using the cross-validation method, we estimate 15 degrees in each dimension, and calculate the 2-D joint distribution (Figure 9).
Similar to the approach earlier, we condition the 2-D joint distribution on a few different stellar masses to obtain posteriors for the predicted dust masses based on the given sample, along with their Monte-Carlo uncertainties (Figure 10), thereby demonstrating the utility of this approach on a different dataset than exoplanet mass-radius.
## 4 Conclusion
We build upon the 2-dimensional nonparametric framework utilizing beta density functions as the basis set for density estimation (Ning et al., 2018; Kanodia et al., 2019) to perform simultaneous density estimation in up to four dimensions. Furthermore, we also modify the existing algorithm to allow measurement upper (and lower) limits to be fit. We discuss the caveats and the degeneracies in log-likelihood space associated with this dimensional expansion, and also run simulations to demonstrate the utility of the bootstrap and Monte-Carlo methods to explore the impact of the finite sample size and measurement precision of the dataset, respectively, on the inferred predictions. We summarize some of the salient features of this framework below:
Figure 10: Conditional distribution predicting the disk dust mass for different stellar masses. The dashed lines and the masses are the expected value (Equation 3) for each radius, whereas the histogram shows the predictions from the Monte-Carlo simulation.
Figure 9: Joint distribution of the stellar mass - disk dust mass distribution for 69 disks in the Lupus complex as presented in Ansdell et al. (2016). Additionally, the red shaded region depicts the power law fit from Ansdell et al. (2016) for reference.
* The non-parametric nature of the framework makes it agnostic to most7 assumptions for an intrinsic functional form (e.g., linear or power-law, etc.) and thus also very flexible. Footnote 7: It does assume that the joint density is continuous and smoothly varying, which are desirable properties of a model/well-behaved function. Our implementation also assumes that the density is bounded within the chosen box for the parameter space, since we set the weights along the \(n\)-dimensional boundary to zero; however, in principle this choice can be relaxed.
* Its probabilistic nature allows one to properly account for both the intrinsic astrophysical spread and measurement uncertainties in the data in a hierarchical framework.
* The model treats all dimensions symmetrically, performing a joint fit for the full \(n\)-dimensional distribution that does not assume that any dimension is dependent on another (e.g., planet mass as a function of radius or vice versa).
* The model framework naturally generalizes to higher dimensions (\(n\geq 2\)).
Motivated primarily by the final point above, we expand the framework by introducing the updates summarized below:
* We generalize the nonparametric model to be fit to any number of dimensions (Appendix A.1, A.2). In practice, this approach is feasible for performing joint fits in up to four dimensions (limited by the available memory for constructing the multi-dimensional arrays).
* We switch the optimizer used to calculate the coefficients for each weight to the MM-algorithm which is much more computationally efficient than previous methods (Appendix A.3).
* It can now account for asymmetric measurement uncertainties (i.e., different upper and lower error bars), as well as measurement upper limits, by treating the probability density function as a mixture of two half-normal distributions (Appendix B).
* We generalize the framework to allow for different degrees (i.e., varying levels of resolution or complexity) in each dimension (Appendix C).
* We provide two different methods for choosing the number of degrees by maximizing the log-likelihood and finding the optimum number of degrees: (i) the cross-validation (CV) method, and (ii) the AIC method. We find that the AIC method tends to return a lower number of degrees than cross-validation in the example applications considered (Appendix C).
* To quantify the model uncertainties due to the reduced density of samples in higher dimensional parameter spaces (i.e., the finite sample size), we include a bootstrap sampling algorithm, which can be used to quantify the variance in prediction outcomes due to this effect (Appendix D).
* We also include the possibility of performing Monte-Carlo sampling on the input dataset to quantify the impact of the measurement uncertainties on the predictions.
Finally, we combine these statistical techniques and explore three case studies to showcase the applicability of this methodology.
1. We perform a 4-dimensional fit to a sample of small planets (\(R_{p}<4\)\(R_{\oplus}\)) with mass measurements in the joint distribution of planetary mass, radius, insolation and stellar mass. The model hints at trends in bulk density with insolation for super-Earths and Neptunes. We also see hints in the sample that 1.5 \(R_{\oplus}\) super-Earths tend to be lower in bulk density around M-dwarfs (\(M_{\star}<0.6\)\(M_{\odot}\)) than FGK host stars. The absence of the higher bulk density super-Earths cannot be a detection bias, and reinforces previous studies of water-world super-Earths (or H/He rich sub-Neptunes) around M-dwarfs (Luque and Palle, 2022; Rogers et al., 2023).
2. We perform a 3-dimensional fit to the CKS-X sample in terms of the planetary radius, orbital period, and stellar mass. This example demonstrates that our nonparametric model can clearly capture the observed radius valley, as well as its dependence on host stellar mass without discretizing the sample into various bins (as was done in previous studies). We also use this example to showcase the utility of bootstrap resampling in masking out the regions in which the mean joint density is poorly constrained by the data (Appendix D).
3. We perform a 2-dimensional fit to a sample of protoplanetary disks in terms of their dust masses and host stellar masses, which offers more flexibility than the simple power-law fits used in previous studies. We demonstrate that this approach allows us to predict disk properties (including
the Monte-Carlo uncertainties) for different host-stellar masses while incorporating measurement upper limits.
Alongside this manuscript, we also release the updated version of our free, open-source Python package, MRExo, which allows users to perform their own exploration of different datasets in multi-dimensional space to tease out trends, as well as to use it as a predictive tool for inference.
## 5 Acknowledgements
We thank Suvrath Mahadevan, Johanna Teske, Gudmundur Stefansson, Anjali Piette and Peter Gao for helpful discussions and feedback regarding this manuscript. SK acknowledges Peter Gao for help with computing resources to perform some of the analysis presented in this manuscript.
The Pennsylvania State University campuses are located on the original homelands of the Erie, Haudenosaunee (Seneca, Cayuga, Onondaga, Oneida, Mohawk, and Tuscarora), Lenape (Delaware Nation, Delaware Tribe, Stockbridge-Munsee), Shawnee (Absentee, Eastern, and Oklahoma), Susquehannock, and Wahzhazhe (Osage) Nations. As a land grant institution, we acknowledge and honor the traditional caretakers of these lands and strive to understand and model their responsible stewardship. We also acknowledge the longer history of these lands and our place in that history.
Computations for this research were performed on the Pennsylvania State University's Institute for Computational and Data Sciences Advanced CyberInfrastructure (ICDS-ACI). This content is solely the responsibility of the authors and does not necessarily represent the views of the Institute for Computational and Data Sciences.
The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium.
This research made use of the (i) NASA Exoplanet Archive, which is operated by Caltech, under contract with NASA under the Exoplanet Exploration Program, (ii) SIMBAD database, operated at CDS, Strasbourg, France, and (iii) NASA's Astrophysics Data System Bibliographic Services.
This research made use of the following software: astropy (Robitaille et al., 2013; Astropy Collaboration et al., 2018), ipython (Perez & Granger, 2007), matplotlib (Hunter, 2007), MRExo (Kanodia et al., 2019, and this work), numpy (Oliphant, 2006), pandas (McKinney, 2010), and scipy (Oliphant, 2007; Virtanen et al., 2020).
## Appendix A Generalizing to \(2+\) Dimensions
### Joint Distribution
Generalizing Equation 7 from Ning et al. (2018) for the joint distribution from 2 to \(n\) dimensions we have,
\[f(x_{1},...,x_{n}|\mathbf{w},d^{(1)},...,d^{(n)})=\sum_{\tau_{1}=1}^{d^{(1)}}...\sum_{\tau_{n}=1}^{d^{(n)}}w_{\tau_{1}...\tau_{n}}\ \frac{\beta_{\tau_{1}d^{(1)}}\left(\frac{x_{1}-\underline{X_{1}}}{\overline{X_{1}}-\underline{X_{1}}}\right)}{\overline{X_{1}}-\underline{X_{1}}}\ ...\ \frac{\beta_{\tau_{n}d^{(n)}}\left(\frac{x_{n}-\underline{X_{n}}}{\overline{X_{n}}-\underline{X_{n}}}\right)}{\overline{X_{n}}-\underline{X_{n}}}\] (A1)
where,
* \(t\) iterates through each dimension, \(t\in\{1,...,n\}\).
* \(d^{(t)}\) is the number of degrees in dimension \(t\).
* \(\tau_{t}\) iterates through \(d^{(t)}\) in dimension \(t\); earlier denoted using \(k\), \(l\) in Ning et al. (2018).
* \(w_{\tau_{1}...\tau_{n}}\) is an element in the \(n\)-dimensional matrix of weights \(\mathbf{w}\).
* \(x_{t}\) is the continuous variable used to sample dimension \(t\) of sample size \(N\).
* \(\overline{X_{t}}\) and \(\underline{X_{t}}\) are the upper and lower bounds for dimension \(t\).
* \(\beta_{\tau_{t}d^{(t)}}\) is the beta density function, with one of the shape parameters being \(\tau_{t}\) and the other set by \(d^{(t)}\), and the continuous variable \(x_{t}\) normalized by the upper and lower bounds; a minimal numerical sketch of this basis follows the list.
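The following is a short numerical sketch of the beta-density basis and the 2-D case of Equation A1. It is our illustration rather than the package source; we assume the Beta(\(\tau\), \(d-\tau+1\)) parameterization of Ning et al. (2018), and the weights below are toy placeholders.

```python
import numpy as np
from scipy.stats import beta

def bernstein_basis(x, tau, d, lo, hi):
    """Normalized Bernstein polynomial (beta density) for index tau of degree d,
    evaluated at x rescaled to [0, 1] by the bounds (lo, hi), as in Equation A1.
    We assume the Beta(tau, d - tau + 1) parameterization of Ning et al. (2018)."""
    u = (x - lo) / (hi - lo)
    return beta.pdf(u, tau, d - tau + 1) / (hi - lo)

def joint_density_2d(x1, x2, w, d1, d2, bounds1, bounds2):
    """Evaluate the 2-D case of Equation A1 at a single point (x1, x2)."""
    b1 = np.array([bernstein_basis(x1, t, d1, *bounds1) for t in range(1, d1 + 1)])
    b2 = np.array([bernstein_basis(x2, t, d2, *bounds2) for t in range(1, d2 + 1)])
    return b1 @ w @ b2   # w is the (d1, d2) matrix of non-negative weights

d1 = d2 = 5
w = np.full((d1, d2), 1.0 / (d1 * d2))  # toy weights summing to one
print(joint_density_2d(0.3, 0.7, w, d1, d2, (0.0, 1.0), (0.0, 1.0)))
```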
### Calculating Likelihood
There are two main unknown parameters in this model, the matrix of weights \(\mathbf{w}\), and the choice of degrees for each dimension. To estimate these we continue to expand on the formalism from Ning et al. (2018) and define a likelihood function \(\mathcal{L}\) (similar to their Equation 8),
\[\mathcal{L}(\mathbf{w},d^{(1)},...,d^{(n)}\ |\ \mathbf{X}_{1}^{obs},...,\mathbf{X}_{n}^{obs},\mathbf{\sigma}_{1}^{obs},...,\mathbf{\sigma}_{n}^{obs})=\int_{\underline{X_{1}}}^{\overline{X_{1}}}...\int_{\underline{X_{n}}}^{\overline{X_{n}}}f(\mathbf{X}_{1}^{obs},...,\mathbf{X}_{n}^{obs},x_{1},...,x_{n}|\mathbf{w},d^{(1)},...,d^{(n)},\mathbf{\sigma}_{1}^{obs},...,\mathbf{\sigma}_{n}^{obs})\ \mathrm{d}x_{1}\ ...\ \mathrm{d}x_{n}\] (A2)
\[=\prod_{i=1}^{N}\int_{\underline{X_{1}}}^{\overline{X_{1}}}...\int_{\underline{X_{n}}}^{\overline{X_{n}}}f(X_{1,i}^{obs}|x_{1},\sigma_{1,i}^{obs})\ ...\ f(X_{n,i}^{obs}|x_{n},\sigma_{n,i}^{obs})\times f(x_{1},...,x_{n}|\mathbf{w},d^{(1)},...,d^{(n)})\ \mathrm{d}x_{1}\ ...\ \mathrm{d}x_{n}\] (A3)
where
* \(i\) iterates through each observed point, \(i\in\{1,2,...,N\}\).
* \(X_{t,i}^{obs}\) is the measured quantity \(i\) in dimension \(t\), drawn from \(\mathbf{X}_{t}^{obs}\).
* \(\sigma_{t,i}^{obs}\) is the uncertainty on the measured quantity \(i\) in dimension \(t\), drawn from \(\mathbf{\sigma}_{t}^{obs}\).
Here the measured quantity is expressed as,
\[f(X_{t,i}^{obs}|x_{t},\sigma_{t,i}^{obs})=\frac{1}{\sigma_{t,i}^{obs}}\,\mathcal{N}\left(\frac{X_{t,i}^{obs}-x_{t}}{\sigma_{t,i}^{obs}}\right),\] (A4)
where \(\mathcal{N}\) is the standard normal distribution. Therefore the likelihood function \(\mathcal{L}\) entails the convolution of the measured probability distribution (which is assumed to be normal) with the beta distribution (from the joint distribution Equation A1). Then,
\[\mathcal{L}=\prod_{i=1}^{N}\sum_{\tau_{1}=1}^{d^{(1)}}...\sum_{\tau_{n}=1}^{d^{(n)}}w_{\tau_{1}...\tau_{n}}\int_{\underline{X_{1}}}^{\overline{X_{1}}}...\int_{\underline{X_{n}}}^{\overline{X_{n}}}\prod_{t=1}^{n}\frac{1}{\sigma_{t,i}^{obs}}\frac{\beta_{\tau_{t}d^{(t)}}\left(\frac{x_{t}-\underline{X_{t}}}{\overline{X_{t}}-\underline{X_{t}}}\right)}{\overline{X_{t}}-\underline{X_{t}}}\ \mathcal{N}\left(\frac{X_{t,i}^{obs}-x_{t}}{\sigma_{t,i}^{obs}}\right)\ \mathrm{d}x_{1}\ ...\ \mathrm{d}x_{n}\] (A5)
\[=\prod_{i=1}^{N}\sum_{\tau_{1}=1}^{d^{(1)}}...\sum_{\tau_{n}=1}^{d^{(n)}}w_{\tau_{1}...\tau_{n}}\prod_{t=1}^{n}\mathcal{P}_{t}(\tau_{t},i),\] (A6)
where the integrals factorize across dimensions into the convolved probability for each measurement,
\[\mathcal{P}_{t}(\tau_{t},i)=\int_{\underline{X_{t}}}^{\overline{X_{t}}}\frac{1}{\sigma_{t,i}^{obs}}\frac{\beta_{\tau_{t}d^{(t)}}\left(\frac{x_{t}-\underline{X_{t}}}{\overline{X_{t}}-\underline{X_{t}}}\right)}{\overline{X_{t}}-\underline{X_{t}}}\ \mathcal{N}\left(\frac{X_{t,i}^{obs}-x_{t}}{\sigma_{t,i}^{obs}}\right)\ \mathrm{d}x_{t},\] (A7)
and the products of these probabilities are collected into a matrix \(\mathbf{c}\) with elements
\[c_{i,(\tau_{1}...\tau_{n})}=\prod_{t=1}^{n}\mathcal{P}_{t}(\tau_{t},i).\] (A8)
Equivalent to equation 9 from Ning et al. (2018), the likelihood can then be expressed as the product of this \(\mathbf{c}\) and the weights \(\mathbf{w}\). Here we note that while multiplying with \(\mathbf{c}\) we flatten \(\mathbf{w}\) such that it is a 1-D array of length \(m\), where \(\sum_{j=1}^{m}w_{j}=1\).
\[\log\mathcal{L}=\sum_{i=1}^{N}\log(\mathbf{c}_{i}^{T}\mathbf{w})=\sum_{i=1}^{N}\log\left(\sum_{j=1}^{m}c_{ij}w_{j}\right)\] (A9)
While \(\mathbf{c}\) can be computed by numerical integration for an input sample set, we use the MM (EM) algorithm to maximize the log-likelihood in a computationally efficient manner, which is discussed in the next section.
### Maximizing Likelihood using MM Algorithm
We also modify the method followed to optimize the weights of the Bernstein polynomials. Ning et al. (2018) used the inbuilt R non-linear optimizer Rsolnp, and Kanodia et al. (2019) used the Sequential Least Squares optimization routine in scipy -- fmin_slsqp. Owing to the concavity of the log-likelihood in Equation A9, we adopt the "Majorize-Minimization" (MM) prescription to construct an optimization routine8, where we maximize the log-likelihood through iterations indexed by \(r\), after initializing \(\mathbf{w}\) as:
Footnote 8: See Lange and Zhou (2022) for a review of the MM algorithm
\[\mathbf{w}^{(0)}=\left(\frac{1}{m},\frac{1}{m},...,\frac{1}{m}\right)\] (A10)
then,
\[w_{j}^{(r)}=\frac{1}{N}\sum_{i=1}^{N}\frac{c_{ij}w_{j}^{(r-1)}}{\sum_{k=1}^{m}c_{ik}w_{k}^{(r-1)}}\ \ \forall\ r\in\{1,2,...\}\] (A11)
where we stop iterating when \(|\log\mathcal{L}^{(r)}-\log\mathcal{L}^{(r-1)}|\leq\epsilon\,|\log\mathcal{L}^{(r-1)}|\), with \(\epsilon=10^{-3}\). This typically converges in fewer than 20 iterations, and is much faster than the black-box solvers available in R or python. When benchmarked on the 127-planet sample from Ning et al. (2018), we find the log-likelihood to converge in 0.06 seconds, compared to a few hours using fmin_slsqp.
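A compact numpy sketch of this optimization loop is given below. It is our illustration rather than the package source, and the random \(\mathbf{c}\) matrix is a placeholder for the precomputed integrals of Equation A7.

```python
import numpy as np

def mm_optimize(C, eps=1e-3, max_iter=1000):
    """Maximize log L = sum_i log(C[i] @ w) over simplex weights w (Equation A9)
    via the MM update of Equation A11; C has shape (N, m)."""
    N, m = C.shape
    w = np.full(m, 1.0 / m)            # Equation A10: uniform initialization
    logL = np.sum(np.log(C @ w))
    for _ in range(max_iter):
        denom = C @ w                  # sum_k c_ik w_k for each data point i
        w = w * (C / denom[:, None]).mean(axis=0)   # Equation A11
        new_logL = np.sum(np.log(C @ w))
        if abs(new_logL - logL) <= eps * abs(logL): # stopping criterion
            logL = new_logL
            break
        logL = new_logL
    return w, logL

rng = np.random.default_rng(1)
C = rng.random((100, 20))              # toy stand-in for the integrals c_ij
w, logL = mm_optimize(C)
print(w.sum(), logL)                   # the update preserves sum(w) = 1
```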
## Appendix B Asymmetric Errorbars
Astronomical measurements are rarely associated with Normal (Gaussian) uncertainties. For example, orbital eccentricities need to be positive and finite, which can bias their estimates or posteriors (Lucy and Sweeney, 1971). Often due to instrumental limitations or astrophysical confounding factors, observations are not precise enough to obtain statistically significant (say at \(3\sigma\) or \(5\sigma\)) measurements, in which case measurement upper limits are reported at some confidence level (\(95\%\) or \(99.7\%\)). This is particularly common for planetary mass measurements where \(3\sigma\) (\(99.7\%\)) mass upper limits are often used9, or in protoplanetary disk flux measurements, where for faint disks, the flux upper limits can be reported10.
Footnote 9: See Plavchan et al. (2015) and Figueira (2018) for a review of planetary mass measurements using the RV technique.
Footnote 10: See Miotello et al. (2022) for a review on the measurements of fundamental protoplanetary disk properties.
To incorporate these measurements into our framework, we account for the possibility of asymmetric measurement errors (with \(\sigma_{u}\) and \(\sigma_{l}\)) for each data point (\(\mathbf{X}_{-\mathbf{\sigma}_{l}}^{+\mathbf{\sigma}_{u}}\)) in the sample by modifying Equation A4 as:
\[f(X_{t,i}^{obs}|x_{t},\sigma_{t,i,u}^{obs},\sigma_{t,i,l}^{obs})=\mathcal{N}_{+}\left(\frac{X_{t,i}^{obs}-x_{t}}{\sigma_{t,i,u}^{obs}}\right)+\mathcal{N}_{-}\left(\frac{X_{t,i}^{obs}-x_{t}}{\sigma_{t,i,l}^{obs}}\right),\] (B12)
where \(\mathcal{N}_{+}\) (\(\mathcal{N}_{-}\)) is the upper (lower) standard half-normal distribution. Finally the convolved probability for each measurement (Equation A7) becomes
\[\mathcal{P}_{t}(\tau_{t},i)=\int_{\underline{X_{t}}}^{X_{t,i}^{obs}}\frac{1}{\sigma_{t,i,l}^{obs}}\frac{\beta_{\tau_{t}d^{(t)}}\left(\frac{x_{t}-\underline{X_{t}}}{\overline{X_{t}}-\underline{X_{t}}}\right)}{\overline{X_{t}}-\underline{X_{t}}}\,\mathcal{N}_{-}\left(\frac{X_{t,i}^{obs}-x_{t}}{\sigma_{t,i,l}^{obs}}\right)\mathrm{d}x_{t}+\int_{X_{t,i}^{obs}}^{\overline{X_{t}}}\frac{1}{\sigma_{t,i,u}^{obs}}\frac{\beta_{\tau_{t}d^{(t)}}\left(\frac{x_{t}-\underline{X_{t}}}{\overline{X_{t}}-\underline{X_{t}}}\right)}{\overline{X_{t}}-\underline{X_{t}}}\,\mathcal{N}_{+}\left(\frac{X_{t,i}^{obs}-x_{t}}{\sigma_{t,i,u}^{obs}}\right)\mathrm{d}x_{t}\] (B13)
For example, for typical mass upper limits only the \(2\sigma\) (95%) or \(3\sigma\) (99.7%) upper limit is reported, and not the median value. If we have a measurement with a \(2\sigma\) (95%) upper limit of 10 \(M_{\oplus}\), then we assume \(X^{obs}\equiv\underline{X}\), such that the lower half-normal PDF \(\mathcal{N}_{-}\to 0\) in Equation B13, and we estimate \(\sigma_{u}\) such that the upper half-normal PDF integrates to 97.5% (instead of 95%, since it is a half-normal PDF reproducing the upper limit) at 10 \(M_{\oplus}\). We note the caveat that posteriors in orbital parameters (such as eccentricity and \(\omega\)) are often non-Gaussian, and thus recommend that authors also report posteriors for Monte-Carlo-sampled variables such as \(e\cos\omega\), \(e\sin\omega\), etc., which are more likely to be Gaussian (Lucy and Sweeney, 1971; Fulton et al., 2018).
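One reading of this prescription, for the hypothetical 10 \(M_{\oplus}\) case above, can be coded as follows (a sketch of the confidence-level bookkeeping, not necessarily MRExo's exact implementation):

```python
from scipy.stats import halfnorm

X_lower = 0.0        # domain lower bound, to which X_obs is pinned
upper_limit = 10.0   # hypothetical 2-sigma (95%) mass upper limit in Earth masses

# Choose sigma_u so the upper half-normal reaches the quoted confidence at the
# limit; the 95% quantile of a half-normal equals the 97.5% quantile of the
# corresponding full normal.
sigma_u = (upper_limit - X_lower) / halfnorm.ppf(0.95)
print(sigma_u)   # about 5.1 Earth masses
```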
## Appendix C Degree Selection
The degrees represent the shape of the beta distribution. Modifying previous versions of the algorithm from Ning et al. (2018) and Kanodia et al. (2019), we allow for the possibility of different degrees in each dimension, which should allow the user to use MRExo for density estimation across parameters with different levels of complexity. By default, we sample 10 degree candidates for each dimension and then use the AIC or cross-validation method to pick the optimum degree combination \(d^{(1)},...,d^{(n)}\), where the latter is described by Ning et al. (2018). The AIC metric is given by \(2k-2\ln(\mathcal{L})\), where \(\ln(\mathcal{L})\) is the log-likelihood described in Section A.3, and \(k\) is the effective number of weights or the effective sample size, which we compute using the Design Effect (Kish, 1965), given by \(k=1/\sum w_{i}^{2}\). We show a sample 2-D grid of AIC in Figure 11a. To investigate the impact of degree selection on the conditional distribution within the final contour, i.e., where the AIC values are roughly similar, we fit a range of models for the same dataset with degree choices sampled from the innermost (lowest AIC) contour. Based on Figure 11b, we conclude that the conditional distribution is not very sensitive to the exact choice of degrees when the input dataset has large measurement errors or intrinsic scatter, as seen in Figure 1. Aside from the AIC method, we also extend the k-fold cross-validation approach from Ning et al. (2018) to higher dimensions.
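This metric reduces to a one-function sketch (ours; \(\mathbf{w}\) and the log-likelihood would come from a fitted model, and the values below are toy placeholders):

```python
import numpy as np

def aic(w, logL):
    """AIC = 2k - 2 ln L, with the effective number of weights k = 1 / sum(w_i^2)
    computed via the Design Effect (Kish 1965)."""
    k_eff = 1.0 / np.sum(np.asarray(w)**2)
    return 2.0 * k_eff - 2.0 * logL

print(aic(np.full(100, 0.01), logL=-250.0))  # toy weights and log-likelihood
```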
For MRExo users, we suggest starting with a simple optimization with an equal number of degrees for quick checks (sampling through 10 degree candidates instead of \(10^{n}\) for \(n\)-dimensions). Subsequently, one can perform a more detailed analysis by exploring a full grid of degree candidates which allows for different degrees in each dimension. This has been implemented using a boolean SymmetricDegreePerDimension function call, which then utilizes multiple cores to explore each degree choice with parallel computing implemented through the multiprocessing module in Python.
## Appendix D Understanding the Effect of Sample Size in N-Dimensions
In addition to the uncertainty in the model arising from the measurement errors of the data points, there is also uncertainty due to the finite sample size. This is a byproduct of performing the analysis on a finite sample of points from the target distribution rather than on the target distribution itself. For example, there can be significant variance in the _mean_ prediction in a region of parameter space where there is a relatively low density of data points, even when the data points in that region are known precisely (i.e. have small measurement errors). This is especially problematic when fitting the model in higher dimensions, since the volume of parameter space grows so rapidly that it is often impractical to collect enough data to maintain a high density of samples. To account for this source of uncertainty, in which only one or a few data points strongly dominate the model behavior in some regions, we use bootstrap resampling of the data (with replacement).11
Footnote 11: This has previously been done in e.g. Ning et al. (2018) to quantify the confidence intervals of the mean prediction separately from the predictive intervals around the mean (which capture the intrinsic spread in the data).
In Figure 12, we show the results of the model fits to 100 bootstrap resamplings of the data for the CKS-X dataset (from Section 3.2, with degrees set to 30), in terms of their joint \(P\)-\(R_{p}\) distributions conditioned on a given stellar mass, \(f(P,R_{p}|M_{\star}=0.80\ M_{\odot})\). The left panel shows the mean joint probability density (\(\mu_{f(P,R_{p}|M_{\star})}\)) divided by the standard deviation of the joint probability densities (\(\sigma_{f(P,R_{p}|M_{\star})}\)), over the 100 bootstraps. The reason we use the ratio \(\mu_{f(P,R_{p}|M_{\star})}/\sigma_{f(P,R_{p}|M_{\star})}\) instead of just \(\sigma_{f(P,R_{p}|M_{\star})}\) is because while the latter is low for regions where there is a
high density of data points, it can also be low where there are no data points (and thus both the mean and standard deviation approach zero). In other words, the ratio \(\mu_{f(P,R_{p}|M_{*})}/\sigma_{f(P,R_{p}|M_{*})}\) can be thought of as a measure of the significance of the mean probability density relative to its variation arising from the finite sample size. In this example, we note that while the ratio peaks in the region near \(P\sim 20\) days and \(R_{p}\sim 2.5R_{\oplus}\) (i.e. where there is a high density of planets consisting of the sub-Neptunes above the radius valley), it also peaks in the region of the radius valley _itself_ (\(P\sim 10\) days and \(R_{p}\sim 1.8R_{\oplus}\)). This implies that even though there is a reduced occurrence of planets in the radius valley (the mean probability density is low), the radius valley itself is _robust_ (the standard deviation of the probability density from the bootstraps is even lower).
Figure 11: **a)** Showing a 2-D grid of AIC for the MR dataset shown in (Figure 1), along with contours of similar AIC. In this case, the optimized degrees are roughly equal (20,23) and are marked with an ‘X’. **b)** The change in conditional distribution \(f(m|r=12~{}R_{\oplus})\) as the degree choices are sampled from within the innermost AIC contour **(a)**. We conclude that for such a dataset with a large amount of scatter, the conditional distribution is not too sensitive to the exact choice of degrees.
Figure 12: The effect of finite sample size on the joint \(P\)-\(R_{p}\) distribution conditioned on a given stellar mass, \(f(P,R_{p}|M_{*}=0.80~{}M_{\odot})\), from 100 bootstrap resamplings of the CKS-X dataset (described in Section 3.2). **Left:** the mean divided by the standard deviation of the joint probability densities, \(\mu_{f(P,R_{p}|M_{*})}/\sigma_{f(P,R_{p}|M_{*})}\), over the bootstraps. Higher values denote regions where the model is more robust due to a greater density of data points. **Right:** the mean joint probability density of the bootstraps, with the regions \(\mu_{f(P,R_{p}|M_{*})}/\sigma_{f(P,R_{p}|M_{*})}<3\) masked out.
The ratio \(\mu_{f(P,R_{p}|M_{*})}/\sigma_{f(P,R_{p}|M_{*})}\) can also be used to appropriately mask out the regions where there are too few measurements to provide a robust estimate of the sample density, as shown in the right panel of Figure 12 (where we show the mean joint probability density with the mask retaining regions of \(\mu_{f(P,R_{p}|M_{*})}/\sigma_{f(P,R_{p}|M_{*})}>3\)). One has the flexibility to choose the threshold for \(\mu_{f(P,R_{p}|M_{*})}/\sigma_{f(P,R_{p}|M_{*})}\) depending on how much one wishes to restrict their analyses to regions that are well characterized by the data. The example here illustrates that a choice of "\(3\sigma\)" for the bootstrap mean can effectively mask out the regions where the data are not well sampled. Further, one can eliminate the influence of the poorly sampled regions when making predictions with the model (e.g., when computing the mean prediction marginalized over a given dimension) by multiplying the joint distribution with the mask and renormalizing. A similar procedure for masking out regions of high uncertainty due to finite sample size can be applied to joint probability distributions conditioned on other dimension(s).
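The masking step itself reduces to a few lines of numpy, as in the sketch below (our illustration; the bootstrap density stack is a synthetic placeholder with a smooth "signal" gradient so that some cells pass the cut):

```python
import numpy as np

rng = np.random.default_rng(2)
base = np.linspace(0.0, 5.0, 60)[None, :, None]   # smooth synthetic "signal"
f_boot = base + rng.random((100, 60, 60))         # 100 noisy bootstrap density maps

mu = f_boot.mean(axis=0)
sigma = f_boot.std(axis=0)

# Keep only cells where the mean density is "3 sigma" significant relative to
# its bootstrap scatter, then renormalize the masked density.
ratio = np.divide(mu, sigma, out=np.zeros_like(mu), where=sigma > 0.0)
masked = np.where(ratio > 3.0, mu, 0.0)
masked /= masked.sum()
print(masked.sum())   # unity after renormalization
```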
|
2302.13347 | Tensor network simulation of the quantum Kibble-Zurek quench from the
Mott to superfluid phase in the two-dimensional Bose-Hubbard model | Quantum simulations of the Bose-Hubbard model (BHM) at commensurate filling
can follow spreading of correlations after a sudden quench for times long
enough to estimate their propagation velocities. In this work we perform tensor
network simulation of the quantum Kibble-Zurek (KZ) ramp from the Mott towards
the superfluid phase in the square lattice BHM and demonstrate that even
relatively short ramp/quench times allow one to test the power laws predicted
by the KZ mechanism (KZM). They can be verified for the correlation length and
the excitation energy but the most reliable test is based on the KZM scaling
hypothesis for the single particle correlation function: the correlation
functions for different quench times evaluated at the same scaled time collapse
to the same scaling function of the scaled distance. The scaling of the space
and time variables is done according to the KZ power laws. | Jacek Dziarmaga, Jakub M. Mazur | 2023-02-26T16:41:44Z | http://arxiv.org/abs/2302.13347v2 | # Tensor network simulation of the quantum Kibble-Zurek quench
###### Abstract
Quantum simulations of the Bose-Hubbard model (BHM) at commensurate filling can follow spreading of correlations after a sudden quench for times long enough to estimate their propagation velocities. In this work we perform tensor network simulation of the quantum Kibble-Zurek (KZ) ramp from the Mott towards the superfluid phase in the square lattice BHM and demonstrate that even relatively short ramp/quench times allow one to test the power laws predicted by the KZ mechanism (KZM). They can be verified for the correlation length and the excitation energy but the most reliable test is based on the KZM scaling hypothesis for the single particle correlation function: scaled correlation functions for different quench times evaluated at the same scaled time collapse to the same scaling function of the scaled distance. The scaling of the space and time variables is done according to the KZ power laws.
## I Quantum Kibble-Zurek mechanism
The Kibble-Zurek mechanism (KZM) originated from a scenario for topological defect formation in cosmological phase transitions driven by the expanding and cooling Universe [1]. Kibble considered independent selection of broken symmetry vacua in causally disconnected regions. The result is a mosaic of broken symmetry domains, whose size is limited by the causal horizon, leading to topologically nontrivial configurations. However, the speed of light is not relevant for laboratory experiments in condensed matter systems where, instead, a dynamical theory of continuous phase transitions [2; 3] predicts the scaling of the defect density as a function of the quench rate, employing equilibrium critical exponents. It has been verified by numerous simulations [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] and condensed matter experiments [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. Topological defects play a central role in these studies as they survive inevitable dissipation.
Their role was played down in the quantum KZM (QKZM) that considers quenches across quantum critical points in isolated quantum systems [41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. It was tested by experiments [81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94]. Recent developments in Rydberg atom quantum simulators [93; 94; 95; 96] and coherent D-Wave devices [92; 97] open the possibility of studying the QKZM in two and three spatial dimensions and/or employing it as a test of the quantumness of the simulator [77; 78; 80; 92].
The QKZM can be described in brief as follows. A smooth ramp crossing the critical point at time \(t=0\) can be linearized in its vicinity as
\[\epsilon(t)=\frac{t}{\tau_{Q}}. \tag{1}\]
Here \(\epsilon\) is a dimensionless parameter in the Hamiltonian that measures the distance from the quantum critical point, and \(\tau_{Q}\) is called the quench time. Initially, the system is prepared in its ground state far from the critical point. At first the evolution adiabatically follows the ground state of the changing Hamiltonian until adiabaticity fails near a time \(-\hat{t}\), when the energy gap becomes comparable to the ramp rate: \(\Delta\propto|\epsilon|^{z\nu}\propto|\dot{\epsilon}/\epsilon|=1/|t|\). This KZM timescale is
\[\hat{t}\propto\tau_{Q}^{z\nu/(1+z\nu)}. \tag{2}\]
Here \(z\) and \(\nu\) are the dynamical and the correlation length critical exponents, respectively.
From a causality point of view [71; 2], which is most straightforward when the dynamical exponent \(z=1\) and the excitations have a definite speed of sound at the critical point, the correlation length initially grows as \(\xi\propto|\epsilon|^{-\nu}\), in step with the correlation length in the adiabatic ground state, which would eventually diverge at the critical point. Near \(-\hat{t}\), however, its diverging growth rate,
\[\frac{d\xi}{dt}=\frac{d\epsilon}{dt}\frac{d\xi}{d\epsilon}\propto\tau_{Q}^{-1} \frac{1}{|\epsilon|^{\nu+1}}, \tag{3}\]
exceeds the speed limit at which correlations can spread near the critical point. The subsequent growth is limited by \(2c\) [71], where \(c\) is the relevant speed of sound at the critical point. The correlation length at \(-\hat{t}\),
\[\hat{\xi}\propto\tau_{Q}^{\nu/(1+z\nu)}, \tag{4}\]
defines the characteristic KZ length. Despite the subsequent growth between \(-\hat{t}\) and \(0\), the correlation range when crossing the critical point remains proportional to \(\hat{\xi}\), although it is usually a few times longer [71]. The causality picture can be generalized to \(z\neq 1\), where \(c\) has to be replaced by a relevant speed of excitations that depends on \(\tau_{Q}\) [71].
The two KZ scales are interrelated by
\[\hat{t}\propto\hat{\xi}^{z}. \tag{5}\]
Accordingly, in the KZM regime after \(-\hat{t}\), observables are expected to satisfy the KZM dynamical scaling hypothesis [98; 99; 100] with \(\hat{\xi}\) being the unique scale. For, say, a
two-point observable \(\mathcal{O}_{r}\), where \(r\) is a distance between the two points, it reads
\[\hat{\xi}^{\Delta_{\mathcal{O}}}\langle\psi(t)|\mathcal{O}_{r}|\psi(t)\rangle=F_{\mathcal{O}}\left(t/\hat{\xi}^{z},r/\hat{\xi}\right), \tag{6}\]
where \(|\psi(t)\rangle\) is the state during the quench, \(\Delta_{\mathcal{O}}\) is the scaling dimension, and \(F_{\mathcal{O}}\) is a non-universal scaling function.
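As a toy illustration of how such a scaling collapse is tested numerically (our sketch with synthetic correlation data; we use \(z=1\), \(\nu=0.67\) as quoted for the 2D BHM below, and \(\Delta_{\mathcal{O}}=0\) for simplicity):

```python
import numpy as np

z, nu = 1.0, 0.67                     # critical exponents of the 2D BHM transition
for tau_Q in (32.0, 64.0, 128.0):     # hypothetical quench times
    xi_hat = tau_Q**(nu / (1.0 + z * nu))   # KZ length, Equation (4), up to a prefactor
    r = np.arange(1, 20, dtype=float)       # two-point distances
    C = np.exp(-r / (3.0 * xi_hat))         # synthetic stand-in for <O_r> at fixed t / xi_hat^z
    # With Delta_O = 0 for this toy, the pairs (r / xi_hat, C) for all tau_Q
    # fall on the single scaling curve F(x) = exp(-x / 3), testing Equation (6).
    print(tau_Q, np.round(C[:3], 3))
```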
In this paper we consider the QKZM in the 2D Bose-Hubbard model (BHM) on an infinite square lattice. We assume a commensurate filling of one particle per site with a well-defined Mott-superfluid quantum phase transition. A sudden quench from deep in the Mott phase to the superfluid side of the transition was studied both experimentally [101] and numerically [102]. After the quench the system was allowed to evolve with the final Hamiltonian for a time long enough to estimate the speed at which correlations were spreading, the central phenomenon in the causal interpretation of the QKZM. The aim of the present paper is to demonstrate numerically that these evolution times would also be long enough to verify the KZM scaling hypothesis.
The experimental set up [101], where the initial state is a Mott state with the commensurate \(n=1\) particle per site, provides an opportunity to go beyond the previous experimental test [85] where the initial atomic cloud had non-uniform occupation numbers in the range \(n=1..3\). \(n\approx 3\) in the center of the trap may be just large enough to explain why the measured power laws for relatively fast quenches were consistent with the QKZM but with the mean-field values of the critical exponents. Another attempt was made in Ref. [103] but a limited range of available parameters made the experimental results inconclusive, though in good agreement with numerical simulations of the experimental set-up. On the numerical front a more tractable 1D version was considered [104; 105] where the Kosterlitz-Thouless nature of the transition makes \(\hat{\xi}\) only logarithmically dependent on \(\tau_{Q}\) and, therefore, a clear-cut test of the KZM would require quench times ranging over many orders of magnitude. In contrast, the 2D transition is sharper, the KZM power laws are steeper and their experimental verification should be unambiguous. However, numerical simulation of the non-integrable 2D model is more demanding as the applicability of the numerically exact tensor-network DMRG-like methods becomes severely limited in 2D and one may be forced to resort to the mean-field Gutzwiller ansatz [106] instead. In this work we overcome the limitations of the quasi-1D DMRG by employing a genuine 2D tensor network.
## II 2D tensor network algorithm
Typical quantum many-body states can be represented efficiently by tensor networks [107; 108]. These include the matrix product states (MPS) in one dimension (1D) [109], the projected entangled pair state (PEPS) in 2D [110; 111], and the multi-scale entanglement renormalization ansatz (MERA) [112; 113; 114; 115] incarnating the real-space renormalization group. Recently an infinite PEPS ansatz (iPEPS) was employed to simulate unitary time evolution on infinite lattices [102; 116; 117; 118; 119; 120; 121; 122; 123; 124]. The simulations include spreading of correlations after a sudden quench in the Bose-Hubbard model (BHM) [102] and the transverse field Ising model [124], as well as the KZ ramp in the latter [77]. In this work we perform simulations of the KZ ramp in the BHM that seem timely in view of the new opportunities opened by the recent experiment [101].
We apply the neighbourhood tensor update (NTU) algorithm [122] that was previously used to simulate many-body localization [123] and the KZ ramp in the Ising model [77]. The evolution operator is Suzuki-Trotter decomposed [125; 126; 127] into a product of nearest-neighbor (NN) Trotter gates. As each Trotter gate increases the bond dimension along its NN bond, the bond dimension has to be truncated back to its original value to prevent its exponential growth with time. The truncation has to be done in a way that minimizes the error inflicted on the quantum state. There are several numerical error measures, each of them implying a different algorithm: the simple update (SU) [118; 120], the full update (FU) [116; 128], the neighbourhood tensor update (NTU) [122; 77; 123], or the gradient tensor update (GTU) [129]. The NTU error measure is explained in Fig. 1. This is the efficient and stable algorithm employed here.
In each Trotter gate the Frobenius norm of the difference between the left (\(L\)) and right (\(R\)) hand sides of Fig. 1b is minimized. The norm,
\[\delta=||L-R||, \tag{7}\]
is what we call the NTU error. For a small enough time step it should become proportional to \(dt\). \(\delta\) is an estimate of the error inflicted on local observables by the bond dimension truncation. Accumulating Trotter errors can eventually derail the time evolution. In the worst-case scenario the errors are additive. This motivates an integrated NTU error [130],
\[\Delta=\sum_{i}\delta_{i}, \tag{8}\]
where the sum is over all performed Trotter gates. For a second-order Suzuki-Trotter decomposition on a bipartite square lattice, where each time step is a sequence of 8 NN Trotter gates, i.e., 4 gates per site, \(4\Delta\) estimates the error of a typical local observable. The observables are calculated with the help of the corner transfer matrix renormalization group [131; 132].
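While the NTU minimization itself involves the full neighborhood of tensors, the Frobenius-norm error of Equation (7) is easy to illustrate on a toy bond truncation. In the sketch below (ours), a plain truncated SVD stands in for the actual NTU optimization:

```python
import numpy as np

rng = np.random.default_rng(3)
D, r = 6, 2
# Toy stand-in for a bond enlarged by a Trotter gate: a matrix of rank r*D
# that must be truncated back to bond dimension D.
L = rng.standard_normal((40, r * D)) @ rng.standard_normal((r * D, 40))

# Best rank-D approximation via truncated SVD.
U, s, Vt = np.linalg.svd(L, full_matrices=False)
R = (U[:, :D] * s[:D]) @ Vt[:D]

# Frobenius-norm truncation error, the analogue of Equation (7).
delta = np.linalg.norm(L - R)
print(delta)
```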
## III Bose Hubbard model
The Hamiltonian on an infinite square lattice is
\[H=-J\sum_{\langle i,j\rangle}\left(b_{i}^{\dagger}b_{j}+b_{j}^{\dagger}b_{i} \right)+\frac{U}{2}\sum_{i}n_{i}\left(n_{i}-1\right). \tag{9}\]
Here \(b_{i}^{\dagger}\) and \(b_{i}\) create and annihilate a boson on site \(i\), respectively, \(n_{i}=b_{i}^{\dagger}b_{i}\) is the number operator, \(J\) is the strength of the hopping between nearest-neighbor sites, and \(U\) is the on-site repulsion strength. \(\langle i,j\rangle\) denotes summation over nearest-neighbor (NN) pairs in the hopping energy (every pair contributes to the sum only once). For the commensurate filling of \(n=1\) particles per site the continuous Mott-superfluid quantum phase transition is located at \(U/J=16.7\)[133; 134; 135]. The dynamical exponent is \(z=1\) and the correlation length exponent is \(\nu=0.67\), hence \(\hat{\xi}\propto\tau_{Q}^{0.40}\).
In an optical lattice both \(J\) and \(U\) depend on the recoil energy. Deep in the tight binding regime the dependence of \(J\) is roughly exponential while that of \(U\) is relatively weak (if not negligible). In a tensor network simulation the dimension of the local Hilbert space has to be truncated to a finite physical dimension \(d\), i.e., to occupation numbers \(0,...,d-1\). This is self-consistent on the Mott side of the transition, including the critical point, thanks to limited variance of occupation numbers \(n_{i}\).
Figure 1: **Essential NTU.** In (a) infinite PEPS with tensors \(A\) (lighter green) and \(B\) (darker green) on the two sublattices. The red lines are physical spin indices and the black lines are bond indices, with bond dimension D, contracting NN sites. In one of Suzuki-Trotter steps a Trotter gate is applied to every NN pair of \(A\)-\(B\) tensors along every horizontal row (but not to horizontal \(B\)-\(A\) pairs). The gate can be represented by a contraction of two tensors by an index with dimension \(r\). When the two tensors are absorbed into tensors \(A\) and \(B\) the bond dimension between them increases from \(D\) to \(r\times D\). In (b) the \(A\)-\(B\) pair – with a Trotter gate applied to it – is approximated by a pair of new tensors, \(A^{\prime}\) (lighter blue) and \(B^{\prime}\) (darker blue), connected by an index with the original dimension \(D\). The new tensors are optimized to minimize the difference between the two networks in (b). After \(A^{\prime}\) and \(B^{\prime}\) are converged, they replace all tensors \(A\) and \(B\) in a new iPEPS shown in (c). Now the next Trotter gate can be applied. The dominant numerical cost of the NTU procedure scales as \(D^{8}\) and is fully parallelizable [122].
Figure 2: **Sudden quench to \(U/J=19.6\).** In (a) the NTU error and in (b) the NN single particle correlator as functions of time. The correlator appears converged in \(D\) already for \(D=6..8\) but the NTU error in this range is still unacceptable (\(4\Delta\approx 0.1\)) and, indeed, for higher \(D=11..14\) the correlator finds a new converged curve, this time with acceptable errors (\(4\Delta\approx 0.01\)). Here we set \(J=1\), \(U=19.6\), \(Jdt=0.005\), and physical dimension \(d=3\).
## IV Sudden quench revisited
As a benchmark, but also to make contact with Ref. [102], we begin with a sudden quench from deep in the Mott insulator phase to the superfluid. In this section we define the energy scale by setting \(U=1\). The initial Hamiltonian has zero tunnelling, \(J=0\), and the initial ground state is a Fock state:
\[|11111111....\rangle \tag{10}\]
with one particle per site. This is a product state that can be represented by an initial iPEPS with bond dimension 1. Then non-zero tunnelling is suddenly switched on at \(t=0\). As in Ref. [102] we consider \(U/J=19.6\), i.e., a quench within the Mott phase. This quench has been performed experimentally in Ref. [101], although with a somewhat smoother ramp.
After the quench we follow time evolution of the single particle correlation function
\[C_{R}^{sp}=\frac{1}{2}\langle\psi(t)|b_{i}^{\dagger}b_{j}+b_{j}^{\dagger}b_{i} |\psi(t)\rangle. \tag{11}\]
Here \(R\) is the distance between sites \(i\) and \(j\). Figure 2 shows the time evolution of the NN correlator, \(C_{1}^{sp}\), up to \(Jt=0.5\). Acceptable convergence in this time window requires bond dimension at least \(D=11...14\). If we were looking just at \(C_{1}^{sp}(Jt)\) then it might appear converged already for \(D=6...9\), but closer inspection of the corresponding NTU error in the bottom panel of Fig. 2 reveals that the NTU error does not improve in this range of \(D\), as if adding more bond dimension did not improve the expressive power of the iPEPS ansatz for this problem. Hidden symmetries may require increasing \(D\) not by 1 but by 2 or more in order to accommodate not just one more virtual state but a whole multiplet before the expressive power is improved [136]. The error begins to improve again from \(D=10\) and already \(D=11\) brings it down to an acceptable level. At the same time, the curves \(C_{1}^{sp}(Jt)\) appear to converge again, but this time with an acceptable level of the integrated NTU error.
This test shows that a combination of the NTU algorithm, which is more \(D\)-efficient than the simple update used in Ref. [102], and higher bond dimensions can significantly increase the simulable evolution time. The result encourages us to step beyond the sudden quench and attempt smooth KZ ramps that, by their very nature, take longer times.
## V Kibble-Zurek ramp
The Kibble-Zurek quench also begins from the product state (10) but the hopping rate is increased by a smooth ramp instead of the sudden jump. Near the critical point the ramp can be approximated by a linear slope. It is convenient to parameterize the ramp as
\[J=J_{c}\left[1+\epsilon(t)\right], \tag{12}\]
where \(J_{c}\) is the critical point and \(\epsilon(t)\) is varied from \(-1\) to \(\infty\) either as a straight linear ramp \(\epsilon(t)=t/\tau_{Q}\) or, for instance,
\[\epsilon(t)=\left\{\begin{array}{ll}\frac{t}{\tau_{Q}}-\frac{4}{27}\frac{t^ {3}}{\tau_{Q}^{3}}&,\text{when}\ \ t<0\\ \frac{t}{\tau_{Q}}&,\text{when}\ \ t\geq 0\end{array}\right. \tag{13}\]
The former is just linear while the latter can be considered approximately linear in the neighborhood of the critical point at \(t=0\), where \(\epsilon(t)\approx t/\tau_{Q}\), provided that the quench time \(\tau_{Q}\) is long enough for \(\hat{t}\) in (2) to fall within the regime of validity of the linearization. The additional cubic term in (13) was added to make the first derivative equal to zero at the beginning of the ramp, when \(t=-3\tau_{Q}/2\). This smoothing prevents extra initial excitations that would be created by the abrupt beginning of the linear ramp and might overshadow the KZM excitations created near the critical point. They do not pose a problem for long enough \(\tau_{Q}\), when their energy, proportional to \(\tau_{Q}^{-2}\), becomes negligible compared to the KZM excitation energy proportional to \(\hat{\xi}^{-3}\propto\tau_{Q}^{-1.2}\), but extra bond dimension would be necessary from the very beginning of the tensor network simulation in order to accommodate their extra entanglement. In principle the extra entanglement is not a problem for a quantum simulator/experiment, but the relative suppression of the abrupt excitation still requires longer ramp times that are limited by dissipation. In either case there are good reasons to begin the ramp smoothly.
Figure 3: **KZ ramp - single particle correlations at \(\mathbf{t=0}\).** The figure shows the single particle correlation functions at the scaled time \(t/\hat{t}=-1\) for several values of the quench time, \(\tau_{Q}\). The correlator is scaled according to the more general KZM scaling hypothesis (15). The scaling makes the plots for different \(\tau_{Q}\) collapse to a single scaling function \(F_{C}(-1,R/\hat{\xi})\). Here we set \(U=1\), \(Jdt=0.005\), physical dimension \(d=3\), and bond dimension \(D=14\).
Furthermore, as the on-site repulsion strength, \(U\), depends on the recoil energy relatively weakly -- when compared to the hopping rate -- here we conveniently assume that it is constant and choose the unit of energy such that \(U=1\). Even if we allowed \(U\) to be time-dependent it could be linearized near the critical point and the only effect of the time dependence would be effective multiplication of \(\tau_{Q}\) by a constant factor. This factor would not affect the KZM scaling hypothesis.
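For illustration, the ramp protocol of Eqs. (12)-(13) can be coded up directly in units where \(U=1\); the values of \(J_{c}\) and \(\tau_{Q}\) below are set only as examples.

```python
import numpy as np

# Sketch of the ramp of Eqs. (12)-(13); J_c and tau_Q are illustrative.
def epsilon(t, tau_Q):
    x = np.asarray(t) / tau_Q
    return np.where(x < 0, x - (4.0 / 27.0) * x**3, x)

def hopping(t, tau_Q, J_c=1.0 / 16.7):
    return J_c * (1.0 + epsilon(t, tau_Q))  # Eq. (12), in units of U = 1
```

One can check that \(\epsilon(-3\tau_{Q}/2)=-1\) and that its first derivative vanishes there, so the ramp starts from \(J=0\) with zero slope, as discussed above.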
In our simulations the tunneling rate is smoothly ramped up to the critical point at \(J_{c}=1/16.7\) with a time step \(dt=0.1\) that is short enough for the second order Suzuki-Trotter scheme to be accurate. As our aim is to verify the KZM power laws, quench times are incremented geometrically as \(\tau_{Q}=0.1\cdot 2^{m/2}\), where \(m\) is a non-negative integer up to \(16\). Longer \(\tau_{Q}\) require larger bond dimensions, up to \(D=14\), as they allow for a longer KZM correlation length \(\hat{\xi}\) to build up. The accuracy/convergence was monitored with the NTU error as for the sudden quench. We present results obtained with the physical dimension \(d=3\). Selective tests with \(d=4\) show that \(d=3\) is accurate enough, consistent with the small variance of occupation numbers in our simulations.
Our main focus is the single particle correlation function. It is the most sensitive probe of the KZM as it quantifies just how the long range order builds up when the system is driven across the Mott-superfluid transition. In particular, according to the general KZM scaling hypothesis (6), when the ramp is crossing the critical point at \(t=0\) the correlator should satisfy:
\[\hat{\xi}^{2\Delta_{sp}}C_{R}^{sp}(t=0)=f_{C}\left(R/\hat{\xi}\right). \tag{14}\]
Here \(f_{C}\) is a non-universal scaling function, \(\Delta_{sp}\) is the scaling dimension of the single particle correlator, and \(\hat{\xi}\propto\tau_{Q}^{\nu/(1+z\nu)}\) is the KZ correlation length. The correlator at the critical point is plotted in Fig. 3. The top panel shows raw data for \(C_{R}^{sp}(t=0)\) while the bottom one shows the same data scaled according to (14). In the rescaling we use \(\hat{\xi}=1\cdot\tau_{Q}^{\nu/(1+z\nu)}\) and \(\hat{t}=1\cdot\hat{\xi}^{z}\) with the numerical coefficients set equal to \(1\) for definiteness. For the single particle correlation function \(2\Delta_{sp}=1+\eta\), where \(\eta=0.038176(44)\) [133; 134; 135]. The collapse of the plots with different \(\tau_{Q}\) demonstrates that we reached quench times long enough for the KZM scaling hypothesis to hold, as their \(\hat{t}\) is small enough to fall within the critical regime near the transition.
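A minimal sketch of this collapse test, assuming the correlators for each \(\tau_{Q}\) are already measured on integer distances \(R\):

```python
import numpy as np

z, nu, eta = 1.0, 0.67, 0.038176  # critical exponents quoted in the text

# Sketch of the collapse test of Eq. (14): `correlators` maps tau_Q to the
# array C_R^sp(t=0) measured at distances R = 1, 2, ...
def rescale(correlators):
    curves = {}
    for tau_Q, C in correlators.items():
        xi_hat = tau_Q ** (nu / (1.0 + z * nu))  # KZ correlation length
        R = np.arange(1, len(C) + 1)
        curves[tau_Q] = (R / xi_hat, xi_hat ** (1.0 + eta) * C)
    return curves  # plotted together, the curves should collapse
```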
Figure 4: **KZ ramp - correlation length.** The single particle correlation function was fitted with exponentials, \(C_{R}^{sp}(t)\approx A\exp(-R/\xi)\), to obtain/define the time dependence of the correlation length, \(\xi(t)\), during the KZ ramp. In (a) the bare \(\xi(t)\) is shown for several quench times \(\tau_{Q}\). In (b) the scaled correlation length is shown as a function of scaled time. For the slowest quenches the scaled plots collapse in the KZ regime after \(-\hat{t}\). In this regime they are fitted linearly with the dashed line. Its slope yields velocity \(v=0.11(3)\) for \((U/J)_{c}=16.7\) and \(U=1\) or, more generally, \(v=1.8(5)J\) for \((U/J)_{c}=16.7\).
Although the correlation function is not quite exponential, an exponential profile seems to be a reasonably good first approximation that allows one to characterize the range of correlations by a single number. In order to ignore numerical noise in the correlator's long range tail we define the correlation length as \(\xi(t)=1/\ln\left[C_{1}^{sp}(t)/C_{2}^{sp}(t)\right]\). The length is plotted in the top panel of Fig. 4 for several different quench times. Furthermore, motivated by a more general KZM scaling hypothesis,
\[\hat{\xi}^{2\Delta_{sp}}C_{R}^{sp}(t)=F_{C}\left(t/\hat{t},R/\hat{\xi}\right), \tag{15}\]
which should hold in the KZM regime after \(-\hat{t}\), in the bottom panel of Fig. 4 we show the scaled correlation length, \(\xi(t)/\hat{\xi}\), as a function of the scaled time, \(t/\hat{t}\). According to the hypothesis, for long enough \(\tau_{Q}\) the scaled plots should collapse in the KZM regime and, indeed, this is what we can see for the slowest quenches. The collapse allows a linear fit to the collapsed sections of the plots after \(-\hat{t}\). Our estimate of the slope is \(v=1.8(5)J\). According to the causality version of KZM, the slope is upper bounded by twice the sound velocity at the critical point and, indeed, it is lower than the Lieb-Robinson velocity \(6(2)J\) predicted and measured in Refs. [101] and [102], respectively. However, it is strikingly low compared to the upper bound, at odds with many other examples [71]. We will come back to this issue below.
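For concreteness, a sketch of this procedure, assuming exponential decay so that \(\xi=1/\ln(C_{1}^{sp}/C_{2}^{sp})\), with the velocity read off as the slope of the collapsed scaled data after \(-\hat{t}\):

```python
import numpy as np

def correlation_length(C1, C2):
    return 1.0 / np.log(C1 / C2)  # assumes C_R ~ A * exp(-R / xi)

# Slope of the collapsed xi(t)/xi_hat vs t/t_hat data in the KZ regime.
def kz_slope(t, xi, tau_Q, z=1.0, nu=0.67):
    xi_hat = tau_Q ** (nu / (1.0 + z * nu))
    t_hat = xi_hat ** z
    s = t / t_hat
    mask = s > -1.0  # KZ regime after -t_hat
    slope, _ = np.polyfit(s[mask], (xi / xi_hat)[mask], 1)
    return slope     # estimate of the velocity v
```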
In the meantime, we observe that the collapse in the bottom panel of Fig. 4 is consistent with the general KZM scaling hypothesis (15). This conclusion is further corroborated by a direct test -- without any assumption of an exponential or any other specific profile -- made in Fig. 5, where scaled correlation functions for different \(\tau_{Q}\), but for the same scaled time \(t/\hat{t}=-1\), are plotted together. Their collapse appears even better than the later one at \(t/\hat{t}=0\) in Fig. 3. These earlier states are less entangled and their correlations are shorter, hence their representation by the tensor network is more accurate. Similar collapses can be obtained in the whole range \(t/\hat{t}\in[-1,0]\), completing the demonstration of the KZM scaling hypothesis for the single particle correlation function.
The collapsed correlation functions in Figs. 3 and 5, equal to the scaling functions in (15), provide a more controlled way to estimate the propagation speed [71]. For a small threshold value \(h>0\) and the two values of the scaled time, \(t/\hat{t}=-1,0\), equation
\[F_{C}\left(t/\hat{t},R/\hat{\xi}\right)=h \tag{16}\]
can be solved with respect to the scaled distance \(R/\hat{\xi}\). Given that for \(z=1\) we have \(\hat{t}=\hat{\xi}\), the increase of the scaled distance between \(t/\hat{t}=-1\) and \(t/\hat{t}=0\) is the propagation speed, \(v(h)\). Gradually decreasing \(h\) allows one to probe the speed at which farther correlations are spreading and thus make contact with the Lieb-Robinson bound on the asymptote of the correlation function. Figure 6 shows a graphical solution of (16), including its error bars, that results in a series of estimates: \(v(0.3)=0.19(4)\), \(v(0.2)=0.23(5)\), \(v(0.1)=0.30(7)\), \(v(0.05)=0.34(11)\) in our units where \(U=1\). The same speed estimates for an arbitrary \(U\) are listed in Table 1. The speed appears to increase as the threshold \(h\) is lowered but, at the same time, its error bars increase due to the growing relative significance of numerical uncertainties farther in the correlator tail. Within the error bars the speed is approaching the estimate \(6(2)J\) [101; 102] that is its upper speed limit according to the causal picture of the Kibble-Zurek mechanism.
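A sketch of this threshold construction, assuming the two collapsed scaling functions are tabulated on a common grid of scaled distances and decay monotonically through the threshold:

```python
import numpy as np

# x is the common grid of scaled distances R/xi_hat; F starts above the
# threshold h and decays through it.
def crossing(x, F, h):
    i = np.argmax(F < h)  # first grid point below the threshold
    return np.interp(h, [F[i], F[i - 1]], [x[i], x[i - 1]])

def speed(x, F_at_0, F_at_m1, h):
    # for z = 1, t_hat = xi_hat, so the gain in scaled distance between
    # t/t_hat = -1 and 0 is directly the propagation speed v(h)
    return crossing(x, F_at_0, h) - crossing(x, F_at_m1, h)
```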
Figure 6: **KZ ramp - correlation growth.** Here we collect together the collapsed scaled correlation functions at the scaled times \(t/\hat{t}=-1,0\) in Figs. 5 and 3. Horizontal lines mark values of threshold \(h\) in Eq. (16) that are used to estimate the increase of the correlation range between the two scaled times. Each pair of vertical line segments delimits the range of scaled correlation distance, \(R/\hat{\xi}\), where the horizontal line is estimated to cross with the collapsed scaled correlation function at either \(t/\hat{t}=-1\) or \(t/\hat{t}=0\). For each \(h\) the difference between the two distances is the speed at which the correlation range is growing between the two scaled times. The speeds are listed in Table 1 together with their error bars.
In addition to the single particle correlation function we can also consider excitation energy per site:
\[Q(t)=\lim_{N\to\infty}N^{-1}\left[\langle\psi(t)|H(t)|\psi(t)\rangle-E_{\rm GS}( t)\right]. \tag{17}\]
Here \(E_{\rm GS}(t)\) is the ground state energy of the instantaneous Hamiltonian \(H(t)\) and \(N\) is the number of lattice sites. In the KZ regime after \(-\hat{t}\) the excitation energy should satisfy a scaling hypothesis
\[\hat{\xi}^{z+d}Q(t)=F_{Q}\left(t/\hat{t}\right), \tag{18}\]
where \(F_{Q}\) is a non-universal scaling function. On the one hand, with \(z+d=3\) the dependence of \(Q\) on \(\tau_{Q}\) is very steep, allowing for a clear-cut test but, on the other hand, with increasing \(\tau_{Q}\) the excitation energy quickly becomes a small difference of two large numbers that is prone to numerical errors. Nevertheless, in the top panel of Fig. 7 we plot the excitation energy as a function of time for several values of the quench time and in the bottom panel we show the same plots after the rescaling. The scaled plots demonstrate a rather convincing collapse in the KZM regime after \(t/\hat{t}=-1\).
## VI Thermalization
In order to follow thermalization in the non-integrable model the KZM ramp can be stopped either on the superfluid side of the transition or even right at the critical point, where the thermalization should be the most expedient, unhampered by any gap in the energy spectrum. The following unitary evolution with the critical Hamiltonian conserves the KZM excitation energy density \(Q\propto\hat{\xi}^{-(z+d)}\) while the state evolves into a thermal one with temperature \(T\). The critical dispersion, \(\omega\propto k^{z}\), means thermal excitations up to \(k_{T}\propto T^{1/z}\) and thermal excitation energy \(U_{T}\propto T^{(z+d)/z}\). Equating \(Q\) with \(U_{T}\) we obtain a "KZ temperature"
\[T_{\rm KZ}\propto\hat{\xi}^{-z}\propto\tau_{Q}^{-z\nu/(1+z\nu)} \tag{19}\]
and a thermal correlation range \(\xi_{T}\propto k_{T}^{-1}\propto\hat{\xi}\). Despite this proportionality the thermal correlator is not the same as the KZ one immediately after stopping the ramp. Interestingly, similar thermalization at and near the critical point but after a sudden quench was considered in Ref. [137].
## VII Conclusion
The state of the art quantum simulators of the Bose-Hubbard model at commensurate filling allow one to follow the spreading of correlations after a sudden quench for times long enough to estimate their propagation velocities. Our 2D tensor network simulations demonstrate that the experimental times would also be long enough to test the quantum Kibble-Zurek mechanism by verifying the KZM scaling hypothesis for the single particle correlation function. The experiment could push this test beyond the limited range of quench times achievable by the classical simulation, where the KZM scaling hypothesis should become even more convincing. It could also follow thermalization of the KZ excitations after the ramp is stopped, which is a notoriously difficult task for the classical simulation due to the rapid growth of entanglement. These are challenges worthy of a genuine quantum simulation.
\begin{table}
\begin{tabular}{|c|c|} \hline h & v \\ \hline 0.3 & 3.2(7)J \\ 0.2 & 3.8(8)J \\ 0.1 & 5.0(11)J \\ 0.05 & 5.6(17)J \\ \hline \end{tabular}
\end{table}
Table 1: **KZ ramp - correlation growth.** The speed at which the single particle correlations are spreading in the KZ regime, estimated in Fig. 6 for decreasing values of the threshold \(h\) in Eq. (16). The brackets enclose maximal error bars of the last digit. The upper speed limit is \(6(2)J\) according to Refs. [101] and [102].
Figure 7: **KZ ramp - excitation energy per site.** Both panels show the excitation energy per site. In the top panel the bare energy \(Q\) is shown as a function of time \(t\). In the bottom panel both the energy and the time are scaled according to the KZM scaling hypothesis (18). The scaling makes the plots with different \(\tau_{Q}\) collapse in the KZM regime after \(t/\hat{t}=-1\). Here we set \(U=1\), \(Jdt=0.005\), physical dimension \(d=3\), and bond dimension \(D=14\).
###### Acknowledgements.
We are indebted to Ryu Kaneko for comments on the speed limit for correlations. This research was supported in part by the National Science Centre (NCN), Poland under project 2019/35/B/ST3/01028 (J.M.) and project 2021/03/Y/ST2/00184 within the QuantERA II Programme that has received funding from the European Union Horizon 2020 research and innovation programme under Grant Agreement No 101017733 (J.D.). The research was also supported by a grant from the Priority Research Area DigiWorld under the Strategic Programme Excellence Initiative at Jagiellonian University (J.D.).
|
2308.06077 | Fly-Swat or Cannon? Cost-Effective Language Model Choice via
Meta-Modeling | Generative language models (LMs) have become omnipresent across data science.
For a wide variety of tasks, inputs can be phrased as natural language prompts
for an LM, from whose output the solution can then be extracted. LM performance
has consistently been increasing with model size - but so has the monetary cost
of querying the ever larger models. Importantly, however, not all inputs are
equally hard: some require larger LMs for obtaining a satisfactory solution,
whereas for others smaller LMs suffice. Based on this fact, we design a
framework for cost-effective language model choice, called "Fly-swat or cannon"
(FORC). Given a set of inputs and a set of candidate LMs, FORC judiciously
assigns each input to an LM predicted to do well on the input according to a
so-called meta-model, aiming to achieve high overall performance at low cost.
The cost-performance tradeoff can be flexibly tuned by the user. Options
include, among others, maximizing total expected performance (or the number of
processed inputs) while staying within a given cost budget, or minimizing total
cost while processing all inputs. We evaluate FORC on 14 datasets covering five
natural language tasks, using four candidate LMs of vastly different size and
cost. With FORC, we match the performance of the largest available LM while
achieving a cost reduction of 63%. Via our publicly available library,
researchers as well as practitioners can thus save large amounts of money
without sacrificing performance. | Marija Šakota, Maxime Peyrard, Robert West | 2023-08-11T11:29:51Z | http://arxiv.org/abs/2308.06077v3 | # Fly-Swat or Cannon? Cost-Effective Language Model Choice via Meta-Modeling
###### Abstract.
Generative language models (LMs) have become omnipresent across data science. For a wide variety of tasks, inputs can be phrased as natural language prompts for an LM, from whose output the solution can then be extracted. LM performance has consistently been increasing with model size--but so has the monetary cost of querying the ever larger models. Importantly, however, not all inputs are equally hard: some require larger LMs for obtaining a satisfactory solution, whereas for others smaller LMs suffice. Based on this fact, we design a framework for _Cost-Effective Language Model Choice_ (CELMOC). Given a set of inputs and a set of candidate LMs, CELMOC judiciously assigns each input to an LM predicted to do well on the input according to a so-called meta-model, aiming to achieve high overall performance at low cost. The cost-performance trade-off can be flexibly tuned by the user. Options include, among others, maximizing total expected performance (or the number of processed inputs) while staying within a given cost budget, or minimizing total cost while processing all inputs. We evaluate CELMOC on 14 datasets covering five natural language tasks, using four candidate LMs of vastly different size and cost. With CELMOC, we match the performance of the largest available LM while achieving a cost reduction of 63%. Via our publicly available library,1 researchers as well as practitioners can thus save large amounts of money without sacrificing performance.
Footnote 1: [https://github.com/epfl-dlab/CELMOC](https://github.com/epfl-dlab/CELMOC)
## 1. Introduction
In recent years, a clear trend has emerged in natural language processing and has subsequently spread across data science, characterized by the increasing prominence of large language models (LLMs). With the wide range of applications these models are capable of solving, many companies have contributed to this trend by offering their own LLMs as services. As a result, the landscape of language processing is undergoing a dynamic shift, with an increasing number of LLMs becoming available on the market.
As the size of LLMs continues to expand, their capabilities are undergoing substantial enhancements, leading to notable improvements across various language-related tasks [2; 5; 10; 21]. With the growth in the number of parameters, their ability to understand complex contexts has improved significantly. These bigger models are better at picking up subtle changes in meaning, which helps them give more relevant responses that fit the context. Moreover, their increased size allows LLMs to generate text that is more coherent and fluent, often resembling a human-like conversation. Finally, as training datasets have been getting larger, the amount of knowledge baked into LLM parameters has also broadened. This results in improved factual accuracy and a better capability to provide well-informed answers to a wider range of questions.
However, as the use of LLMs becomes more common, there is a relevant concern about the rising costs of running them. State-of-the-art language models have hundreds of billions of parameters, which means they need a lot of computing power, leading to higher expenses. For instance, running GPT-4 with an 8K-token context is 20 times more expensive than running GPT-3.5 with a 4K-token context for the same input size.2 Even though LLMs excel at handling complex language tasks, it is important to realize that not every situation needs their massive capabilities. Smaller language models (LMs) are generally good at handling simpler language tasks and can be a more cost-effective choice in cases where full LLM power is not necessary. For example, on the 14 datasets that we examined with 4 different language models, 33% of data samples are successfully solved both by the biggest model and at least one of the smaller ones, while 11% are exclusively solved by one or more of the smaller models, with the biggest model failing to answer correctly (cf. Sec. 4.2).
Footnote 2: [https://openai.com/pricing](https://openai.com/pricing)
There is thus an opportunity to save cost by assigning each input to the cheapest model able to solve it. The problem in realizing this opportunity is how to predict ahead of time which models would correctly solve which inputs--without actually running each LM on each input, which would defeat the purpose. Chen et al. [3] proposed to employ increasingly expensive LMs in a cascade until a satisfactory result is obtained. This still requires querying potentially multiple LMs per input, something we set out to avoid in our approach.
Figure 1. Overview of CELMOC, our framework for cost-effective LM choice (details in Sec. 3.2). CELMOC consists of two steps: (1) Predict cost and performance of each candidate LM on each input query. Cost prediction is done using API pricing. Performance prediction is done using a _meta-model_, trained ahead of time (not shown) based on existing pairs of LM queries and LM performance scores. (2) Assign each query to at most one LM using an assignment strategy, aiming for high total expected performance at low cost. Note that neither of the two steps requires interacting with the LMs; queries are fed to the assigned LMs only after the above steps.
**Proposed solution.** In this paper, we propose a novel approach for saving LM costs by introducing _Cost-Effective Language Model Choice_ (CELMOC), a cost-aware framework that aims to assign each query from a query set provided by the user to an appropriate LM, without the need to run any of the LMs in the process. As shown in Fig. 1, CELMOC consists of two steps: First, we predict the cost and performance of each candidate LM on each input query. Cost prediction is done using the LM provider's API pricing; performance prediction is done using a _meta-model_, a regression model trained ahead of time based on existing pairs of LM queries and LM performance scores. Second, we assign each query to at most one LM using an assignment strategy, aiming for high total expected performance at low cost. The cost-performance tradeoff can be tuned by choosing from multiple strategies, each formalized as an optimization problem.
**Advantages of CELMOC.** As mentioned, CELMOC does not require any interaction with the LMs when assigning queries to LMs. Rather, each query is fed to its assigned LM once CELMOC has terminated. This opens up the possibility for greater budget savings in comparison to existing work. Next, the meta-model can be trained on inputs from the union of a wide range of tasks and datasets, without using any information about the task or dataset from which an input was sourced. This way, at run time, CELMOC can handle inputs without having to know which tasks they correspond to, and as we show, CELMOC even works on inputs from tasks not seen during meta-model training. Finally, compared to prior work, we offer the user more flexibility by providing them with more options for cost and performance constraints and preferences.
**Results.** With the help of CELMOC, on the 14 datasets, over 5 different task types, with 4 different LMs available, we are able to reduce the cost of running the test dataset by 63% while maintaining the same performance as the biggest LM. To facilitate the use of CELMOC, we release the library as open-source code.1
**Contributions.** Briefly, our contributions are the following:
1. We propose CELMOC, a cost-aware framework that automatically assigns input queries to suitable LMs without the need to run LMs in the process.
2. We show that, by employing CELMOC, we are able to substantially reduce the cost of running queries from different tasks, while maintaining the performance equal to the biggest LM available in our evaluation.
3. We release a library for our framework, enabling users to run the existing setting, or train different meta-models tailored to their needs.
## 2. Background and Related Work
### LM evaluation
Thorough assessment of LMs is a complex but necessary task that can uncover room for investigation and improvement in LMs' performance. With the appearance of general purpose LMs, the need for an all-encompassing evaluation across different tasks that sets a common standard became apparent. There have been several efforts to simplify this evaluation process. For instance, EleutherAI's Language Model Evaluation Harness [6], Hugging Face's Evaluate library [22], and BIG-Bench [20] all offer convenient open-source repositories that enable common evaluation and encourage collaborative advancements in the field.
Liang et al. [14] performed an exhaustive evaluation of a wide set of LMs, termed _Holistic Evaluation of Language Models_ (HELM). They evaluated LMs of different sizes and capabilities, on various datasets and tasks, from many performance aspects, such as accuracy, robustness, fairness, etc. This enabled a standardized view of LM performance and a more reliable way to compare LMs.
Their results reveal that smaller LMs are able to solve some tasks as well as bigger LMs. This is an indicator that it is unnecessary to use the biggest LM for every scenario. However, for more complex tasks, which smaller LMs mostly fail to solve, we should use bigger, more capable LMs. These insights demonstrate that there is a need for an automated framework that would help us decide when to use which LM.
The results align with the findings from the Inverse Scaling Prize competition [16]. Results from the two rounds of competition [17; 18] reveal numerous tasks where larger LMs perform worse than their smaller counterparts.
### Inference cost optimization
The majority of the cost for an LM product comes from inference, not training [21]. Despite that, most of the existing research focuses on minimizing the training cost [10; 12; 21].
The high cost of executing these models has driven the development of several inference cost reduction techniques, such as quantization [1; 7; 23], distillation [8; 11; 19], and pruning [9; 13].
Zong et al. [24] include not only the training cost, but also annotation and inference cost in their empirical analysis. They focus on only one type of task, text classification, and evaluate different types of models, including non-neural models, an LLM, and smaller language models. They give insights on which model would be the best choice to train or use in a specific real-world scenario. Contrary to this, our work focuses on LMs only, developing a framework that automatically decides which LM is suitable for the tasks presented under inference budget constraints.
Similarly to our work, Chen et al. [3] attempt to develop a framework working with LMs only. They propose using LMs in a cascade, i.e., sending a query to the available LMs sequentially until the reliability of an answer exceeds some predefined threshold. Their approach is specialized, meaning that parts of the framework are adapted specifically for one dataset during training. We, however, focus on determining which LM to use prior to sending the query to any of them. Because we do not need to run any of the LMs to find the best one, our approach has the potential to be much cheaper. In addition, our approach is tested on a much wider range of datasets, and we aim to develop a general framework, without the need to retrain our meta-model for each dataset separately.
## 3. Method
### Problem setting
**Tasks.** We start with the assumption that the user has a set of queries they want to solve using an LM. Many common tasks, such as question answering (QA), reasoning, and summarization, can be rewritten in the format of a query. For example, to transform a summarization task into the query format, one can construct a prompt to the LM by adding an instruction such as "Summarize the above article in 1 sentence." to the text that needs to be summarized. QA tasks can often be sent to the LM in their original form.
We work under the premise that all of the queries are evaluated using just one metric. This way, we can be sure that the evaluation process is straightforward and consistent across all the tasks.
**LMs.** We consider a scenario where a user has access to \(k\) LMs (\(l_{i}\) for \(i=1,\ \dots,\ k\)) that could potentially solve the set of \(m\) queries (\(q_{j}\) for \(j=1,\ \dots,\ m\)). These LMs can differ in their capabilities and size. Each LM is associated with its own cost. The cost can be defined by the user, e.g., by utilizing the price of the API for the LM.
**Goal.** Our goal is to use this pool of several LMs to solve queries in a budget-conscious way. We aim to assign each query \(q_{j}\) to at most one \(l_{i}\) while respecting the cost-performance requirements set by the user.
### Framework setting
Our framework consists of three main components: a meta-model, cost estimation, and an assignment strategy. In Fig. 1, we illustrate the way CELMOC works. First, the user needs to specify a set of queries they want to solve. Then, using the meta-model, we predict the performance \(p_{ij}\) of each LM \(l_{i}\) when running the query \(q_{j}\). At the same time, we estimate the cost \(c_{ij}\) of running each query \(q_{j}\) using LM \(l_{i}\). After this is done, the user needs to specify one of the assignment strategies described below, and optional cost-performance requirements. The strategy will then be used to assign each query to at most one of the LMs.
**Meta-model and cost estimation.** In order to know which LM to use for a certain query, we first have to be able to predict the performance metric \(p_{ij}\) an LM \(l_{i}\) achieves when solving this query \(q_{j}\). To do that, we train a meta-model. In our case, the meta-model is a binary classifier. During training, we send a query \(q_{j}\), to which we append a token representing LM \(l_{i}\), as input to the meta-model, while the targets are 1 or 0, depending on whether LM \(l_{i}\) solves the query \(q_{j}\) or not. Our meta-model is significantly smaller than all the LMs we are working with (cf. Sec. 4.1) and it is trained on a diverse set of tasks, which allows it to be suitable for general use (cf. Sec. 4.1).
It is worth noting that one can train a meta-model tailored to their own needs, with different LMs in the pool or different datasets, and plug it into the framework pipeline.
Along with the measure of performance, we have to estimate the cost \(c_{ij}\) of running the query \(q_{j}\) with each model \(l_{i}\). In our case, we decide on using API pricing to do these estimations. Depending on the individual case and LMs in the pool, the cost function can be defined differently. For more details on our implementation for experiments, see Sec. 4.1.
**Assignment strategies.** Once we have a functional meta-model, we need to decide how to assign each query to one of the possible LMs in the pool, based on performance and cost estimates. We call the method to do this a _strategy._ There are two types of strategies: (i) **Cost-insensitive strategies**: When applying cost-insensitive strategies to the samples, we do not consider any constraints on the budget or performance that the user might have set. Each data sample is treated in the same way, independently of the whole batch. We define the following cost-insensitive strategies:
(a) _Single-model strategy:_ This strategy implies applying a single, fixed LM from the available LMs to each sample.
(b) _Performance-maximizing strategy:_ This strategy is based on the outputs of the meta-model. For each sample, we choose the LM that, according to the meta-model, is predicted to achieve the highest performance.
(c) _Thresholding strategy:_ This strategy is also based on the outputs of the meta-model. The user has to specify an acceptable performance threshold that defines whether a task is solved or not. Outputs are binarized according to that threshold. A concrete example where this strategy might be useful are tasks that are evaluated with binary metrics such as accuracy. The strategy works by choosing the cheapest LM that solves the data sample. In cases where none of the LMs solve the sample according to our meta-model, we examine two possibilities: choosing the smallest (and inherently cheapest) LM or choosing the biggest LM (generally considered the most powerful) for that data sample.
(ii) **Cost-sensitive strategies**: Contrary to cost-insensitive strategies, in this setting, we consider constraints, such as cost constraint, set by the user for the batch of data samples in its entirety. This transforms the problem into an optimization problem. We employ the following cost-sensitive strategies:
(a) _Cost-oriented ILP strategy:_ We formulate the problem of assigning an LM to each sample as an integer linear programming (ILP) problem. We define \(M\) as a set of LMs, \(S\) as a set of samples that need to be assigned to LMs, and \(C_{\text{max}}\) as the maximum total cost of running all the samples. A binary variable \(x_{ij}\) is introduced to describe the assignment (or lack of it) between a data sample \(j\) and an LM \(i\). If \(x_{ij}=1\), sample \(j\) is assigned to the model \(i\). The sample does not necessarily have to be assigned to any LM. Assignment of each sample \(j\) with each LM \(i\) is associated with cost \(c_{ij}\) and value \(p_{ij}\), where cost \(c_{ij}\) corresponds to the estimated cost, and value \(p_{ij}\) corresponds to the predicted performance metric when using LM \(i\) to solve the sample \(j\). The goal is to maximize the performance on the whole set of samples while respecting the cost constraint. This problem is then formalized as an ILP as follows:
maximize \[\sum_{i\in M,j\in S}p_{ij}x_{ij}\] (1) s.t. \[\sum_{i\in M}x_{ij}\leq 1,\ \ \forall j\in S\] (2) \[\sum_{i\in M,j\in S}c_{ij}x_{ij}\leq C_{\text{max}}\] (3)
In the ILP above, (2) ensures that every data sample is assigned to at most one LM.
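A minimal sketch of this ILP using the PuLP library (the solver library used in Sec. 4.2); variable and function names are illustrative:

```python
import pulp

# Sketch of the cost-oriented ILP of Eqs. (1)-(3); p[i][j] are the
# meta-model scores, c[i][j] the cost estimates. Names are illustrative.
def assign(p, c, C_max):
    models, samples = range(len(p)), range(len(p[0]))
    prob = pulp.LpProblem("cost_oriented", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (models, samples), cat="Binary")
    # objective: total expected performance, Eq. (1)
    prob += pulp.lpSum(p[i][j] * x[i][j] for i in models for j in samples)
    for j in samples:  # each sample assigned to at most one LM, Eq. (2)
        prob += pulp.lpSum(x[i][j] for i in models) <= 1
    # total cost within budget, Eq. (3)
    prob += pulp.lpSum(c[i][j] * x[i][j]
                       for i in models for j in samples) <= C_max
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {j: i for i in models for j in samples
            if x[i][j].value() == 1}
```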
(b) _Performance-oriented ILP strategy_: Similar to the previous case, we formulate this problem in the form of an ILP. The goal in this case is to minimize the cost, while respecting the performance constraint \(P_{\text{min}}\) the user has set. In short, following the same notation as before, this problem is formalized as follows:
minimize \[\sum_{i\in M,j\in S}c_{ij}x_{ij}\] (4) s.t. \[\sum_{i\in M}x_{ij}\leq 1,\quad\forall j\in S\] (5) \[\sum_{i\in M,j\in S}p_{ij}x_{ij}\geq P_{\text{min}}\] (6)
This strategy can also be implemented by thresholding the performance metric, as is done for the thresholding strategy. In this case, this strategy can be viewed as minimizing the cost of solving at least \(P_{\text{min}}\) samples.
(c) _Greedy strategy:_ This strategy works by going through the samples sequentially and taking the LM achieving the highest performance when solving the sample, according to the meta-model, until it reaches the specified cost constraint. If the cost constraint is reached, remaining data samples remain unassigned and are not run by any of the LMs in the pool. For our experiments (cf. Sec. 4.2), they are counted as incorrect (for accuracy evaluation) with the cost equal to zero.
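A minimal sketch of the greedy strategy; as above, names are illustrative and `p` and `c` are the predicted performance and cost matrices:

```python
# Sketch of the greedy strategy: pick the best-predicted LM per sample in
# order until the budget C_max is exhausted; the rest stay unassigned.
def greedy_assign(p, c, C_max):
    assignment, spent = {}, 0.0
    for j in range(len(p[0])):
        best = max(range(len(p)), key=lambda i: p[i][j])
        if spent + c[best][j] > C_max:
            break  # budget reached; remaining samples are not run
        assignment[j] = best
        spent += c[best][j]
    return assignment
```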
## 4. Experiments
### Meta-model evaluation
**Data.** As the main source of data, we use raw LM runs from the HELM project [14]. Raw runs consist of inputs (queries and full prompts) sent to the LM, generation parameters, ground truth references, and LM outputs with additional details such as log probability and the time it took to execute the prompt.
Raw model runs are released for a wide range of datasets covering multiple tasks. While all of these tasks are in the same input format (query), the standard metrics used to evaluate the quality of the output can be vastly different between tasks. For example, summarization output is often evaluated using the ROUGE score, which takes continuous values from 0 to 1, and question-answering can be evaluated using EM (exact match), which is a binary metric.
For the sake of this paper, we focus on tasks for which there is a clear answer to whether the model's output solves the query or not. This means we focus on tasks that are normally evaluated only using binary metrics. In particular, the datasets we are working with are evaluated with one of the following metrics:
* **Exact match (EM)**: Output of LM matches the ground truth reference exactly as strings.
* **Quasi-exact match**: As defined by Liang et al. [14], this metric extends the EM condition to outputs that are slightly processed (e.g., by lower-casing, removing white-space, punctuation, and articles).
* **Equivalent**: LM output has to be mathematically equal to the ground truth reference.
Following the terminology introduced by Liang et al. [14], we refer to all of these metrics, applied to the whole set of samples, as _accuracy_. For more details on datasets and types of tasks, see Table 1.
To train and evaluate the meta-model, we use queries to LMs as input. For the output, we calculate the score based on the ground truth reference and the LM output. The metric used to calculate the score depends on the task and dataset the query comes from. We append the LM token \([LM_{i}]\) to the input to indicate for which LM we are predicting the probability of solving the query.
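For illustration, a training pair could be constructed along the following lines; `quasi_exact_match` is a simplified stand-in for the metrics above:

```python
import string

# Illustrative construction of a meta-model training pair: the query with
# an appended LM token, and a binary label from a simplified quasi-exact
# match between the LM output and the reference.
def quasi_exact_match(output: str, reference: str) -> bool:
    norm = lambda s: s.lower().translate(
        str.maketrans("", "", string.punctuation)).strip()
    return norm(output) == norm(reference)

def make_example(query: str, lm: str, lm_output: str, reference: str):
    text = f"{query} [{lm}]"  # append the token representing the LM
    label = int(quasi_exact_match(lm_output, reference))
    return text, label
```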
**LMs.** For the experiments, we opt to work with a set of OpenAI models tested by Liang et al. [14]. While the size of these LMs is not publicly available, all of them differ in their capabilities3. One advantage of this set of models is that there is a fairly straightforward way to calculate the cost for each sample as the price of running the query with the selected LM. To estimate the cost of the output, which is unknown prior to running each query, we calculate the average length of the outputs (in tokens) from our raw runs dataset and apply the same pricing as for the query. For details on pricing at the time of training and average output lengths, see Table 2.
Footnote 3: [https://platform.openai.com/docs/models/overview](https://platform.openai.com/docs/models/overview)
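As an illustration, the cost estimate described above amounts to the following computation, with prices and average output lengths taken from Table 2 (the token count of the query is assumed to be given):

```python
# Illustrative cost estimate: API price per 1K tokens applied to the query
# length plus the average output length from Table 2.
PRICE_PER_1K = {"text-ada-001": 0.0004, "text-babbage-001": 0.0005,
                "text-curie-001": 0.002, "text-davinci-002": 0.02}
AVG_OUT_LEN = {"text-ada-001": 6.85, "text-babbage-001": 7.18,
               "text-curie-001": 7.01, "text-davinci-002": 8.41}

def estimated_cost(query_tokens: int, model: str) -> float:
    return PRICE_PER_1K[model] * (query_tokens + AVG_OUT_LEN[model]) / 1000
```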
**Implementation.** Our meta-model is a DistilBERT model4 (66M parameters), finetuned on the collected dataset of raw runs. The model was trained using the Adam optimizer with learning rate \(3\times 10^{-5}\) and 0.1 gradient clipping on the Euclidean norm. The model was trained for 3000 steps with batch size 16 and a polynomial learning rate scheduler with a final learning rate of 0. Training was performed on a machine with a single Tesla T4 16GB GPU, taking around 2h.
Footnote 4: Model that was finetuned was initialized with weights of ‘distilbert-base-uncased’ model ([https://huggingface.co/distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased))
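A minimal sketch of such a fine-tuning setup with the Hugging Face transformers library, using the hyperparameters stated above (the data pipeline and learning rate schedule are omitted):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
meta = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # solved / not solved
opt = torch.optim.Adam(meta.parameters(), lr=3e-5)

def train_step(texts, labels):
    # texts are queries with the LM token appended, labels are 0/1
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    out = meta(**batch, labels=torch.tensor(labels))
    out.loss.backward()
    torch.nn.utils.clip_grad_norm_(meta.parameters(), 0.1)
    opt.step()
    opt.zero_grad()
    return out.loss.item()
```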
As a baseline to compare against, we use a dummy classifier that always predicts the most frequent class, depending on the dataset which the query comes from. It is worth noting that during inference, the meta-model works only with the query, without the need to specify the dataset from which the query comes.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
Dataset & Task type & Evaluation metric & Train size & Val size & Test size \\ \hline
MMLU & QA & Exact match (EM) & 434 & 47 & 86 \\
RAFT & Text classification & Quasi-exact match & 341 & 33 & 66 \\
WikiFact & QA & Quasi-exact match & 669 & 669 & 1189 \\
BoolQ & QA & Quasi-exact match & 765 & 85 & 150 \\
TruthfulQA & QA & Exact match (EM) & 500 & 56 & 98 \\
IMDB & Sentiment analysis & Quasi-exact match & 765 & 85 & 150 \\
Entity matching & Reasoning & Quasi-exact match & 1071 & 119 & 210 \\
Data imputation & Reasoning & Quasi-exact match & 324 & 37 & 63 \\
bAbI & Reasoning & Quasi-exact match & 765 & 85 & 150 \\
MATH & Reasoning & Equivalent & 334 & 37 & 66 \\
GSM8K & Reasoning & Exact match (EM) & 765 & 85 & 150 \\
LSAT & Reasoning & Quasi-exact match & 353 & 39 & 69 \\
LegalSupport & Reasoning & Quasi-exact match & 765 & 85 & 150 \\
CivilComments & Toxicity detection & Quasi-exact match & 765 & 85 & 150 \\ \hline
Total & - & - & 14016 & 1547 & 2747 \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Specifications of the datasets.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & text-ada-001 & text-babbage-001 & text-curie-001 & text-davinci-002 \\ \hline
pricing (\$ per 1K tokens) & 0.0004 & 0.0005 & 0.002 & 0.02 \\
average output length & 6.85 & 7.18 & 7.01 & 8.41 \\ \hline \hline
\end{tabular}
\end{table}
Table 2. Specifications of available LMs.
**Evaluation metrics.** To evaluate the performance of the meta-model, we use standard metrics: accuracy, precision, recall, and F1 score. We calculate macro scores to give equal weight to each class, regardless of its frequency. We additionally calculate ROC-AUC and PR-AUC scores. To distinguish between the accuracy of the meta-model and that of the actual language models, we refer to the meta-model's accuracy as _meta-accuracy_.
**Results.** In Table 3 we present results of meta-model evaluation. Along with results on the whole testing dataset, we present results stratified by datasets, tasks, and LMs.
Overall performance of the meta-model is good, which is reflected in high scores over all the metrics calculated. Even though the difference between the meta-model and the dummy classifier is not big (due to the usually large class imbalance, as indicated by the meta-accuracy of the dummy classifier; cf. column 7 of Table 3), it is significant. Additionally, as already mentioned above, in a real-world setting, we would not have access to the dataset which the query comes from (the query might even correspond to a task not seen during meta-model training), such that the dummy classifier is not a practical option, but merely an evaluation baseline. Finally, the metric we ultimately care about is not meta-accuracy, but rather the accuracy of the LM chosen based on the meta-model's predictions (as evaluated in Sec. 4.2).
Stratification over datasets and tasks uncovers that there are certain types of datasets and tasks for which it is harder to predict LMs' behavior. These are more complex tasks, which are not trivial for any of the LMs, such as subtypes of the reasoning task.
Stratification over LMs shows that it is harder to predict the behavior of the bigger LMs. Smaller LMs are generally able to work on simpler tasks (for example, see _CivilComments_ plot in Fig. 4 and _Sentiment analysis_ plot in Fig. 5), but they fail on more complex tasks (see _GSM8K_ plot in Fig. 4). Bigger LMs are, on the other hand, able to solve some complex tasks, which means that the meta-model has to understand more complicated patterns to correctly predict whether they can solve the task or not.
_Calibration._ We assess calibration via calibration plots obtained by grouping output probabilities into bins of equal width and then plotting the fraction of positive samples against the mean value of the bin. In Fig. 2(a) we can see that the meta-model is well calibrated, as the calibration curve lies close to the diagonal, which represents perfect calibration. (For task-specific calibration plots, see Fig. 2(b).)
_Generalization._ In an attempt to evaluate how well this meta-model would generalize, we retrain it on all datasets but one at a time. We then evaluate the performance of these meta-models on the respective left-out datasets. Results are presented in Table 4. By comparing these numbers with the ones presented in Table 3, we can notice that, while the numbers are generally a little lower, the meta-model is able to generalize to the left-out dataset.
### Framework evaluation
**Implementation.** In order to evaluate the performance of the framework as a whole, we assign each sample in the test set to one of the LMs in our pool (or none) and calculate the accuracy achieved by running these samples with the assigned LMs. The assignment is done by applying different strategies to the outputs of the meta-model. Additionally, we evaluate cost as described earlier in Sec. 4.1. Based on those two values, we make a cost-accuracy plot. Cost-accuracy plots (cf. Fig. 3) present the relationship between accuracy achieved using the chosen strategy and the average cost per query needed to perform the run. For cost-sensitive strategies, we run this analysis for a range of cost constraints to determine the trend of cost-performance trade-off.
We opt to evaluate only one ILP-based strategy--the cost-oriented one--because the two ILP strategies can be seen as two sides of the same coin. While the performance-oriented ILP strategy is useful in practice, as it offers the user the possibility to estimate the budget needed for a desired quality of results, the same information is visible from the cost-oriented strategy results on the cost-accuracy plot when it is run for a range of cost constraints.
To solve the ILP problem for the cost-oriented strategy, we use the PuLP5 library with the PULP_CBC_CMD solver. Given the simplicity of the ILP, the total time of assignment for all the queries is overshadowed by the run time of the meta-model. For our testing dataset, obtaining all probabilities from the meta-model takes a few minutes, while solving the ILP takes less than a second. Nonetheless, we leave the option to the user to specify a time limit for the ILP execution. If the problem is not solved by the time it reaches the limit, the user will be left with a possibly suboptimal solution.
Footnote 5: [https://coin-or.github.io/pulp/](https://coin-or.github.io/pulp/)
**Oracle.** We calculate the optimal assignment on the ground truth data by always choosing the cheapest LM that solves the sample. In cases when no LM solves the sample, we assume that the best option is not to send this sample to any of the available LMs. In this case, the sample is counted as incorrect when calculating the accuracy, and it is assigned with the cost equal to zero when calculating the average cost.
**Results.** In Fig. 3, we present the results of the framework evaluation in the form of a cost-accuracy plot. First, we can see that the oracle is not only cheaper than choosing the biggest LM (84.46% lower cost), but it also performs better (+10.74% in accuracy). This means that there are cases where a smaller LM performs not only on par with, but better than the bigger options. Next, if we look at the single-model strategy results, we can notice a clear difference between the LMs. For comparison, the cheapest LM (text-ada-001) is 98.05% cheaper than the biggest LM (text-davinci-002), while achieving 18.18% (absolute) lower accuracy. There is no significant difference between the two smallest models (text-ada-001 and text-babbage-001) in either cost or accuracy.
Two additional cost-insensitive strategies, the performance-maximizing approach and the thresholding strategy, exhibit comparable performance in terms of accuracy to the largest model (text-davinci-002). Notably, the thresholding strategy stands out for not only its effectiveness but also its much lower cost. In terms of cost effectiveness, it offers a 62.1% (absolute) reduction relative to text-davinci-002, while the performance-maximizing strategy offers an 11.5% cost reduction relative to text-davinci-002.
Next, we focus on the cost-sensitive strategies. Note that, for each of these, Fig. 3 shows multiple points, one per maximum available budget (corresponding to the value on the \(x\)-axis). By employing
cost-sensitive strategies, we are able to further save budget, while achieving essentially the same accuracy as the largest model (text-davinci-002). In particular, with the cost-oriented ILP strategy, we match text-davinci-002's accuracy for only 37.21% of the cost. The greedy strategy, on the other hand, performs much worse: in this case, we need 88.57% of the budget for the same result.
Although the oracle indicates that there is space for improvements in terms of accuracy as well, we did not manage to achieve better accuracy than text-davinci-002 with any budget given to the framework.
\begin{table}
\begin{tabular}{l c c c c c c|c c c c} \hline \hline
 & \multicolumn{6}{c}{Meta-model} & \multicolumn{4}{c}{Dummy classifier (majority label per dataset)} \\
 & Meta-accuracy & Precision & Recall & F1 & ROC-AUC & PR-AUC & Meta-accuracy & Precision & Recall & F1 \\ \hline \hline
_Overall_ & 81.57 ± 0.78 & 79.90 ± 0.81 & 80.65 ± 0.64 & 80.26 ± 0.73 & 80.62 ± 0.83 & 79.30 ± 0.83 & 79.60 ± 0.89 & 78.03 ± 0.83 & 77.25 ± 0.70 & 77.62 ± 0.71 \\ \hline
_Dataset_ & & & & & & & & & & \\
MMLU & 73.08 ± 1.53 & 69.22 ± 6.48 & 61.63 ± 0.44 & 61.89 ± 1.12 & 61.96 ± 4.37 & 58.47 ± 0.90 & 68.59 ± 0.41 & 34.32 ± 2.21 & 50.00 ± 0.00 & 40.69 ± 1.51 \\
RAFT & 67.59 ± 1.37 & 68.98 ± 1.58 & 65.64 ± 5.52 & 65.76 ± 1.16 & 65.83 ± 1.53 & 79.60 ± 1.42 & 55.53 ± 1.52 & 27.71 ± 2.41 & 50.00 ± 0.00 & 35.48 ± 2.32 \\
WikiFact & 87.21 ± 1.82 & 71.06 ± 1.26 & 62.55 ± 1.17 & 64.66 ± 1.18 & 62.36 ± 1.15 & 44.94 ± 1.39 & 66.95 ± 0.59 & 43.47 & 50.00 ± 0.00 & 46.52 ± 0.23 \\
Entity matching & 90.00 ± 1.18 & 94.87 ± 1.81 & 57.67 ± 1.44 & 50.36 ± 1.47 & 57.25 ± 1.33 & 94.83 ± 1.18 & 88.17 ± 1.20 & 44.05 ± 0.00 & 50.00 ± 0.00 & 46.83 ± 0.05 \\
Data imputation & 81.02 ± 4.78 & 77.10 ± 1.77 & 70.32 ± 5.37 & 71.90 ± 7.20 & 69.31 ± 1.67 & 90.75 ± 2.03 & 73.92 ± 1.57 & 37.07 ± 2.24 & 50.00 ± 0.00 & 42.41 ± 1.74 \\
BoolQ & 64.82 ± 1.25 & 65.33 ± 1.10 & 56.93 ± 2.20 & 54.07 ± 1.15 & 56.94 ± 1.26 & 80.22 ± 1.26 & 60.85 ± 1.39 & 30.47 ± 1.25 & 50.00 ± 1.00 & 37.84 ± 1.12 \\
TruthfulQA & 76.02 ± 1.06 & 70.48 ± 1.58 & 60.55 ± 5.51 & 68.85 ± 5.58 & 67.81 ± 6.19 & 77.37 ± 1.51 & 43.56 ± 1.05 & 35.57 ± 1.25 & 50.00 ± 0.00 & 41.15 ± 1.17 \\
IMDB & 86.35 ± 2.21 & 43.42 ± 1.49 & 49.99 ± 6.30 & 72.04 ± 7.04 & 95.91 ± 1.09 & 86.90 ± 1.20 & 68.90 ± 1.20 & 43.58 ± 1.10 & 50.00 ± 0.00 & 46.54 ± 0.07 \\
bAbI & 74.57 ± 3.72 & 73.48 ± 3.22 & 74.11 ± 3.53 & 73.78 ± 1.52 & 73.95 ± 1.53 & 75.18 ± 1.36 & 59.16 ± 4.28 & 96.53 ± 1.09 & 50.00 ± 0.00 & 37.18 ± 1.05 \\
MATH & 92.89 ± 1.39 & 75.70 ± 2.18 & 52.42 ± 3.37 & 52.91 ± 1.01 & 52.38 ± 6.04 & 56.43 ± 1.51 & 92.56 ± 1.24 & 46.05 ± 1.15 & 50.00 ± 0.00 & 48.01 ± 0.04 \\
GSM8K & 90.22 ± 1.77 & 45.51 ± 5.00 & 50.00 ± 6.77 & 60.50 ± 5.00 & 50.00 ± 54.50 & 54.50 ± 1.11 & 90.55 ± 5.89 & 50.00 ± 47.64 & 67.05 ± 0.05 \\
LSAT & 76.83 ± 4.58 & 38.48 ± 2.13 & 50.00 ± 4.37 & -1.15 & 50.00 ± 0.00 & 61.54 ± 2.73 & 71.94 ± 4.86 & 38.28 ± 2.56 & 50.00 ± 1.18 \\
LegalSupport & 49.45 ± 3.3 \\ \hline \hline
\end{tabular}
\end{table}
Table 3. Meta-model evaluation results, overall and stratified by dataset.
framework. This happens because our meta-model is not able to correctly identify the cases where text-davinci-002 fails to do the task correctly but some of the smaller models do so successfully. This scenario occurs in 10.72% of cases, yet we manage to identify it only 0.68% of the time. For comparison, 33.03% of data samples are successfully solved by both text-davinci-002 and one of the smaller models, and we correctly recognize this scenario 78.56% of the time.
In Fig. 4 and Fig. 5 we present cost–accuracy plots stratified by datasets and tasks, respectively. First, by focusing on the oracle and text-davinci-002 results in these plots, we observe that, for most of the datasets and tasks, the oracle indicates the same potential to improve both the accuracy and the cost as in the overall case. In rare cases (e.g., MATH, GSM8K), there is little room for improvement in terms of accuracy, as small LMs fail to do the task in almost all cases.
Second, patterns similar to the ones present in Fig. 3 are also visible for most of the datasets and tasks. For example, using the cost-oriented ILP strategy on the question-answering (QA) task, we are able to achieve the same performance as text-davinci-002 for 71.76% of the price. For the reasoning and text classification tasks, these percentages are 83.41% and 19.15%, respectively.
For some of the tasks, smaller LMs are also able to solve the queries as well as, or even better than, the bigger LM. As an example of the former, on the sentiment analysis task, text-babbage-001 drops only 4.83% in accuracy compared to text-davinci-002 at just 2.35% of its price. In the latter case, on the toxicity detection task, using text-ada-001 results in a 19.87% jump in accuracy compared to text-davinci-002, while spending only 1.99% of the budget used for running text-davinci-002.
Thanks to this, depending on the dataset, there are cases where using the meta-model helps us achieve both lower cost and higher performance than text-davinci-002. For example, on the CivilComments dataset (which is a part of the toxicity detection task), using the cost-oriented ILP strategy results in 20.1% (absolute) higher accuracy for only 1.99% of the cost of running text-davinci-002. This result essentially matches the oracle for this dataset.
## 5. Discussion
### Further use cases of the framework
In the previous sections (cf. Sec. 3.2 and Sec. 4.2), we introduced and evaluated two ILP-based strategies: cost-oriented and performance-oriented. This is, however, not the only potential use of our framework. The user can form an objective function as a custom combination of the terms involving cost and performance and, similarly, use any combination of the cost and performance constraints. This allows for greater flexibility in practical applications, as illustrated by the sketch below.
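As an illustration, the following minimal sketch sets up such a custom objective with the PuLP library; it is written under assumed inputs and is not the exact formulation used in our experiments. Here `cost[i][j]` is the known price of sending query `i` to LM `j`, `p_success[i][j]` is the meta-model's predicted probability that LM `j` solves query `i`, and `alpha`, `beta`, and `budget` are hypothetical knobs for mixing the cost and performance terms.

```python
import pulp

def assign_queries(cost, p_success, alpha=1.0, beta=1.0, budget=None):
    """Assign each query to exactly one LM, minimizing a custom mix of
    total cost and expected failures (sketch of a combined ILP)."""
    n, m = len(cost), len(cost[0])
    prob = pulp.LpProblem("celmoc_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts(
        "x", [(i, j) for i in range(n) for j in range(m)], cat="Binary")
    # Objective: weighted cost minus weighted expected success.
    prob += pulp.lpSum((alpha * cost[i][j] - beta * p_success[i][j]) * x[(i, j)]
                       for i in range(n) for j in range(m))
    # Each query goes to exactly one LM.
    for i in range(n):
        prob += pulp.lpSum(x[(i, j)] for j in range(m)) == 1
    # Optional constraint on total cost.
    if budget is not None:
        prob += pulp.lpSum(cost[i][j] * x[(i, j)]
                           for i in range(n) for j in range(m)) <= budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [next(j for j in range(m) if x[(i, j)].value() > 0.5)
            for i in range(n)]
```

Setting `alpha=0` together with the budget constraint mirrors the performance-oriented strategy, while `beta=0` with a performance constraint in place of the budget would recover the cost-oriented one.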
Additionally, CELMOC does not have to be used only to decide which LM should run a certain query. By fixing the LM and calculating the output probabilities for different phrasings of the desired query, one could use the framework to identify the best possible prompt without spending money on direct LM calls. For this application, it would be advisable to retrain the meta-model to fit the task better, as the current meta-model was not exposed to small changes in queries during training, and its behavior in such cases is unknown.
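A minimal sketch of this use case, assuming an sklearn-style `meta_model` and a hypothetical `featurize` helper that builds the meta-model input for a (phrasing, LM) pair:

```python
import numpy as np

def pick_prompt(meta_model, featurize, candidates, lm="text-davinci-002"):
    """Rank candidate phrasings of a query by the meta-model's predicted
    success probability for a fixed LM, without any paid API calls."""
    scores = np.array([meta_model.predict_proba(featurize(c, lm))[0, 1]
                       for c in candidates])
    best = int(scores.argmax())
    return candidates[best], float(scores[best])
```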
### Practical considerations
With the emergence of new LMs, our meta-model may quickly become stale, as it cannot route queries to the newer, likely more capable LMs. Additionally, as claimed by Chen et al. (Chen et al., 2019), the behavior of GPT-3.5 and GPT-4 is changing over time. While in this paper we do not focus on these two LMs, it is not unreasonable to assume that, with newer versions, the LMs we have been working with exhibit the same behavioral shifts. To minimize the effect of these changes on our framework, the meta-model should be retrained to take into account any updates in the existing LMs as well as the newly introduced LMs. Because our meta-model is
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Dataset & Meta-acc. & Precision & Recall & F1 & ROC-AUC & PR-AUC \\ \hline \hline MMLU & 71.78\(\pm\)0.41 & 67.85\(\pm\)0.50 & 56.29\(\pm\)1.50 & 56.95\(\pm\)1.47 & 57.77\(\pm\)0.44 & 55.12\(\pm\)0.47 \\ RAFT & 54.20\(\pm\)1.53 & 53.49\(\pm\)1.50 & 52.83\(\pm\)1.51 & 56.71\(\pm\)1.54 & 53.01\(\pm\)1.47 & 70.87\(\pm\)0.52 \\ WikiFact & 72.30\(\pm\)1.57 & 67.91\(\pm\)1.50 & 62.93\(\pm\)1.50 & 57.51\(\pm\)1.47 & 62.78\(\pm\)1.78 & 39.82\(\pm\)1.47 \\ Entity matching & 77.00\(\pm\)1.55 & 55.07 & 57.50\(\pm\)1.50 & 55.58\(\pm\)1.57 & 57.03\(\pm\)1.47 & 94.02\(\pm\)1.46 \\ Data imputation & 71.50\(\pm\)1.50 & 64.20\(\pm\)1.50 & 64.63\(\pm\)1.40 & 66.67\(\pm\)1.55 & 88.91\(\pm\)1.50 & 66.95\(\pm\)1.48 \\ BoolQ & 59.22\(\pm\)1.50 & 59.30\(\pm\)1.50 & 59.45\(\pm\)1.60 & 59.23\(\pm\)1.58 & 59.80\(\pm\)1.56 & 76.95\(\pm\)1.48 \\ TruthfulQA & 75.04\(\pm\)1.50 & 68.59\(\pm\)1.50 & 61.06\(\pm\)1.64 & 66.14\(\pm\)1.63 & 65.32\(\pm\)1.47 & 58.89\(\pm\)1.49 \\ IMDB & 38.99\(\pm\)1.52 & 62.50\(\pm\)1.54 & 53.44\(\pm\)1.50 & 56.90\(\pm\)1.54 & 54.76\(\pm\)1.62 & 69.05\(\pm\)1.42 \\ bAbI & 64.38\(\pm\)1.53 & 62.75\(\pm\)1.55 & 60.53\(\pm\)1.60 & 60.33\(\pm\)1.69 & 61.19\(\pm\)1.54 & \\ MATH & 80.90\(\pm\)1.50 & 65.41\(\pm\)1.50 & 79.06\(\pm\)1.55 & 60.55\(\pm\)1.74 & 61.41\(\pm\)1.43 & 63.11\(\pm\)1.42 \\ GSM8K & 89.55\(\pm\)1.26 & 65.68\(\pm\)1.01 & 60.36\(\pm\)1.65 & 61.95\(\pm\)1.01 & 60.37\(\pm\)1.53 & 33.90\(\pm\)1.64 \\ LSAT & 74.18\(\pm\)1.14 & 75.10\(\pm\)1.50 & 50.44\(\pm\)1.47 & 47.64\(\pm\)1.01 & 50.37\(\pm\)1.38 & 26.09\(\pm\)1.49 \\ LegalSupport & 49.88\(\pm\)1.14 & 49.09\(\pm\)1.07 & 49.65\(\pm\)1.48 & 48.99\(\pm\)1.46 & 49.63\(\pm\)1.13 & 63.85\(\pm\)1.13 \\ CivilComments & 71.36\(\pm\)1.50 & 41.21\(\pm\)1.50 & 45.99\(\pm\)1.02 & 43.44\(\pm\)1.29 & 46.10\(\pm\)1.04 & 87.78\(\pm\)1.71 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Performance of the meta-model on a dataset left out of training. For each row, a meta-model is trained on all datasets except the one denoted in the row; testing is then performed on the denoted dataset.
Figure 3. Cost–accuracy plot. Accuracy and average cost per query (in US$) achieved by assigning every query from the set to an LM. The plot shows results obtained using assignment strategies from Sec. 3.2. Single-model strategies for each LM are marked by the number under them: (1) text-ada-001, (2) text-babbage-001, (3) text-curie-001, and (4) text-davinci-002. Two thresholding strategies for cases when none of the LMs solve the data sample are marked by a letter under them: choosing (a) the biggest and (b) the smallest LM. Error bars are 95% confidence intervals.
small, training is cheap and takes little time, so it is easy to retrain frequently.
In addition, we calculated the costs of running different LMs with the pricing from OpenAI at the time of meta-model training. These prices have changed in the past and will probably change in the future. When such changes happen, the cost parameters should be updated in the framework, since they affect the cost–accuracy plots and, consequently, the assignment of data samples to LMs.
## 6. Conclusion
In this paper, we address the problem of the increasing costs of running LMs for various tasks. We introduce CELMOC, a framework that automatically decides which LM is the best option for a given input query. By applying CELMOC to the union of 14 datasets, we are able to reduce costs by 62.79% while maintaining the same performance as the biggest LM in our pool. We manage to substantially cut costs on all task types and on most of the datasets we evaluate on. We also release a library that enables easy use of our framework, as well as further developments.
Figure 4. Cost–accuracy plot per dataset. Accuracy and average cost per query (in US$) achieved by employing strategies from Sec. 3.2 on the test dataset stratified by datasets from Table 1. Single-model and thresholding strategies follow the same trends as in Fig. 3. Error bars are 95% confidence intervals.
Figure 5. Cost–accuracy plot per task. Accuracy and average cost per query (in US$) achieved by employing strategies from Sec. 3.2 on the test dataset stratified by tasks from Table 1. Single-model and thresholding strategies follow the same trends as in Fig. 3. Error bars are 95% confidence intervals. |
2303.04512 | Theory of Josephson current on a lattice model of grain boundary in
$d$-wave superconductors | Identifying the origins of suppression of the critical current at grain
boundaries of high-critical-temperature superconductors, such as cuprates and
iron-based superconductors, is a crucial issue to be solved for future
applications with polycrystalline materials.
Although the dominant factor of current suppression might arise during
material fabrication and/or processing, investigating it due to an internal
phase change of the pair potential is an important issue in understanding the
threshold of the critical current.
In this paper, we study the Josephson current on a symmetric [001]-tilt grain
boundary (GB) of a $d$-wave superconductor on a lattice model.
In addition to the suppression of the maximum Josephson current associated
with the internal phase change of the $d$-wave pair potential which has been
predicted in continuum models, we find a unique phase interference effect due
to folding of the Fermi surface in the lattice model.
In particular, the resultant maximum Josephson current at low-tilting-angle
regions tends to be suppressed more than that in preexisting theories.
Because similar suppressions of the critical current at GBs have been
reported in several experimental works, the present model can serve as a guide
to clarify the complicated transport mechanism in GBs. | Takashi Sakamori, Satoshi Kashiwaya, Rikizo Yano, Yukio Tanaka, Takafumi Hatano, Keiji Yada | 2023-03-08T11:06:42Z | http://arxiv.org/abs/2303.04512v2 | # Theory of Josephson current on a lattice model of grain boundary in \(d\)-wave superconductors
###### Abstract
Identifying the origins of suppression of the critical current at grain boundaries of high-critical-temperature superconductors, such as cuprates and iron-based superconductors, is a crucial issue to be solved for future applications with polycrystalline materials. Although the dominant factor of current suppression might arise during material fabrication and/or processing, investigating it due to an internal phase change of the pair potential is an important issue in understanding the threshold of the critical current. In this paper, we study the Josephson current on a symmetric [001]-tilt grain boundary (GB) of a \(d\)-wave superconductor on a lattice model. In addition to the suppression of the maximum Josephson current associated with the internal phase change of the \(d\)-wave pair potential which has been predicted in continuum models, we find a unique phase interference effect due to folding of the Fermi surface in the lattice model. In particular, the resultant maximum Josephson current at low-tilting-angle regions tends to be suppressed more than that in preexisting theories. Because similar suppressions of the critical current at GBs have been reported in several experimental works, the present model can serve as a guide to clarify the complicated transport mechanism in GBs.
_Keywords_: Josephson junction, HTSC, grain boundary, \(d\)-wave superconductor
## 1 Introduction
The application of high-critical-temperature superconductors (HTSCs), such as cuprates and iron-based superconductors (IBSs), to superconducting cables and joints has been the subject of research. It requires realizing a stable persistent current even if the circuit includes the grain boundaries (GBs) of the crystals because these boundaries inevitably exist inside cables and at their joints. Despite the successful development of
superconducting cables based on low-critical-temperature (low-\(T_{c}\)) superconductors, the realization of high-current-density cables based on HTSCs requires additional techniques for crystalline orientation control, such as ion-beam-assisted deposition. The critical current flow through GBs has been experimentally confirmed to be seriously suppressed when the tilting angle of the GB exceeds approximately \(5^{\circ}\)[1, 2, 3, 4] for cuprates and \(10^{\circ}\) for IBSs [5, 6, 7, 8]. Two origins of this severe suppression of the critical current have been proposed. One proposed origin is related to material issues, such as crystal growth, doping inhomogeneities, and impurity scattering around GBs [9, 10, 11]. The other proposed origin is related to the anisotropic pairing states of superconductors. Unlike conventional low-\(T_{c}\) superconductors, the pair potentials of cuprates and IBSs exhibit phase changes in the momentum space. According to the current-phase relation (CPR) of the Josephson current, the internal phase change contributes to the reversal of the current direction. Although the dominant origin of the suppression might be the former in actual cables and joints, the influences of the latter should also be clarified in detail.[12]
Focusing on cuprates, we note that the Josephson current formula for \(d\)-wave superconductor junctions has been developed mainly on the basis of the continuum theory. Sigrist and Rice [13] investigated the influence of the anisotropic pair potential on the Josephson current and proposed a theoretical reversal of the Josephson current direction due to the internal phase change. A reversal of the current's direction accompanies the transition of a 0-junction to a \(\pi\)-junction. This formula successfully explains the anomalous magnetic field dependence of superconducting quantum interference devices and the spontaneous magnetic flux in tri-crystals [14, 15, 16, 17]. These experimental results have been accepted as strong evidence for \(d\)-wave pairing symmetry of cuprates.
Another critical effect expected for \(d\)-wave Josephson junctions is the formation of zero-energy Andreev bound states (ZEABSs) at the interface [18, 19, 20, 21]. The presence of ZEABSs is predicted to enhance the higher harmonics of the CPR of the Josephson current [22, 23, 24]. The nonmonotonic temperature dependence of the critical current [25, 26], the nonsinusoidal CPR [27], and the higher harmonics of the CPR [28] due to the ZEABS have been detected experimentally in GB junctions of cuprates, supporting the validity of the theoretical models. The character of the CPR determines the free-energy minimum because the CPR is given by the derivative of the free energy with respect to the phase difference \(\phi\). When the free-energy minimum is located at \(\phi=0\) (\(\pi\)), it is referred to as a 0 (\(\pi\))-junction. In the presence of higher harmonics, the free-energy minimum might appear at neither \(\phi=0\) nor \(\pi\) but at \(\phi=\phi_{m}\); this junction is classified as a \(\phi\)-junction [22, 23].
The dependence of the maximum Josephson current on the tilting angle in HTSC GBs is an unresolved issue. The experimentally detected suppression of the critical current at GBs [2] is more drastic than the suppression predicted by the continuum models. The influences of the short coherence length have been introduced in calculations using lattice models [29, 30]. Although the nonmonotonic temperature dependence of the critical current and the existence of the \(\phi\)-junction were reproduced in a lattice model [31], the tilting-angle dependence was inconsistent with the experimental observations. These facts indicate that, to explain the Josephson current through GBs, suppression mechanisms beyond those treated in the preexisting models need to be clarified.
In the present work, we study symmetric [001]-tilt GB Josephson junctions using a lattice model in which the sites at the interface are shared by the lattices on the left and right sides (see figure 1). In this model, the periodicity of the unit cell \(m\) in the direction parallel to the interface changes in accordance with the tilting angle and the corresponding Brillouin zone (BZ) parallel to the interface is folded. When the tilting angle is reduced, folding of the BZ becomes significant with increasing \(m\). This effect strongly enhances the interference of the internal phase change of the anisotropic pair potential. We found that, as a result of the enhanced interference, the Josephson current is substantially suppressed with increasing \(m\).
The remainder of this paper is organized as follows. In section 2, we explain the model and formulation. In section 3, the results of numerical calculations are presented. In section 4, we summarize the obtained results and clarify the physics in the proposed model.
## 2 Model and formulation
We consider symmetric [001]-tilt GBs as shown in figure 1, where \((m10)\) and \((\bar{m}10)\) surfaces are connected by sharing their outermost sites. Hereafter, this structure is referred to as the \((m10)\) GB. The \((m10)\) and \((\bar{m}10)\) structures are tilted by \(\pm\theta/2\) from the untilted square lattice with coordinates \((ia,ja)\) at the lattice points, where \(i\) and \(j\) are integers and \(a\) is the lattice constant. The coordinate of a lattice point for \(x<0\) is \(ia\left(\cos\frac{\theta}{2},\sin\frac{\theta}{2}\right)+ja\left(-\sin\frac{ \theta}{2},\cos\frac{\theta}{2}\right)\); for \(x>0\), it is \(ia\left(\cos\frac{\theta}{2},-\sin\frac{\theta}{2}\right)+ja\left(\sin\frac{ \theta}{2},\cos\frac{\theta}{2}\right)\). We consider the \((m10)\) structure for \(x>0\). As shown in figure 1, the \((m10)\) structure has discrete translational symmetry along the \(y\) direction by \(\sqrt{m^{2}+1}a\). That is, the \(x\) coordinate of the lattice point at \(i=i_{0}\) and \(j=j_{0}\) is equivalent to that at \(i=i_{0}-p\) and \(j=j_{0}-mq\) with integers \(p\) and \(q\) that index the unit cell. Thus, we consider the unit cell composed of \(m\) lattice points aligned along the \(y_{R(L)}\) direction. Two translation vectors \(\hat{u}_{R(L)}\) and \(\hat{v}\) are then given by
\[\hat{u}_{R(L)} = \left(a\left(\sin\frac{\theta}{2}+\cos\frac{\theta}{2}\right),(- )a\left(\sin\frac{\theta}{2}-\cos\frac{\theta}{2}\right)\right)\] \[\hat{v} = \left(0,a\sqrt{m^{2}+1}\right). \tag{1}\]
We label the lattice point by three integers \(o\), \(p\), and \(q\); its coordinate \(r(o,p,q)\) is
\[\hat{r}(o,p,q)=o\left(\sin\frac{\theta}{2},-\mathrm{sgn}(p)\cos\frac{\theta}{ 2}\right)+p\hat{u}+q\hat{v}, \tag{2}\]
where \(o\) is the sublattice index, which takes values from \(0\) to \(m\) for \(p\geq 1\), and from \(0\) to \(-m\) for \(p\leq-1\). For \(p=0\), \(o\) takes values from \(-m\) to \(m\). The corresponding
lattice sites for the (210) GB are shown in figure 1. To consider a superconductor/normal metal/superconductor Josephson junction, we assume that the left (blue) and right (red) sites are in superconducting states with phases \(\phi_{L}\) and \(\phi_{R}\), respectively, and that the center (green) sites with \(p=0\) are in the normal state. The phase difference \(\phi\) between the left and right superconductors is given by \(\phi_{L}-\phi_{R}\). For \(p=0\), the \((m10)\) structure at \(x<0\) and the \((\bar{m}10)\) structure at \(x>0\) share the same lattice point with \(o=0\). The resultant unit cell of the normal state has a dog-legged shape with \(2m+1\) lattice points.
For this lattice structure, we consider the tight-binding model. The original bulk Hamiltonian without tilting is a single-band model with a \(d\)-wave pair potential given by
\[\mathcal{H}_{R(L)}= \sum_{k}(\varepsilon_{k}-\mu)c_{k}^{\dagger}c_{k}+\sum_{k}\Delta _{k}c_{k}^{\dagger}c_{-k}^{\dagger}+h.c., \tag{3}\] \[\varepsilon_{k}= -2t(\cos k_{x}+\cos k_{y}),\] (4) \[\Delta_{k}^{R(L)}= \Delta_{0}(\cos k_{x}-\cos k_{y})e^{i\phi_{R(L)}}, \tag{5}\]
where \(t\) and \(\Delta_{0}\) are the hopping integral and the pair potential, respectively. The matrix components of the Hamiltonian in Bogoliubov-de Gennes form in real space for \(x\geq 0\)
Figure 1: Lattice structure of the (210) GB. Blue, red, and green solid circles denote the lattice points of the left superconductor, right superconductor, and the normal metal site, respectively. The \(x_{L(R)},y_{L(R)}\) axes are in a tilted geometry.
and \(1\leq o\leq m\) are then
\[\langle o,p,q|\mathcal{H}|o-1,p+1,q\rangle = \begin{pmatrix}-t&\Delta(p)\\ \Delta^{*}(p)&t\end{pmatrix}, \tag{6}\] \[\langle o-1,p,q|\mathcal{H}|o,p,q\rangle = \begin{pmatrix}-t&-\Delta(p)\\ -\Delta^{*}(p)&t\end{pmatrix}, \tag{7}\]
and
\[\langle o=0,p,q|\mathcal{H}|o^{\prime}=m,p,q+1\rangle = \begin{pmatrix}-t&\Delta(p)\\ \Delta^{*}(p)&t\end{pmatrix}, \tag{8}\] \[\langle o=m,p,q|\mathcal{H}|o^{\prime}=0,p+1,q-1\rangle = \begin{pmatrix}-t&-\Delta(p)\\ -\Delta^{*}(p)&t\end{pmatrix}. \tag{9}\]
Similarly, we can consider corresponding matrix components for the \((\bar{m}10)\) structure in \(x<0\). Here, because the layer with \(p=0\) is in the normal state,
\[\Delta(p)=\begin{cases}e^{i\phi_{R}}\Delta_{0}/2&(p\geq 1)\\ 0&(p=0)\\ e^{i\phi_{L}}\Delta_{0}/2&(p\leq-1).\end{cases} \tag{10}\]
Using these matrices, we calculate the Green's function at the center sites and obtain the Josephson current through the normal-state sites.
To calculate the Josephson current, we define the current operator \(J\) from the equation of continuity in the lattice model:
\[e\frac{d}{dt}\{c_{ky,0,0}^{\dagger}(t)c_{ky,0,0}(t)\}=J_{LC}(k_{y})-J_{CR}(k_{ y}). \tag{11}\]
where \(c_{ky,p,o}\) (\(c_{ky,p,o}^{\dagger}\)) is the annihilation (creation) operator at the site of sublattice \(o\) in the \(p\)th layer with momentum \(k_{y}\). The left-hand side of Eq. (11) is obtained by the Heisenberg equation, and we obtain the current operator \(J_{CR}(k_{y})\):
\[J_{CR}(k_{y})=\frac{iet}{\hbar}(c_{ky,0,1}^{\dagger}c_{ky,0,0}-c_{ky,0,0}^{ \dagger}c_{ky,0,1}+e^{ik_{y}}c_{ky,0,m}^{\dagger}c_{ky,0,0}-e^{-ik_{y}}c_{ky,0,0}^{\dagger}c_{ky,0,m}). \tag{12}\]
The expectation value of \(J\) is calculated by the Green's function at \(\tau=0\), where \(\tau\) is the imaginary time. Because we consider the stationary state, \(J_{ky}(\phi)=\langle J_{LC}(k_{y})\rangle=\langle J_{CR}(k_{y})\rangle\) and \(\langle J_{CR}(k_{y})\rangle\) is given by
\[\langle J_{CR}(k_{y})\rangle=\frac{iet}{\hbar}(G_{C,0,1}(\tau=0,k _{y})-G_{C,1,0}(\tau=0,k_{y})\] \[+e^{ik_{y}}G_{C,0,m}(\tau=0,k_{y})-e^{-ik_{y}}G_{C,m,0}(\tau=0,k _{y})). \tag{13}\]
The Josephson current through the normal sites \(J(\phi)\) is given by
\[J(\phi)=\frac{1}{\sqrt{m^{2}+1}a}\int J_{ky}(k_{y},\phi)dk_{y}, \tag{14}\]
because the sites at \(x=0\) are aligned in the \(y\) direction with a period of \(\sqrt{m^{2}+1}a\).
The Green's function at \(\tau=0\) is obtained by the Matsubara Green's function:
\[G_{C}(\tau=0,k_{y})=k_{B}T\sum_{n}G_{C}(i\epsilon_{n},k_{y}), \tag{15}\]
where \(\epsilon_{n}\) is the Matsubara frequency. To calculate the Matsubara Green's function in this junction system, we first calculate the functions in the semi-infinite system: \(G_{L}(i\epsilon_{n})\) for \(p\leq-1\) and \(G_{R}(i\epsilon_{n})\) for \(p\geq 1\). The details of the calculation of \(G_{L}(i\epsilon_{n})\) and \(G_{R}(i\epsilon_{n})\) are given in the appendix. The Green's function at \(p=0\) in the \((m10)\) GB is then given by
\[G_{C}(i\epsilon_{n})=(i\epsilon_{n}I-H_{0,0}-H_{0,1}G_{R}(i\epsilon_{n})H_{1, 0}-H_{0,-1}G_{L}(i\epsilon_{n})H_{-1,0})^{-1}, \tag{16}\]
where \(H_{i,j}\) is the matrix \(H_{i,j}=\langle p=i|{\cal H}|p=j\rangle\).
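For illustration, a minimal numerical sketch of Eqs. (15) and (16) is given below, assuming the block matrices and the lead surface Green's functions have already been constructed (for a Hermitian Hamiltonian, \(H_{1,0}=H_{0,1}^{\dagger}\) and \(H_{-1,0}=H_{0,-1}^{\dagger}\)):

```python
import numpy as np

def center_green(eps_n, H00, H01, H0m1, GR, GL):
    # Eq. (16): Green's function of the normal layer p = 0, dressed by the
    # two superconducting leads through their surface Green's functions.
    I = np.eye(H00.shape[0])
    return np.linalg.inv(1j * eps_n * I - H00
                         - H01 @ GR @ H01.conj().T
                         - H0m1 @ GL @ H0m1.conj().T)

def equal_time_green(H00, H01, H0m1, GR_of, GL_of, kT, n_max=2000):
    # Eq. (15): G_C(tau = 0) = k_B T * sum_n G_C(i eps_n), with Matsubara
    # frequencies eps_n = (2n + 1) pi k_B T.  GR_of/GL_of map eps_n to the
    # lead surface Green's functions; a plain truncated sum suffices here,
    # since the +n and -n tails cancel pairwise.
    G = np.zeros_like(H00, dtype=complex)
    for n in range(-n_max, n_max):
        eps = (2 * n + 1) * np.pi * kT
        G += center_green(eps, H00, H01, H0m1, GR_of(eps), GL_of(eps))
    return kT * G
```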
We also calculate the Josephson current for the continuum model using the following equations for \(d\)-wave superconductor junctions [22, 23]:
\[R_{N}J=\frac{\pi\overline{R_{N}}k_{B}T}{e}\sum_{\epsilon_{n}}\int_{-\pi/2}^{ \pi/2}F(\varphi,i\epsilon_{n},\phi)\sin(\phi)\sigma_{N}\cos(\varphi)d\varphi, \tag{17}\]
\[\epsilon_{n}=2\pi k_{B}T\left(n+1/2\right), \tag{18}\]
\[F(\varphi,i\epsilon_{n},\phi)=\frac{2\Delta(\varphi_{+})\Delta(\varphi_{-})} {\Omega_{n,+}\Omega_{n,-}+\epsilon_{n}^{2}+\{1-2\sigma_{N}\sin^{2}(\phi/2)\} \Delta(\varphi_{+})\Delta(\varphi_{-})}, \tag{19}\]
\[\Omega_{n,\pm}={\rm sgn}(\epsilon_{n})\sqrt{\Delta^{2}(\varphi_{\pm})+ \epsilon_{n}^{2}}, \tag{20}\]
\[\Delta(\varphi_{\pm})=\Delta\cos(2(\varphi\mp\theta/2))e^{i\phi}, \tag{21}\]
\[\sigma_{N}=\frac{\cos^{2}(\varphi)}{\cos^{2}(\varphi)+Z^{2}}, \tag{22}\]
\[\overline{{R_{N}}}^{-1}=\int_{-\pi/2}^{\pi/2}\sigma_{N}\cos(\varphi)d\varphi, \tag{23}\]
and \(Z\) is the barrier parameter.
The corresponding formula for \(s\)-wave superconductor junctions is [32]
\[R_{N}J=\frac{\pi\Delta_{0}}{2e}\frac{1}{\sqrt{1-\sigma_{N}\sin^{2}(\phi/2)}} \tanh\left(\frac{\Delta_{0}}{2k_{B}T}\sqrt{1-\sigma_{N}\sin^{2}(\phi/2)} \right)\sin\phi. \tag{24}\]
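As a reference implementation, the following sketch evaluates the continuum \(d\)-wave formula of Eqs. (17)-(23) numerically (units \(k_{B}=e=1\); the gap amplitudes are kept real since the phase difference enters explicitly through \(\phi\), and \(\Delta_{0}(T)\) follows the BCS-like form of Eqs. (25)-(26) below):

```python
import numpy as np

def cpr_dwave_continuum(phi, theta, Z=1.0, T=0.05, Tc=1.0,
                        n_max=200, n_ang=401):
    """R_N * J for the continuum d-wave model; theta is the total GB
    tilt angle in radians."""
    Delta0 = 1.76 * Tc * np.tanh(1.74 * np.sqrt((Tc - T) / T))  # Eqs. (25)-(26)
    ang, dang = np.linspace(-np.pi / 2, np.pi / 2, n_ang, retstep=True)
    sigma = np.cos(ang) ** 2 / (np.cos(ang) ** 2 + Z ** 2)       # Eq. (22)
    dp = Delta0 * np.cos(2 * (ang - theta / 2))                  # Eq. (21)
    dm = Delta0 * np.cos(2 * (ang + theta / 2))
    total = 0.0
    for n in range(-n_max, n_max):
        eps = 2 * np.pi * T * (n + 0.5)                          # Eq. (18)
        Op = np.sign(eps) * np.sqrt(dp ** 2 + eps ** 2)          # Eq. (20)
        Om = np.sign(eps) * np.sqrt(dm ** 2 + eps ** 2)
        F = 2 * dp * dm / (Op * Om + eps ** 2                    # Eq. (19)
            + (1 - 2 * sigma * np.sin(phi / 2) ** 2) * dp * dm)
        total += np.sum(F * np.sin(phi) * sigma * np.cos(ang)) * dang
    Rbar_inv = np.sum(sigma * np.cos(ang)) * dang                # Eq. (23)
    return np.pi * T * total / Rbar_inv                          # Eq. (17)
```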
## 3 Results
We used \(\mu=-t\), \(k_{B}T_{c}=10^{-3}t\), and \(T=0.05T_{c}\); \(\Delta_{0}\) was taken to be Bardeen-Cooper-Schrieffer (BCS)-like, given by
\[\Delta_{0}(T)=\Delta_{0}(T=0)\tanh\left(1.74\sqrt{\frac{T_{c}-T}{T}}\right). \tag{25}\]
\[\Delta_{0}(T=0)=1.76k_{B}T_{c}. \tag{26}\]
The maximum value of \(\Delta_{k}^{L,R}\) is 1.5 \(\Delta_{0}\).
Figure 3: CPR for the (110) GB. The horizontal axis shows the phase difference, and the vertical axis shows the Josephson current.
Figure 2: Current-phase relation (CPR) for a (100) grain boundary (GB). The horizontal axis shows the phase difference and the vertical axis shows the Josephson current.
By changing \(\phi\), we obtained the CPR (figures 2 to 4). Because the present superconducting state does not break the time-reversal symmetry, the CPR is expanded as
\[J(\phi)=\sum J_{n}\sin(n\phi). \tag{27}\]
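Given a CPR sampled on a uniform \(\phi\) grid, the harmonics \(J_{n}\) of Eq. (27) follow from the orthogonality of \(\sin(n\phi)\); a short sketch:

```python
import numpy as np

def cpr_harmonics(J_samples, n_harm=5):
    """Fourier-sine coefficients J_n of the CPR, sampled on the uniform
    grid phi_k = 2*pi*k/N, k = 0, ..., N-1."""
    J = np.asarray(J_samples, dtype=float)
    N = len(J)
    phi = 2 * np.pi * np.arange(N) / N
    # J_n = (1/pi) * int_0^{2pi} J(phi) sin(n*phi) dphi  ->  (2/N) * sum
    return np.array([(2.0 / N) * np.sum(J * np.sin(n * phi))
                     for n in range(1, n_harm + 1)])
```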
The CPR for a (100) GB (figure 2) exhibits standard sinusoidal 0-junction behavior with \(J_{1}>0\), whereas it exhibits \(\pi\)-junction behavior with \(J_{1}<0\) for the (110) GB (figure 3). Because \(J_{1}\) changes sign, a region exists where \(|J_{1}|\) becomes smaller than the other higher harmonics. In this case, the CPR shows so-called \(\phi\)-junction behavior [22, 23], where the higher harmonics of the CPR are enhanced (figure 4). The maximum Josephson current \(J_{max}\) is obtained from the maximum value of \(J(\phi)\). We then calculate \(J_{max}\) for \((m10)\) GBs for \(m=1\) to \(17\). We also calculate \(J_{max}\) for the case of \(m=\infty\) (i.e., the (100) GB). Among these structures, the (110) and (100) GBs have a perfect crystalline square lattice structure without defects.
Figure 5 shows the \(J_{max}\) of \((m10)\) GBs for the \(s\)-wave and \(d\)-wave cases. First,
Figure 4: CPR for the (210) GB. The horizontal axis shows the phase difference, and the vertical axis shows the Josephson current.
Figure 5: Maximum Josephson current \(J_{max}\) for \(s\)-wave (cross) and \(d\)-wave (circle) junctions with tilting angle \(\theta\).
\(J_{max}\) for the \(s\)-wave pair potential decreases with decreasing tilting angle \(\theta\), except in the case of \(\theta=0^{\circ}\), even though the pair potential does not exhibit the phase change. This suppression of \(J_{max}\) stems from the structure of the present lattice model. In the present model, \((m10)\) GBs are connected by sharing the sites at \(x=0\) with periodicity \(\sqrt{m^{2}+1}a\) in the \(y\) direction. The number of conducting channels per length along the \(y\) direction is then proportional to \(1/\sqrt{m^{2}+1}=1/\sqrt{\tan^{-2}(\theta/2)+1}=\sin(\theta/2)\). The decrease in the number of conducting channels leads to suppression of the magnitude of \(J_{max}\).
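A quick numerical check of this relation, using \(\tan(\theta/2)=1/m\) for the \((m10)\) GB:

```python
import numpy as np

m = np.arange(1, 18)          # m = 1 (the (110) GB) through m = 17
theta = 2 * np.arctan(1 / m)  # tilt angle of the (m10) GB
# Channel density per unit length: 1/sqrt(m^2 + 1) = sin(theta/2).
print(np.allclose(1 / np.sqrt(m ** 2 + 1), np.sin(theta / 2)))  # True
```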
Next, we compare the \(J_{max}\) of \(s\)-wave and \(d\)-wave pair potentials. The \(J_{max}\) for the \(d\)-wave cases are found to be much smaller than those for the \(s\)-wave cases. To focus on the effect of the internal phase change, we plot \(J_{max}\) of \(d\)-wave superconductor junctions normalized by that of \(s\)-wave junctions in figure 6. We also show the \(\theta\) dependence of
Figure 6: The ratio of \(J_{max}\) for a \(d\)-wave junction to that of an \(s\)-wave junction. The dotted line shows the result predicted by Sigrist and Rice’s theory [13]. We also plot the corresponding results obtained by the continuum model for \(Z=0\), \(Z=1\), and \(Z=4\).
Figure 7: Momentum-resolved Josephson current \(J_{ky}\) for the (210) GB. We chose \(\phi\) where \(J\) gives the maximum Josephson current; \(k_{1,2,3}\) are defined in figure 8.
the ratio of \(J_{max}\) obtained by the phenomenological continuum model for comparison [13]. In the case of the phenomenological model, the suppression of \(J_{max}\) originates from the cancellation of the 0-phase and \(\pi\)-phase current components where the sign of \(J_{1}\) is positive and negative, respectively. Similar cancellation occurs in the present lattice model.
To confirm this mechanism, we show the \(k_{y}\) dependence of the momentum-resolved Josephson current \(J_{ky}\) in figure 7, where \(k_{y}\) is the momentum parallel to the interface. We find that the sign of \(J_{ky}\) changes at \(k_{y}=k_{1}\) and \(k_{3}\). These momenta correspond to the positions of the nodes. That is, the sign change of the pair potential causes the sign change of the first-order Josephson coupling in \(J_{ky}\). The cancellation of the first-order Josephson coupling then occurs in the integration over \(k_{y}\), and the resulting Josephson current is suppressed. Thus, the magnitude of \(J_{max}\) for \(d\)-wave junctions is smaller than that for \(s\)-wave junctions.
Although the reason for the sign change of the momentum-resolved current in the lattice model is the same as that in the continuum model, the \(\theta\) dependence of the Josephson current shown in figure 6 exhibits different behaviors. For example, at lower \(\theta\) values, the continuum model shows a single minimum value, whereas the lattice model shows oscillating behavior. We find two reasons for this difference. One reason is the folding of the Fermi surface (FS), as shown in figure 8. This folding causes a change of the position of the nodes and leads to overlap of the FS. The other reason is the change of the effective barrier potential. As \(m\) increases, the number of conducting
Figure 8: Fermi surfaces and Brillouin zone for the (210) GB. The blue (red) lines correspond to the Fermi surfaces with positive (negative) pair potential; \(k_{1}\) and \(k_{3}\) are the momenta at the nodal points of the \(d\)-wave pair potential, and \(k_{2}\) is the momentum at which the number of Fermi surfaces changes.
channels per unit length decreases. The corresponding effective barrier potential then becomes pronounced. The contribution of the 0-phase is then suppressed relative to the contribution of the \(\pi\)-phase, and the current is suppressed at lower \(\theta\). To observe these effects, we analyze the physical properties of the Josephson current in the lattice model in greater detail.
First, we show the phase difference with the minimum free energy, which is obtained by integrating the CPR with respect to \(\phi\), in figure 9. A 0-junction appears in the lower-\(\theta\) region, and a \(\pi\)-junction appears at higher \(\theta\) values. We also observe \(\phi\)-junctions at certain angles. In the case of the continuum model, a \(\phi\)-junction only appears between the 0-junction at low \(\theta\) and the \(\pi\)-junction at high \(\theta\). In the lattice model, a \(\phi\)-junction appears for the (410) GB at \(\theta\simeq 28^{\circ}\) between the 0-junction and \(\pi\)-junction regions. However, a \(\phi\)-junction also appears in the cases of the (210) and (610) GBs. This result is distinct from that of the continuum model.
To clarify this point, we examine figure 7 in greater detail. In figure 7, a sign change of the momentum-resolved Josephson current occurs at the position of the nodes, as previously noted. However, the position of the nodes in the lattice model differs from that in the continuum model. In the continuum model, a node appears at \(k_{y}=k_{F}\sin(\pi/4\pm\theta/2)\) because the Fermi surface for a symmetric-tilt GB is simply given by the rotation of the \(k_{x}\)- and \(k_{y}\)-axes. However, in the lattice model, when \(k_{y}=k_{F}\sin(\pi/4\pm\theta/2)\) is greater than the momentum of the BZ boundary, the FS is folded (figure 8). In the case of the (210) GB, the node at \(k_{y}=k_{1}\) lies closer to \(k_{3}\) than in the continuum model because it is on a folded FS. The area of the \(\pi\)-phase (\(k_{3}<|k_{y}|<k_{1}\)) then becomes small, and the negative contribution to the current decreases. The \(\phi\)-phase then appears at \(T=0.05T_{c}\), and the resultant Josephson current is smaller than that of the continuum model. Similarly, a \(\phi\)-junction also appears in the case of the (610) GB between the 0-junctions of the (510) and (710) GBs. This effect is highly sensitive to the nodal positions in the BZ of the tilted structure; thus, it strongly depends on the chemical potential \(\mu\).
Figure 9: Position of the free-energy minimum \(\phi_{m}\) plotted as a function of tilting angle \(\theta\).
A further difference between the lattice model and the continuum model is the interference caused by the folded FSs. Considering the (210) GB as an example, two FSs exist at \(|k_{y}|>k_{2}\), as shown in figure 8, and the signs of the pair potentials on these two FSs are opposite. Thus, when we consider the incident and transmitted quasiparticles in the (210) GB with \(|k_{y}|>k_{2}\), the first order of the Josephson coupling is canceled because the quasiparticles' contributions to the current have opposite signs. Indeed, as shown in figure 7, the magnitude of \(J_{ky}\) at \(|k_{y}|>k_{2}\) is much smaller than that at \(|k_{y}|<k_{3}\), where there is no folded FS.
Another feature of the proposed lattice model is that the normal conductance depends on its tilting angle. In the present model, we do not introduce the barrier potential or the change of the hopping integral \(t\) at the interface. In the case of the continuum model without a barrier potential (\(Z=0\)), a 0-\(\pi\) transition occurs at \(\theta=45^{\circ}\);
however, the \(\pi\)-phase appears at \(\theta\simeq 37^{\circ}\) for the (310) GB. Thus, an effective \(Z\) exists in the present model because of the decrease of the conducting-channel density. We can also confirm this result from the temperature dependence of \(J_{max}\) in figure 10. The temperature dependence of \(J_{max}\) for the (210) GB exhibits saturation behavior at low temperatures and is similar to the previous result reported by Kulik and Omel'yanchuk [33]. However, the temperature dependence of \(J_{max}\) for the (310) and (610) GBs shows a rapid increase at low temperatures and nonmonotonic behavior. The rapid increase of \(J_{max}\) is known to originate from the existence of the ZEABS at the interface. The results also suggest that this ZEABS induces the \(\pi\)-phase. As shown in figures 7 and 11, the 0-phase and \(\pi\)-phase coexist in the momentum-resolved Josephson current, and the area of the \(\pi\)-phase corresponds to that of the ZEABS [22, 23]. When the effective \(Z\) increases with decreasing \(\theta\), the contribution of the \(\pi\)-phase increases at lower temperatures. The nonmonotonic temperature dependence due to the 0-\(\pi\) transition and the rapid increase of the Josephson current at lower temperatures then appear.
Finally, we note that the lattice model does not show a clear increase of \(J_{max}\) with decreasing \(\theta\) at lower \(\theta\) values. As shown in figure 6, the maximum Josephson current is not suppressed for the (100) GB (\(\theta=0^{\circ}\)) because it is a perfect crystal structure. However, once the tilting angle becomes nonzero, a large suppression of \(J_{max}\) is observed. In the present calculation, the Josephson current always passes through the shared sites at \(x=0\), and their density decreases with decreasing \(\theta\). However, when \(m\) becomes large, the edges of \((m10)\) and \((\bar{m}10)\) become nearly parallel and their interval is almost the same as the lattice constant \(a\). We then must consider the effect of direct hopping along the \(x\) direction at the interface. This hopping would make the junction effectively resemble a (100) GB, likely resulting in an increase of \(J_{max}\).
We now compare the present calculation results with actual experiments. In many experimental studies of the GBs of HTSCs, the critical current has been reported to be severely suppressed with increasing tilting angle [1, 2, 3, 4, 5, 6, 7, 8]. The origin of this suppression cannot be explained using the previously reported continuum models, as shown in figure 6. By contrast, the suppression in the small-\(\theta\) region obtained in the present model suggests that the folding of the FS and the defect formation successfully reproduce the experimental trends. However, the recovery of \(J_{max}\) with further increasing \(\theta\) does not coincide with the experimental trends. This observation indicates that additional effects, such as diffusive scattering near the interface, occur in actual experiments in the large-\(\theta\) region. We believe the present model is important because it provides a guide for experiments, showing that a mismatch at GBs strongly influences the Josephson current properties even when the mismatch angle is slight.
## 4 Summary
In this paper, we calculated the Josephson current on the symmetric [001]-tilt GB of \(d\)-wave superconductor Josephson junctions with (\(m10\))- and (\(\bar{m}10\))-oriented surfaces
on the lattice model. By changing the tilting angle, we found a wide variety of CPRs, including \(0\)-, \(\pi\)-, and \(\phi\)-junctions. In addition to the suppression of the maximum Josephson current associated with the internal phase change of the pair potential, as observed in the continuum models, we found that further phase interference occurs because of the folding of the FS. In low-angle regions, the maximum Josephson current obtained in the present lattice model is smaller than that in preexisting theories based on the continuum model. Because similar suppression of the critical current (corresponding to the maximum Josephson current) at GBs has been reported in several experimental studies, the obtained results can serve as a guide to clarify the complicated transport mechanism in GBs.
The roughness of the surface/interface is known to strongly influence the charge transport behavior in \(d\)-wave superconductor junctions [34, 35, 36]. Diffusive scattering near the interface destructively influences the contribution from ZEABSs because of the hidden odd-frequency odd-parity spin-singlet pairing [37, 38, 39, 40, 41]. Clarifying how diffusive scattering affects the grain-angle dependence of the critical current predicted in the present paper, which arises from the phase interference due to the folding of the BZ, would be an interesting topic for a future study.
In the present work, we have focused on the GB effect in \(d\)-wave superconductors. Studies of iron-based superconductors, in which \(s_{\pm}\) pairing is a promising symmetry, have also shown interesting results [42, 43]. Because there are several theoretical works related to surface Andreev bound states, quasiparticle tunneling, and Josephson effects in \(s_{\pm}\)-wave superconductors [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55], it would be interesting to calculate the Josephson current in GB \(s_{\pm}\) superconductor junctions.
This work was supported by Scientific Research (A) (KAKENHI Grant No. JP20H00131), Scientific Research (B) (KAKENHI Grant No. JP20H01857), and Scientific Research (Early-Career Scientists) (KAKENHI Grant No. 21K13854).
## Appendix A Recursive Green's function method for (\(m10\)) surface
In this appendix, we derive the Green's function for the \((m10)\) GB junction. To calculate this, we first calculate the surface Green's functions of the \((m10)\) and \((\bar{m}10)\) surfaces. For that purpose, we use the recursive Green's function method. As given in the main text, the matrix elements for the \((m10)\) surface are given by
\[\langle p,q|\mathcal{H}|p+1,q\rangle = \begin{pmatrix}0&0&0&\cdots&0\\ t_{x}&0&0&\cdots&0\\ 0&t_{x}&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&0\\ 0&0&0&t_{x}&0\end{pmatrix},\] (A.1) \[\langle p,q|\mathcal{H}|p,q\rangle = \begin{pmatrix}h_{0}&t_{y}&0&\cdots&0\\ t_{y}^{\dagger}&h_{0}&t_{y}&\cdots&0\\ 0&t_{y}^{\dagger}&h_{0}&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&t_{y}\\ 0&0&0&t_{y}^{\dagger}&h_{0}\end{pmatrix},\] (A.2) \[\langle p,q|\mathcal{H}|p,q+1\rangle = \begin{pmatrix}0&\cdots&0&t_{x}\\ 0&\ddots&0&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&0&0\end{pmatrix},\] (A.3) \[\langle p,q|\mathcal{H}|p+1,q-1\rangle = \begin{pmatrix}0&\cdots&\cdots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&\vdots\\ t_{y}&0&\cdots&0\end{pmatrix},\] (A.4)
where
\[h_{0} = \begin{pmatrix}-\mu&0\\ 0&\mu\end{pmatrix},\] (A.5) \[t_{x} = \begin{pmatrix}-t&\Delta(p)\\ \Delta^{*}(p)&t\end{pmatrix},\] (A.6) \[t_{y} = \begin{pmatrix}-t&-\Delta(p)\\ -\Delta^{*}(p)&t\end{pmatrix}.\] (A.7)
By the Fourier transformation in \(q\), we can diagonalize \(\mathcal{H}\) in \(q\),
\[\langle p,k_{y}|\mathcal{H}|p+1,k_{y}\rangle = \left(\begin{array}{ccccc}0&0&0&\cdots&0\\ t_{x}&0&0&\cdots&0\\ 0&t_{x}&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&0\\ t_{y}e^{iky}&0&0&t_{x}&0\end{array}\right) \tag{14}\] \[\equiv T_{x}(k_{y}),\] \[\langle p,k_{y}|\mathcal{H}|p,k_{y}\rangle = \left(\begin{array}{ccccc}h_{0}&t_{y}&0&\cdots&t_{x}e^{-ik_{y}} \\ t_{y}^{\dagger}&h_{0}&t_{y}&\cdots&0\\ 0&t_{y}^{\dagger}&h_{0}&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&t_{y}\\ t_{x}^{\dagger}e^{ik_{y}}&0&0&t_{y}^{\dagger}&h_{0}\end{array}\right)\] (15) \[\equiv H_{0}(k_{y}),\]
where \(T_{x}\) and \(H_{0}\) are \(2(m+1)\times 2(m+1)\) matrices which include sublattice and particle-hole degree of freedom.
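For concreteness, a sketch that assembles \(H_{0}(k_{y})\) and \(T_{x}(k_{y})\) of Eqs. (14)-(15) for one superconducting lead is shown below; the complex entry `delta` is \(\Delta(p)\) of Eq. (10) and carries the phase \(e^{i\phi_{R(L)}}\):

```python
import numpy as np

def lead_blocks(m, ky, t, mu, delta):
    """Build the 2(m+1) x 2(m+1) blocks H_0(k_y) and T_x(k_y)."""
    h0 = np.array([[-mu, 0], [0, mu]], dtype=complex)
    tx = np.array([[-t, delta], [np.conj(delta), t]], dtype=complex)
    ty = np.array([[-t, -delta], [-np.conj(delta), t]], dtype=complex)
    n = m + 1                                   # sublattices o = 0, ..., m
    H0 = np.zeros((2 * n, 2 * n), dtype=complex)
    Tx = np.zeros((2 * n, 2 * n), dtype=complex)
    for o in range(n):                          # on-site blocks
        H0[2*o:2*o+2, 2*o:2*o+2] = h0
    for o in range(n - 1):                      # intra-cell bonds
        H0[2*o:2*o+2, 2*(o+1):2*(o+1)+2] = ty
        H0[2*(o+1):2*(o+1)+2, 2*o:2*o+2] = ty.conj().T
        Tx[2*(o+1):2*(o+1)+2, 2*o:2*o+2] = tx
    # Bonds wrapping around the unit cell pick up the Bloch phase
    # e^{+/- i k_y}; += handles the m = 1 case, where these blocks
    # overlap the intra-cell ones.
    H0[0:2, 2*(n-1):2*n] += tx * np.exp(-1j * ky)
    H0[2*(n-1):2*n, 0:2] += tx.conj().T * np.exp(1j * ky)
    Tx[2*(n-1):2*n, 0:2] += ty * np.exp(1j * ky)
    return H0, Tx
```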
Here, we suppose that the surface Green's function at \(p=p_{0}+1\) in the \(n\)-layer system from \(p=p_{0}+1\) to \(p=p_{0}+n\) is known. Then, the surface Green's function at \(p=p_{0}\) in the \((n+1)\)-layer system from \(p=p_{0}\) to \(p=p_{0}+n\) is given by
\[G_{s}^{(n+1)}(i\epsilon_{n},k_{y})=\left(i\epsilon_{n}I-H_{0}-T_{x}G_{s}^{(n)} (i\epsilon_{n},k_{y})T_{x}^{\dagger}\right)^{-1}, \tag{16}\]
where \(G_{s}^{(n+1)}(i\epsilon_{n},k_{y})\) stands for the surface Green's function with Matsubara frequency \(\epsilon_{n}\) at \(p=p_{0}\) (\(p=p_{0}+1\)) in the \((n+1)\)-layer (\(n\)-layer) system. Since the Green's function of the 1-layer system is trivially \(G_{s}^{(1)}(i\epsilon_{n},k_{y})=(i\epsilon_{n}I-H_{0})^{-1}\), the surface Green's function of a system with any number of layers can be calculated from the recursion relation (16). Under repeated iteration, the surface Green's function converges, \(G_{s}^{(n+1)}(i\epsilon_{n},k_{y})=G_{s}^{(n)}(i\epsilon_{n},k_{y})\); to reach convergence, however, the system must be much larger than the coherence length. Instead, we can express the recursion in terms of a Möbius transformation. For this, \(T_{x}\) must be invertible; however, as seen from (14), \(T_{x}\) is a lower triangular matrix without any diagonal components and therefore has no inverse. Thus, we express (16) in another form. The matrix form of the total Hamiltonian is given by
\[H=\ \left(\begin{array}{ccccc}\ddots&\ddots&\ddots&&&\\ \ddots&H_{0}&T_{x}&0&\\ \ddots&T_{x}^{\dagger}&H_{0}&T_{x}&\ddots\\ &0&T_{x}^{\dagger}&H_{0}&\ddots\\ &&&\ddots&\ddots&\ddots\end{array}\right), \tag{17}\]
where \(H_{0}\) and \(T_{x}\) are \(2(m+1)\times 2(m+1)\) matrices. Consider the \(m\) contiguous layers of this system. Then, we can express this Hamiltonian by the \(2m\times 2m\) matrices as shown in figure 1.
By using these matrices, we make the recursion relation between surface Green's functions for the \(n\)-layer system and \((n+m)\)-layer system,
\[\check{G}_{s}^{n+m}=X_{\bullet}\check{G}_{s}^{n}, \tag{12}\]
where \(\check{G}_{s}\) is the submatrix of \(G_{s}\) containing the sublattices \(o=1\) to \(m\); \(\check{G}_{s}\) is therefore a \(2m\times 2m\) matrix. Here, \({}_{\bullet}\) denotes the Möbius transformation defined by
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}_{\bullet}z\equiv(az+b)(cz+d)^{-1}. \tag{13}\]
\(X\) is given by
\[X=\prod_{i=1}^{m+1}X_{i}, \tag{14}\] \[X_{i}= \begin{pmatrix}0&\tilde{T}_{m+i,i}^{-1}\\ \tilde{T}_{m+i,i}^{\dagger}&(i\epsilon_{n}-\tilde{H}_{i})\tilde{T}_{m+i,i}^{-1 }\end{pmatrix} \tag{15}\]
where the subscripts of \(\tilde{H}\) and \(\tilde{T}\) are integers modulo \(m+1\). These equations rewrite the process of adding \(m\) layers containing \(m+1\) sublattices each into a process of adding \(m\) sublattices \(m+1\) times. Since the recursion relation can be expressed in terms of a Möbius transformation, as in (12), the problem of calculating the surface Green's function of the semi-infinite system reduces to the eigenvalue problem of the matrix \(X\).
\(X\) is diagonalized as
\[X=Q\begin{pmatrix}\lambda_{1}&&&O\\ &\lambda_{2}&&\\ &&\ddots&\\ O&&&\lambda_{2m}\end{pmatrix}Q^{-1} \tag{16}\]
where \(\lambda_{1},\lambda_{2},\ldots,\lambda_{2m}\) are the eigenvalues of the matrix \(X\), ordered as

\[|\lambda_{1}|<|\lambda_{2}|<\ldots<|\lambda_{2m}|. \tag{17}\]

The fixed point of the Möbius transformation, i.e., the surface Green's function of the semi-infinite system, is then obtained as

\[\tilde{G}_{s}^{\infty}=Q_{\bullet}I. \tag{18}\]
We can calculate \(G_{s}^{\infty}\) by adding a single layer from \(\tilde{G}_{s}^{\infty}\).
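As a cross-check of the Möbius-transformation result, one can also iterate the recursion \(G_{s}\leftarrow(i\epsilon_{n}I-H_{0}-T_{x}G_{s}T_{x}^{\dagger})^{-1}\) directly; a brute-force sketch (slow to converge when \(\epsilon_{n}\) is small):

```python
import numpy as np

def surface_green_bruteforce(eps_n, H0, Tx, max_iter=100000, tol=1e-10):
    """Fixed-point iteration of the surface Green's function recursion,
    seeded with the 1-layer Green's function."""
    I = np.eye(H0.shape[0])
    Gs = np.linalg.inv(1j * eps_n * I - H0)
    for _ in range(max_iter):
        Gn = np.linalg.inv(1j * eps_n * I - H0 - Tx @ Gs @ Tx.conj().T)
        if np.linalg.norm(Gn - Gs) < tol:
            return Gn
        Gs = Gn
    return Gs
```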
|